Guide

Hermes Agent + GLM: Use Z.AI / GLM Models with Hermes

Z.AI's GLM models bring SWE-Bench-leading coding performance to Hermes Agent. GLM-5.1 scored 74.2 on SWE-Bench Verified, one of the highest published scores for software engineering tasks among any model available via API.

What Is Z.AI GLM?

Z.AI (formerly Zhipu AI) is a Chinese AI lab that develops the GLM (General Language Model) family. Their models have consistently ranked among the top performers on software engineering benchmarks. GLM-5.1 achieved a SWE-Bench Verified score of 74.2, placing it ahead of many Western frontier models on coding tasks. The GLM family also performs strongly in Chinese language, multilingual reasoning, and long-context document tasks.

Hermes Agent supports GLM through its built-in Z.AI provider, which accepts either GLM_API_KEY or ZAI_API_KEY and routes to https://api.z.ai/api/paas/v4.

Available GLM Models

| Model ID | Best For | Notable Score |
|---|---|---|
| z-ai/glm-5.1 | Coding, SWE tasks, complex reasoning | SWE-Bench 74.2 |
| z-ai/glm-4.7 | General tasks, Chinese language, cost efficiency | |

For agent-heavy coding workloads, GLM-5.1 is the strongest pick in the GLM lineup. Its SWE-Bench score of 74.2 means it resolves software issues from real-world GitHub repos at a rate that exceeds most competing models. If you primarily use Hermes for code review, debugging, or agentic development tasks, GLM-5.1 is worth comparing directly against Claude Sonnet and GPT-5.5.

Option 1: Hermes Agent on OpenClaw Launch (Easiest)

GLM-5.1 is available in the OpenClaw Launch model picker. No API key setup needed.

  1. Go to openclawlaunch.com/hermes-hosting and start a Hermes deploy.
  2. Select GLM-5.1 from the model dropdown.
  3. Connect your channel and click Deploy. Your GLM-powered Hermes Agent is live in roughly 10 seconds.
Tip: GLM-5.1's strong coding performance makes it especially useful for Hermes agents that use shell execution, code review, or GitHub MCP tools.

Option 2: Z.AI API Direct (Self-Hosted)

Hermes reads either GLM_API_KEY or ZAI_API_KEY (both are accepted) and routes to the Z.AI API:

# Set either GLM_API_KEY or ZAI_API_KEY
export GLM_API_KEY=your-key-here
# or: export ZAI_API_KEY=your-key-here

hermes inference set z-ai
hermes model set glm-5.1

# config.yaml equivalent:
# inference:
#   provider: z-ai
# model:
#   default: glm-5.1

Get your API key from the Z.AI Open Platform. The API endpoint is https://api.z.ai/api/paas/v4.
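The dual-key behavior can be sketched as a small shell function. Note this is an illustration, not Hermes's actual internals, and the precedence shown (GLM_API_KEY first) is an assumption; the docs only say that both names are accepted:

```shell
# Resolve the Z.AI key the way Hermes might: accept either variable.
# Assumption: GLM_API_KEY wins if both are set.
resolve_zai_key() {
  if [ -n "${GLM_API_KEY:-}" ]; then
    printf '%s\n' "$GLM_API_KEY"
  elif [ -n "${ZAI_API_KEY:-}" ]; then
    printf '%s\n' "$ZAI_API_KEY"
  else
    printf 'error: set GLM_API_KEY or ZAI_API_KEY\n' >&2
    return 1
  fi
}
```

Either export works before launching Hermes; there is no need to set both.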

Option 3: GLM via OpenRouter (Self-Hosted)

GLM models are also available through OpenRouter if you prefer a single key across all providers:

export OPENROUTER_API_KEY=sk-or-...

hermes inference set openrouter
hermes model set z-ai/glm-5.1
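For a persistent setup, the config.yaml equivalent would presumably mirror the z-ai example above; the keys below are assumed from those CLI commands rather than documented for OpenRouter specifically:

```yaml
# config.yaml equivalent (assumed; mirrors the z-ai example)
inference:
  provider: openrouter
model:
  default: z-ai/glm-5.1
```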

GLM vs DeepSeek for Coding Workloads

| Model | SWE-Bench Verified | Cost (Input) | General Reasoning |
|---|---|---|---|
| GLM-5.1 | 74.2 | Low | Strong |
| DeepSeek V4 Pro | Strong (SWE leaderboard) | ~$0.14/M | Strong |
| Claude Sonnet 4.6 | Competitive | ~$3/M | Excellent |

Both GLM-5.1 and DeepSeek V4 Pro are strong coding models with low pricing compared to Anthropic or OpenAI. If SWE-Bench score is your primary metric, GLM-5.1 holds the higher published number. For general agent workloads beyond coding, both are competitive. Try both via Hermes's /model switch to find what works best for your tasks.

Switching to GLM at Runtime

In an active Hermes session, the /model command switches models on the fly, with no redeploy needed:

/model z-ai/glm-5.1
/model z-ai/glm-4.7

What's Next?

Deploy Hermes with GLM

Get a GLM-5.1-powered Hermes Agent running in 10 seconds on OpenClaw Launch.
