Hermes Agent + GLM: Use Z.AI / GLM Models with Hermes
Z.AI's GLM models bring SWE-Bench-leading coding performance to Hermes Agent. GLM-5.1 scored 74.2 on SWE-Bench Verified, one of the highest published scores for software engineering tasks among models available via API.
What Is Z.AI GLM?
Z.AI (formerly Zhipu AI) is a Chinese AI lab that develops the GLM (General Language Model) family. Their models have consistently ranked among the top performers on software engineering benchmarks. GLM-5.1 achieved a SWE-Bench Verified score of 74.2, placing it ahead of many Western frontier models on coding tasks. The GLM family also performs strongly in Chinese language, multilingual reasoning, and long-context document tasks.
Hermes Agent supports GLM through its built-in Z.AI provider, which accepts either GLM_API_KEY or ZAI_API_KEY and routes to https://api.z.ai/api/paas/v4.
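That fallback between the two variable names can be sketched in shell. Note this is a hypothetical sketch: the guide only states that both names are accepted, and the precedence order (GLM_API_KEY first) is an assumption.

```shell
# Hypothetical sketch of how a client could resolve the Z.AI key.
# Assumes GLM_API_KEY wins when both variables are set.
resolve_zai_key() {
  echo "${GLM_API_KEY:-${ZAI_API_KEY:-}}"
}

unset GLM_API_KEY
export ZAI_API_KEY="zai-example-key"
resolve_zai_key   # falls back to ZAI_API_KEY
```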
Available GLM Models
| Model ID | Best For | Notable Score |
|---|---|---|
| z-ai/glm-5.1 | Coding, SWE tasks, complex reasoning | SWE-Bench 74.2 |
| z-ai/glm-4.7 | General tasks, Chinese language, cost efficiency | — |
For agent-heavy coding workloads, GLM-5.1 is the strongest pick in the GLM lineup. Its SWE-Bench score of 74.2 means it resolves software issues from real-world GitHub repos at a rate that exceeds that of most competing models. If you primarily use Hermes for code review, debugging, or agentic development tasks, GLM-5.1 is worth comparing directly against Claude Sonnet and GPT-5.5.
Option 1: Hermes Agent on OpenClaw Launch (Easiest)
GLM-5.1 is available in the OpenClaw Launch model picker. No API key setup needed.
- Go to openclawlaunch.com/hermes-hosting and start a Hermes deploy.
- Select GLM-5.1 from the model dropdown.
- Connect your channel and click Deploy. Your GLM-powered Hermes Agent is live in roughly 10 seconds.
Option 2: Z.AI API Direct (Self-Hosted)
Hermes reads either GLM_API_KEY or ZAI_API_KEY (both are accepted) and routes to the Z.AI API:
# Set either GLM_API_KEY or ZAI_API_KEY
export GLM_API_KEY=your-key-here
# or: export ZAI_API_KEY=your-key-here
hermes inference set z-ai
hermes model set glm-5.1
# config.yaml equivalent:
# inference:
#   provider: z-ai
# model:
#   default: glm-5.1

Get your API key from the Z.AI Open Platform. The API endpoint is https://api.z.ai/api/paas/v4.
Option 3: GLM via OpenRouter (Self-Hosted)
GLM models are also available through OpenRouter if you prefer a single key across all providers:
export OPENROUTER_API_KEY=sk-or-...
hermes inference set openrouter
hermes model set z-ai/glm-5.1

GLM vs DeepSeek for Coding Workloads
| Model | SWE-Bench Verified | Cost (Input) | General Reasoning |
|---|---|---|---|
| GLM-5.1 | 74.2 | Low | Strong |
| DeepSeek V4 Pro | Strong (SWE leaderboard) | ~$0.14/M | Strong |
| Claude Sonnet 4.6 | Competitive | ~$3/M | Excellent |
Both GLM-5.1 and DeepSeek V4 Pro are strong coding models with low pricing compared to Anthropic or OpenAI. If SWE-Bench score is your primary metric, GLM-5.1 holds the higher published number. For general agent workloads beyond coding, both are competitive. Try both via Hermes's /model switch to find what works best for your tasks.
Switching to GLM at Runtime
/model z-ai/glm-5.1
/model z-ai/glm-4.7

What's Next?
- Hermes Agent + DeepSeek — Another cost-efficient frontier model with strong coding
- Hermes Agent + OpenRouter — Access GLM, DeepSeek, Claude, and 200+ models with one key
- Hermes Agent + MCP — Extend your GLM agent with MCP tool servers (GitHub, filesystem, etc.)
- Hermes Agent + Claude — Compare GLM with Claude Sonnet for Hermes workloads