# OpenClaw + Kimi: Use Kimi K2.5 with OpenClaw
Deploy an AI agent powered by Kimi K2.5 — Moonshot AI's flagship model with exceptional long-context capabilities.
## What Is Kimi?
Kimi is the flagship AI model from Moonshot AI, a leading Chinese AI company. Kimi is known for its extremely long context windows (up to 1M tokens), strong multilingual support — especially Chinese and English — and competitive reasoning capabilities. The latest version, Kimi K2.5, delivers strong performance across document analysis, summarization, and general conversation tasks.
## Kimi K2.5 Highlights
- 1M token context window — process entire books, codebases, or lengthy documents in a single conversation
- Strong multilingual support — excels at both Chinese and English, making it ideal for bilingual users
- Competitive pricing — significantly lower cost per token compared to leading Western models
- Document analysis and summarization — particularly effective at extracting insights from long-form content
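To get a feel for what a 1M-token window means in practice, here is a minimal sketch that estimates whether a document fits. The 4-characters-per-token ratio is a common rule of thumb for English text, not an exact tokenizer count, and the `fits_in_context` helper is ours, not part of any Kimi or OpenClaw API:

```python
# Rough capacity check for Kimi K2.5's advertised 1M-token context window.
# Assumes ~4 characters per token, a common heuristic for English text;
# real token counts depend on the model's tokenizer.
KIMI_CONTEXT_TOKENS = 1_000_000


def fits_in_context(text: str, chars_per_token: float = 4.0) -> bool:
    """Return True if the text's estimated token count fits in the window."""
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= KIMI_CONTEXT_TOKENS


# A ~1M-character document (roughly 250K tokens) fits comfortably.
book = "word " * 200_000
print(fits_in_context(book))
```

At 4 characters per token, 1M tokens is on the order of 4 million characters, which is why whole books and sizable codebases can fit without chunking.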
## How to Use Kimi with OpenClaw Launch
The easiest way to deploy a Kimi-powered AI agent is through OpenClaw Launch. No server setup, no config files — just point and click.
1. Go to openclawlaunch.com and open the configurator.
2. Select Kimi K2.5 from the model dropdown.
3. Pick your chat platform (Telegram, Discord, or Web) and paste your bot token.
4. Click Deploy. Your Kimi-powered agent is live in about 30 seconds.
## How to Use Kimi Self-Hosted
If you're self-hosting OpenClaw, configure the OpenRouter provider and set Kimi K2.5 as your model in `openclaw.json`:

```json
{
  "models": {
    "providers": {
      "openrouter": {
        "apiKey": "sk-or-..."
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "openrouter/moonshotai/kimi-k2.5"
      }
    }
  }
}
```

Kimi K2.5 is available through OpenRouter, which provides unified billing and easy model switching across dozens of providers.
## When to Choose Kimi
Kimi K2.5 is an excellent choice in specific scenarios:
- Long document analysis — the 1M token context window handles entire research papers, legal documents, or codebases without chunking
- Multilingual users — especially strong for Chinese-English bilingual workflows and translation tasks
- Budget-conscious users — competitive token pricing makes it a cost-effective alternative for general-purpose tasks
For the most complex reasoning or advanced coding tasks, models like Claude or GPT may still have an edge. But for long-context work, multilingual conversations, and everyday AI assistance, Kimi K2.5 is a strong contender.
## Kimi vs Other Models
| Model | Best For | Context |
|---|---|---|
| Kimi K2.5 | Long docs, multilingual (Chinese) | 1M tokens |
| Claude Sonnet | Writing, nuanced conversation | 200K tokens |
| GPT-4o | All-rounder, broad knowledge | 128K tokens |
| Gemini 2.5 Pro | Long context, multimodal | 1M tokens |
| DeepSeek R1 | Coding, math, reasoning | 64K tokens |
You can switch models anytime without redeploying. See our Models page for a full comparison.
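In the self-hosted setup, switching is a one-line change to the `primary` key in `openclaw.json`. For example, to try DeepSeek R1 instead (the exact slug is an assumption; check OpenRouter's model catalog for current IDs):

```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "openrouter/deepseek/deepseek-r1"
      }
    }
  }
}
```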