OpenClaw + Qwen: Use Qwen 3.5 with OpenClaw
Deploy an AI agent powered by Qwen 3.5, Alibaba's powerful open-source model, locally or in the cloud.
What Is Qwen?
Qwen is Alibaba Cloud's open-source large language model family. Qwen 3.5 is their latest release, offering excellent multilingual capabilities — particularly in Chinese, English, Japanese, and Korean — along with strong reasoning and instruction-following. It's available in multiple sizes, from lightweight 7B to the full 72B, making it suitable for everything from local laptops to production cloud deployments.
Qwen 3.5 Highlights
- Multiple sizes — Available in 7B, 14B, 32B, and 72B parameter variants, so you can match the model to your hardware and performance needs.
- Strong multilingual support — Excels at Chinese, English, Japanese, and Korean. One of the best open-source models for multilingual tasks.
- Fully open-source — Weights are freely available. Run it locally with Ollama, deploy on your own servers, or use it via cloud APIs.
- Available on Ollama and OpenRouter — Use Qwen locally for free with Ollama, or access the full 72B model via OpenRouter with no hardware requirements.
Option 1: Qwen via Ollama (Local)
Run Qwen 3.5 on your own machine using Ollama. This gives you full privacy and zero API costs.
- Install Ollama — Download from ollama.com/download for macOS, Linux, or Windows.
- Pull Qwen 3.5 — Open your terminal and run:
```shell
ollama pull qwen3.5
```

- Configure OpenClaw — Add Ollama as a model provider in your openclaw.json:
"models": {
"providers": {
"ollama": {
"apiBase": "http://localhost:11434/v1"
}
}
},
"agents": {
"defaults": {
"model": {
"primary": "ollama/qwen3.5"
}
}
}http://host.docker.internal:11434/v1 instead of localhost.Option 2: Qwen via OpenRouter (Cloud)
Access the full Qwen 3.5 72B model through OpenRouter — no hardware needed. Just add your OpenRouter API key and set the model:
"models": {
"providers": {
"openrouter": {
"apiKey": "sk-or-..."
}
}
},
"agents": {
"defaults": {
"model": {
"primary": "openrouter/qwen/qwen-3.5-72b"
}
}
}OpenRouter handles all the infrastructure — you pay per token with no minimum commitment. This is the easiest way to use the full 72B model without a powerful GPU.
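Under the hood, this is OpenRouter's OpenAI-compatible chat completions API. As a rough sketch of what a client sends on each turn (the `qwen/qwen-3.5-72b` slug is copied from the config above; verify it against OpenRouter's current model list):

```python
import json

# OpenRouter's OpenAI-compatible chat completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model: str, prompt: str):
    """Return the (url, headers, body) triple for a single chat completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return OPENROUTER_URL, headers, body
```

Because Ollama exposes the same OpenAI-compatible surface, pointing the URL at http://localhost:11434/v1/chat/completions (no API key required) exercises the local setup from Option 1 with the same payload.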
Option 3: Qwen on OpenClaw Launch
The fastest way to get started. On OpenClaw Launch, select Qwen from the model dropdown in the visual configurator, choose your chat platform (Telegram, Discord, or Web), and deploy in 30 seconds. No config files, no command line, no hardware.
- Go to openclawlaunch.com and open the configurator.
- Select Qwen 3.5 from the model dropdown.
- Choose your chat platform and paste your bot token.
- Click Deploy — your Qwen-powered agent is live.
Qwen vs Other Models
Not sure if Qwen is the right choice? Here's how it compares to other popular models supported by OpenClaw:
| Model | Provider | Strengths | Open Source |
|---|---|---|---|
| Claude Sonnet 4.6 | Anthropic | Best writing, nuanced conversation | No |
| GPT-5.2 | OpenAI | Strong all-rounder, wide knowledge | No |
| DeepSeek R1 | DeepSeek | Best value for coding and math | Yes |
| Qwen 3.5 72B | Alibaba | Excellent multilingual, open-source | Yes |
You can switch models anytime without redeploying. See our Models page for a full comparison.
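Because the model is a single config field, switching is a one-line change. For example, assuming the openclaw.json schema shown above (the DeepSeek slug is illustrative), the default agent moves to DeepSeek R1 like so:

```json
"agents": {
  "defaults": {
    "model": {
      "primary": "openrouter/deepseek/deepseek-r1"
    }
  }
}
```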
When to Choose Qwen
Qwen 3.5 is an excellent choice in several scenarios:
- Multilingual tasks — If you need strong Chinese, Japanese, or Korean language support alongside English, Qwen is one of the best options available.
- Chinese language focus — Qwen outperforms most other models on Chinese language understanding, generation, and conversation.
- Open-source preference — If you want full control over your AI model with freely available weights, Qwen 3.5 delivers top-tier performance.
- Local deployment — The smaller Qwen variants (7B, 14B) run well on consumer hardware via Ollama, giving you a capable local AI agent with zero API costs.
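The local-deployment point can be made concrete: Ollama tags size variants as model:size, so, assuming the qwen3.5 model page publishes a 14B tag, a lighter local agent would be configured as:

```json
"agents": {
  "defaults": {
    "model": {
      "primary": "ollama/qwen3.5:14b"
    }
  }
}
```

Pull the matching weights first with `ollama pull qwen3.5:14b`.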
For pure English-language tasks where you want the highest quality, consider Gemini or Claude. For coding and math, DeepSeek R1 is also strong. But for multilingual flexibility and open-source freedom, Qwen 3.5 is hard to beat.