# OpenClaw Memory — How Your AI Agent Remembers
Understanding how memory works in OpenClaw is key to getting the most out of your AI agent. This guide explains conversation history, long-term memory, context windows, and session persistence — plus how to configure memory settings.
## How Does OpenClaw Memory Work?
OpenClaw AI agents have multiple layers of memory that work together to provide context-aware conversations:
- Conversation history — The messages exchanged in the current session. The agent remembers everything said in the current conversation.
- Long-term memory — Persistent notes and facts the agent saves across sessions. The agent can remember your preferences, past decisions, and important context.
- Context window — The maximum amount of text the underlying AI model can process at once. This is determined by the model you choose (e.g., Claude, GPT, Gemini).
## Conversation History
During a conversation, your OpenClaw agent maintains a running history of all messages. This includes:
- Your messages to the agent
- The agent's responses
- Results from skill executions (web searches, file reads, etc.)
- System messages and notifications
The conversation history is sent to the AI model with each new message, so the agent has full context of what was discussed. However, this history is limited by the model's context window.
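Conceptually, the history sent to the model is a growing list of role-tagged messages that gets resent in full with every turn. A minimal sketch (the role names and payload shape are illustrative assumptions, not OpenClaw's actual wire format):

```python
# A running conversation history, rebuilt and resent with every new message.
conversation = []

def record(role, content):
    """Append one message (user turn, agent reply, or skill result)."""
    conversation.append({"role": role, "content": content})

record("user", "What's the weather in Berlin?")
record("tool", "web_search result: 18C, cloudy")  # skill execution result
record("assistant", "It's about 18C and cloudy in Berlin right now.")

# Each new request carries the full history so the model keeps context.
payload = {"messages": conversation}
print(len(payload["messages"]))  # 3
```

This is why long sessions eventually hit the context window: the payload grows with every exchange.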
## Context Window Limits
Each AI model has a maximum context size. When the conversation exceeds this limit, older messages are automatically summarized or dropped:
| Model | Context Window | Approx. Messages |
|---|---|---|
| Claude Sonnet 4.6 | 200K tokens | ~200–500 messages |
| GPT-4o | 128K tokens | ~150–350 messages |
| Gemini 2.5 Pro | 1M tokens | ~1,000+ messages |
| DeepSeek V3 | 64K tokens | ~80–200 messages |
For most conversations, the context window is more than enough. For very long sessions, the agent automatically manages context to keep the most relevant information.
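As a back-of-the-envelope check, you can estimate whether a conversation still fits a given window using the common heuristic of roughly 4 characters per token for English text. The function below is an illustrative sketch under that assumption, not an OpenClaw API:

```python
def fits_context(messages, limit_tokens):
    """Rough check: does the combined history fit the model's window?"""
    est_tokens = sum(len(m) for m in messages) // 4  # ~4 chars per token
    return est_tokens <= limit_tokens

history = ["Hello, agent!"] * 1000   # 1,000 short messages, ~3,250 tokens
print(fits_context(history, 200_000))  # True  (Claude Sonnet-sized window)
print(fits_context(history, 1_000))    # False (tiny window: would be trimmed)
```

Real tokenizers vary by model, so treat this only as a sanity check.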
## Long-Term Memory
Long-term memory lets your OpenClaw agent remember information across separate conversations. This is especially useful for:
- Personal preferences — “I prefer concise answers” or “Always use metric units”
- Project context — “I'm working on a React app called MyProject”
- Facts about you — “I live in Berlin” or “My timezone is CET”
- Past decisions — “We decided to use PostgreSQL for the database”
### How to Use Long-Term Memory
You can explicitly tell your agent to remember something:
- “Remember that I prefer dark mode interfaces”
- “Save this: my API key for Tavily is tvly-xxx”
- “Remember my project uses Next.js 15 and Drizzle ORM”
The agent stores these as persistent notes and automatically includes relevant memories when they are useful in future conversations.
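Under the hood, persistent notes amount to a small key-value store written to disk. The sketch below illustrates the idea; the file name, format, and function names are assumptions, not OpenClaw internals:

```python
import json
from pathlib import Path

MEMORY_FILE = Path("memories.json")  # hypothetical storage location

def save_memory(key, fact):
    """Persist one fact so it survives across sessions and restarts."""
    data = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    data[key] = fact
    MEMORY_FILE.write_text(json.dumps(data, indent=2))

def forget(key):
    """Remove a stored fact, e.g. 'Forget that I live in Berlin'."""
    data = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    data.pop(key, None)
    MEMORY_FILE.write_text(json.dumps(data, indent=2))

save_memory("units", "metric")
save_memory("city", "Berlin")
forget("city")
print(json.loads(MEMORY_FILE.read_text()))  # {'units': 'metric'}
```

Because the store lives on disk rather than in the model's context, it survives restarts, unlike conversation history.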
Memory Management Commands
"What do you remember about me?" # List stored memories
"Forget that I live in Berlin" # Remove a specific memory
"Update my timezone to PST" # Modify an existing memorySession Isolation
OpenClaw supports session isolation to keep conversations separate between different users or channels. This is configured via the `session.dmScope` setting:
- `per-channel-peer` (recommended) — Each user in each channel gets their own isolated conversation. User A's messages on Telegram do not affect User B's conversation.
- `per-channel` — All users in the same channel share one conversation. Useful for group chats where shared context is wanted.
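One way to picture the difference between the two scopes is a session-key function: isolation boils down to whether the user ID is part of the key. The function and key format below are illustrative assumptions, not OpenClaw's implementation:

```python
def session_key(channel, user_id, dm_scope):
    """Derive the key under which a conversation (and its memories) is stored."""
    if dm_scope == "per-channel-peer":
        return f"{channel}:{user_id}"   # each user isolated
    if dm_scope == "per-channel":
        return channel                  # everyone in the channel shares
    raise ValueError(f"unknown dmScope: {dm_scope}")

# User A and User B on Telegram get separate sessions under per-channel-peer:
print(session_key("telegram", "alice", "per-channel-peer"))  # telegram:alice
print(session_key("telegram", "bob", "per-channel-peer"))    # telegram:bob
# ...but share one session under per-channel:
print(session_key("telegram", "alice", "per-channel") ==
      session_key("telegram", "bob", "per-channel"))         # True
```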
```json
{
  "session": {
    "dmScope": "per-channel-peer"
  }
}
```

Session isolation also applies to long-term memory: each user's memories are kept separate from other users' memories.
## Memory Persistence Across Restarts
When your OpenClaw instance restarts (e.g., after a config change or server reboot), the behavior depends on your setup:
- Conversation history — Cleared on restart. The agent starts fresh with no conversation context. This is normal and expected.
- Long-term memory — Persists across restarts. The agent retains all saved memories because they are stored on disk.
- Skill data — Persists across restarts. Installed skills and their data files remain intact.
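For self-hosted Docker deployments, the persistence rules above hinge on mounting the state directory as a volume. A typical invocation might look like this (the image name is an assumption):

```shell
# Bind-mount ~/.openclaw so long-term memories and skill data survive
# container removal; without -v, everything is lost with the container.
docker run -d \
  -v ~/.openclaw:/home/node/.openclaw \
  openclaw/openclaw:latest
```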
If you self-host with Docker, the `~/.openclaw` directory must be bind-mounted as a Docker volume. Without a volume mount, all data (including memories) is lost when the container is removed. Use `-v ~/.openclaw:/home/node/.openclaw` in your `docker run` command.

## Tips for Better Memory Usage
- Be explicit about what to remember — Instead of hoping the agent picks up on important details, directly say “Remember this.”
- Use a model with a large context window — Claude Sonnet (200K) or Gemini 2.5 Pro (1M tokens) handle long conversations much better than smaller models.
- Start new conversations for new topics — Don't overload a single conversation with unrelated topics. Start fresh and let long-term memory carry the persistent context.
- Periodically review saved memories — Ask “What do you remember about me?” to check stored facts and correct any outdated information.
## Memory on OpenClaw Launch
On OpenClaw Launch, memory is fully managed:
- Long-term memory persists automatically — no Docker volumes to configure
- Memory survives instance restarts and redeployments
- Session isolation is pre-configured per user
- Choose from models with large context windows (Claude, GPT, Gemini)
Your agent's memories, conversation history, and skill data are all stored securely and persist across deployments. No manual configuration needed.
## Frequently Asked Questions
### Does OpenClaw remember previous conversations?
OpenClaw has two types of memory. Conversation history is limited to the current session and cleared on restart. Long-term memory persists across sessions — the agent remembers facts you explicitly ask it to save. Long-term memory is stored on disk and survives restarts.
### How do I make OpenClaw remember something?
Simply tell your agent: “Remember that [fact]”. For example: “Remember that I prefer Python over JavaScript” or “Save my project name: MyApp”. The agent stores this in long-term memory and automatically uses it in future conversations.
### Why did my OpenClaw agent forget our conversation?
Conversation history is cleared when the OpenClaw instance restarts. This is normal behavior. Important information should be saved to long-term memory with an explicit “Remember this” command. Long-term memories persist across restarts.
### What is the OpenClaw context window?
The context window is the maximum amount of text the AI model can process at once. It determines how many messages the agent can “see” in a conversation. Claude Sonnet has a 200K token context window (~200–500 messages), while Gemini 2.5 Pro offers 1M tokens (~1,000+ messages).
### Can different users share memories?
By default with `dmScope: "per-channel-peer"`, each user has isolated memory. To share context in a group setting, use `dmScope: "per-channel"`, which gives all users in a channel shared conversation history and memory.
## Related Guides
- OpenClaw AI Agent Guide — Complete guide to what your agent can do
- OpenClaw Manual — Chat commands and daily usage
- OpenClaw ClawHub Guide — Install skills to extend memory capabilities
- What is OpenClaw? — Complete overview of the framework