OpenClaw v2026.4.5: Dream Diary, Video & Music Generation Tools

Source: GitHub

OpenClaw v2026.4.5, released on April 6, 2026, introduces one of the most ambitious features in the project's history: a biologically inspired memory system that dreams to consolidate knowledge, alongside built-in creative media generation tools.

Memory Dreaming System

The experimental dreaming framework implements three cooperative phases modeled after human sleep: light, deep, and REM. During these phases, the agent processes and consolidates its accumulated memories, promoting important short-term memories to long-term storage based on weighted short-term recall.
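The release notes do not publish the consolidation algorithm, but the three-phase idea can be illustrated with a minimal sketch. Everything below (the `Memory` class, the weights, the thresholds, the per-phase behavior) is an assumption for illustration, not OpenClaw's actual implementation:

```python
from dataclasses import dataclass, field


@dataclass
class Memory:
    text: str
    weight: float  # assumed short-term recall weight in [0, 1]


@dataclass
class DreamingAgent:
    short_term: list = field(default_factory=list)
    long_term: list = field(default_factory=list)

    def dream(self, promote_threshold: float = 0.5) -> None:
        # Light phase (assumed): triage, dropping near-forgotten memories.
        self.short_term = [m for m in self.short_term if m.weight > 0.05]

        # Deep phase (assumed): promote heavily weighted short-term
        # memories into long-term storage.
        promoted = [m for m in self.short_term if m.weight >= promote_threshold]
        self.long_term.extend(promoted)

        # REM phase (assumed): decay what remains, so unreinforced
        # memories fade across future dream cycles.
        self.short_term = [Memory(m.text, m.weight * 0.9)
                           for m in self.short_term if m not in promoted]
```

A memory that keeps getting recalled would keep a high weight and eventually be promoted; one that is never touched decays out during light-phase triage.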

Users can access the system via the new /dreaming command and a Dream Diary surface in the Dreams UI. Configurable parameters like recencyHalfLifeDays and maxAgeDays let users tune how quickly memories decay — giving agents a more human-like relationship with their accumulated knowledge.

Video and Music Generation

Agents can now generate media directly with built-in video_generate and music_generate tools. Video generation supports xAI (grok-imagine-video), Runway, and Alibaba providers, with ComfyUI bundled workflows for local processing. Music creation is powered by Google Lyria and MiniMax.
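The tool names `video_generate` and `music_generate` and the provider list come from the release notes, but the request shape is not documented there. The payloads below are hypothetical, shown only to make the tool/provider split concrete; every field name is an assumption:

```python
# Hypothetical tool-call payloads; field names are illustrative guesses,
# not the actual OpenClaw tool schema.
video_request = {
    "tool": "video_generate",
    "provider": "runway",   # or xAI (grok-imagine-video) / Alibaba per the notes
    "prompt": "a paper crane unfolding in slow motion",
}

music_request = {
    "tool": "music_generate",
    "provider": "lyria",    # Google Lyria or MiniMax per the notes
    "prompt": "ambient piano, 60 bpm",
}
```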

Provider Expansions

New bundled providers include Qwen, Fireworks AI, StepFun, and Amazon Bedrock Mantle with automatic request-region injection. GitHub Copilot now routes through the Anthropic Messages API for improved compatibility.
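"Automatic request-region injection" presumably means the provider fills in a region when the caller does not specify one, so requests route to the right Bedrock endpoint without per-call configuration. A minimal sketch of that idea, with all field names assumed:

```python
def inject_region(request: dict, default_region: str = "us-east-1") -> dict:
    """Return a copy of the request with a region filled in if the caller
    omitted one. The "region" key and default are illustrative assumptions."""
    out = dict(request)
    out.setdefault("region", default_region)
    return out
```

An explicitly set region is left untouched, so callers can still pin specific requests to another region.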

Multilingual Control UI

The Control UI now supports 15 languages, including Simplified and Traditional Chinese, Portuguese, German, Spanish, Japanese, Korean, French, Turkish, Indonesian, Polish, and Ukrainian, reflecting OpenClaw's truly global user base.

Try these features on a managed instance at OpenClaw Launch.

Build with OpenClaw

Deploy your own AI agent in under 10 seconds — no servers, no CLI.

Deploy Now