
How to Install the Hermes Agent

Hermes is an open-source AI agent runtime that connects large language models to channels like Telegram, Discord, and WhatsApp. This guide covers the two main ways to get it running on your own machine — the official bash installer (recommended) or Docker — plus how to skip all of it with a managed deployment.

Before You Start

Before installing Hermes, make sure your machine meets these baseline requirements:

  • Git — the only required prerequisite for the bash installer. The installer handles Python 3.11, Node.js v22, uv, ripgrep, and ffmpeg automatically (no sudo needed for Python).
  • Linux, macOS, WSL2, or Android (Termux) — the installer supports all of these. Native Windows is not supported; use WSL2 instead. See the Windows guide for WSL2 setup details.
  • ~2 GB of disk space — required for the Docker image, if you take the Docker path. The exact image size varies by release, so plan for at least 2 GB free.
  • RAM — Hermes itself is lightweight, but the language model you point it at may require substantial RAM if running locally (e.g., Ollama or LM Studio). For cloud-hosted models (OpenAI, Anthropic, OpenRouter), 1–2 GB available RAM is typically sufficient.
  • A model provider API key (OpenAI, Anthropic, OpenRouter, or a locally running model endpoint). Hermes will start without one, but it cannot respond to messages.
  • Network access on port 8642 (the OpenAI-compatible API server and gateway, if you enable it) and port 9119 (the dashboard, if you run it separately).

Once those are in place, pick one of the install paths below.

Option A: Install Hermes via the Official Bash Installer

The recommended way to install Hermes is the official one-liner installer. It works on Linux, macOS, WSL2, and Android (Termux), and handles all runtime dependencies automatically — no manual Python or Node.js setup required.

curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash

The installer automatically provisions: Python 3.11 (via uv, no sudo), Node.js v22 (for browser automation and WhatsApp), uv, ripgrep, and ffmpeg. The only prerequisite is Git.

After installation, start the gateway with:

hermes gateway

Configuration lives in ~/.hermes/.env. Set your model provider keys there (e.g., ANTHROPIC_API_KEY, OPENAI_API_KEY) and any platform tokens (e.g., TELEGRAM_BOT_TOKEN) before starting.
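
As a starting point, a minimal ~/.hermes/.env might look like the sketch below. The values are placeholders; set only the keys for the provider and platforms you actually use.

```shell
# ~/.hermes/.env — model provider key (pick one) plus platform tokens
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
# OPENAI_API_KEY=sk-xxxxxxxx
TELEGRAM_BOT_TOKEN=123456789:replace-with-your-bot-token
```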

Option B: Install Hermes via Docker

Docker gives you a self-contained environment, avoids local dependency management, and makes it easy to run Hermes alongside other services.

The image is published to Docker Hub as nousresearch/hermes-agent. The canonical run command is:

docker run -d \
  --name hermes \
  --restart unless-stopped \
  -v ~/.hermes:/opt/data \
  -p 8642:8642 \
  nousresearch/hermes-agent gateway run

The host directory ~/.hermes/ maps to /opt/data inside the container. Place your .env file there before starting. Required environment variables include ANTHROPIC_API_KEY or OPENAI_API_KEY (depending on your provider), plus any platform tokens. You can pass them via -e VAR=value flags or by putting them in ~/.hermes/.env.
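
Before starting the container, it can save a restart cycle to confirm that the variables you expect are actually present in the mounted file. A small sketch (the variable names checked here are just examples):

```shell
# check_env FILE VAR... — report which variables are set in an env file
check_env() {
  local file="$1"; shift
  for var in "$@"; do
    if grep -q "^${var}=" "$file" 2>/dev/null; then
      echo "ok: $var"
    else
      echo "missing: $var"
    fi
  done
}

# typical use before starting the container:
check_env ~/.hermes/.env ANTHROPIC_API_KEY TELEGRAM_BOT_TOKEN
```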

After the container starts, check it is running:

docker ps | grep hermes
docker logs hermes --tail 50

The logs will show which platforms initialized successfully and whether the model provider connection succeeded.

Option C: Run via Docker Compose

If you prefer a Compose file that declares Hermes alongside other services (a database, a reverse proxy, etc.), see our dedicated guide: Hermes Agent Docker Compose Setup. That guide covers the full Compose file, volume mounts, environment variables, and how to wire up an Nginx or Caddy proxy in front of the gateway.
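
As a rough starting point, a minimal Compose service mirroring the docker run command above might look like this sketch; the dedicated guide is the authoritative version:

```yaml
# docker-compose.yml — minimal sketch, see the dedicated Compose guide
services:
  hermes:
    image: nousresearch/hermes-agent
    command: gateway run
    restart: unless-stopped
    ports:
      - "8642:8642"
    volumes:
      - ~/.hermes:/opt/data
```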

Enabling the API Server

Hermes exposes an OpenAI-compatible API server on port 8642. It is enabled via environment variables in ~/.hermes/.env:

API_SERVER_ENABLED=true
API_SERVER_KEY=change-me-local-dev

Then start (or restart) the gateway:

hermes gateway

Additional env vars: API_SERVER_PORT (default 8642), API_SERVER_HOST (default 127.0.0.1), and API_SERVER_CORS_ORIGINS.

Note: config.yaml support for the API server is listed as coming in a future release upstream — for now, env vars are the only supported configuration path. See the upstream docs for the full endpoint list (POST /v1/chat/completions, POST /v1/responses, GET /v1/models, GET /health).

For session continuity across API calls, use the previous_response_id field or the conversation parameter in your requests.
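
As an illustration, a chat completions request might look like the sketch below. It assumes the server accepts API_SERVER_KEY as a standard Bearer token and that "your-model-id" is replaced with a model you have configured; both are common conventions for OpenAI-compatible servers rather than details confirmed here.

```shell
curl -s http://127.0.0.1:8642/v1/chat/completions \
  -H "Authorization: Bearer change-me-local-dev" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "your-model-id",
        "messages": [{"role": "user", "content": "Hello, Hermes"}]
      }'
```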

Running the Dashboard

The Hermes dashboard is a separate command, not part of the gateway process. Run it with:

hermes dashboard

The dashboard runs on port 9119 and binds to 127.0.0.1 only by default. There is no built-in authentication — it relies on the localhost binding to restrict access. Do not expose port 9119 publicly without adding your own auth layer in front of it.
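
If you need to reach the dashboard from another machine without exposing port 9119, an SSH tunnel is a simple option (user@your-server is a placeholder for your own host):

```shell
# forward local port 9119 to the dashboard on the server, then
# browse to http://localhost:9119 on your own machine
ssh -N -L 9119:127.0.0.1:9119 user@your-server
```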

The dashboard requires the hermes-agent[web] extras package. If hermes dashboard errors with a missing module, install it with (the quotes keep shells like zsh from interpreting the brackets):

pip install 'hermes-agent[web]'

Tabs available in the dashboard: Status, Config, API Keys, Sessions, Logs, Analytics, Cron, Skills.

Configuring Your First Agent

Once Hermes is running, the minimum configuration to get a working agent involves three things:

  1. Set your model provider. Hermes supports OpenAI, Anthropic, OpenRouter, Ollama, LM Studio, and several others. Set the relevant API key in ~/.hermes/.env. For OpenRouter, prefix the model ID with the provider name (e.g., openrouter/anthropic/claude-sonnet-4.6).
  2. Enable at least one channel. Hermes can connect to Telegram, Discord, WhatsApp, Slack, and others. Each platform requires its own bot token or API credentials. Add those to ~/.hermes/.env and refer to the upstream docs for the exact variable names per platform.
  3. Start the gateway. Run hermes gateway (bare install) or restart the Docker container after editing your .env. Config changes are not hot-reloaded — a restart is required.
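
Put together, the three steps can be sketched as a short shell session. The variable names are the ones mentioned above; the token values are placeholders.

```shell
# 1 & 2: add a provider key and one channel token to ~/.hermes/.env
mkdir -p ~/.hermes
cat >> ~/.hermes/.env <<'EOF'
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
TELEGRAM_BOT_TOKEN=123456789:replace-me
EOF

# 3: restart so the changes are picked up (no hot reload)
hermes gateway            # bare install
# docker restart hermes   # Docker install
```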

After the gateway restarts, send your bot a message on the channel you enabled. Hermes routes it to your model and replies in the same thread.

Common Install Pitfalls

Docker socket permissions

If you run the Docker command and get a permission denied error on the Docker socket, your user is not in the docker group. Fix it with:

sudo usermod -aG docker $USER
# then log out and back in, or:
newgrp docker

Port 8642 already in use

Port 8642 is the default for the Hermes OpenAI-compatible API server and gateway. If something else is using that port, the container will fail to bind. Either stop the other process or remap the port in your Docker run command with -p <host-port>:8642. Check what is using the port with lsof -i :8642 (macOS/Linux, including inside WSL2) or netstat -ano | findstr :8642 (native Windows PowerShell/CMD).
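
If lsof is not installed, the same check can be scripted with bash's built-in /dev/tcp pseudo-device (a bash-only feature, not available in plain sh):

```shell
# prints "busy" if something accepts connections on the port, else "free"
port_status() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo busy
  else
    echo free
  fi
}

port_status 8642
```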

Missing model provider API key

Hermes will start but fail to respond to messages if the model provider key is missing or invalid. Check the container logs immediately after start:

docker logs hermes --tail 100

Look for authentication errors or connection refused messages pointing at your provider's API endpoint. Double-check the key value and that the correct environment variable name is used for your provider.

Config changes do not take effect

Hermes does not hot-reload .env changes. After any config edit, restart the gateway process or the container:

docker restart hermes

Native Windows is not supported

The official installer does not support native Windows. Use WSL2 instead. See the Windows guide for step-by-step WSL2 setup.

The Easy Way: Use OpenClaw Launch

All of the steps above — pulling the image, writing config, mapping ports, managing restarts, handling Docker socket permissions, and keeping the version pinned — are handled for you by OpenClaw Launch. You pick your model provider, connect your channels through a visual UI, and get a running Hermes instance in about a minute, no terminal required. The image is always pinned to a tested release, updates are applied without downtime, and rollback takes one click.

|                     | Self-Install                                                  | OpenClaw Launch                  |
| ------------------- | ------------------------------------------------------------- | -------------------------------- |
| Install steps       | Bash installer or Docker pull, config, port mapping, firewall | None — fill a form, click Deploy |
| Time to first reply | 30–90 min (first time)                                        | ~1 minute                        |
| Version pin         | Manual — easy to forget                                       | Always pinned to tested release  |
| Updates             | Re-run installer or pull new image                            | One-click, zero downtime         |
| Rollback            | Manually re-pull previous tag                                 | One click                        |
| Channels            | Edit .env + restart                                           | Visual UI, no config files       |
| Billing             | VPS cost + your time                                          | From $3/mo                       |

Frequently Asked Questions

What is the Hermes agent?

Hermes is an open-source AI agent runtime developed by NousResearch. It connects large language models to chat platforms (Telegram, Discord, WhatsApp, Slack, and more), provides an OpenAI-compatible API server on port 8642, supports web search, image generation, shell execution, and a scheduling system — all configurable through environment variables and a web dashboard on port 9119.

Do I need Docker to install Hermes?

No — the official bash one-liner installer is the recommended path on Linux, macOS, WSL2, and Android (Termux). It handles all runtime dependencies automatically. Docker is a good alternative if you prefer containerized environments or need to run Hermes alongside other services. Native Windows is not supported by the installer; use WSL2.

Which model providers does Hermes support?

Hermes supports OpenAI, Anthropic, OpenRouter, Ollama, LM Studio, and several others. The full list of supported providers and the environment variable names for each are in the upstream docs.

Does Hermes hot-reload config changes?

No. Every config change requires restarting the gateway process or container. This applies to both .env changes and any platform token updates — plan for brief downtime when reconfiguring channels or model settings.

What port does Hermes use?

The OpenAI-compatible API server and gateway run on port 8642 by default. The dashboard (hermes dashboard) runs on port 9119, bound to localhost only. Both ports can be remapped via Docker port flags or env vars (API_SERVER_PORT for 8642).

Skip the Install

Get a managed Hermes instance running in about a minute — no Docker, no config files, no port mapping.

Deploy with OpenClaw Launch