Setup Guide
OpenClaw + Tailscale: Secure Remote Access Without Port Forwarding
Use Tailscale to securely reach your self-hosted OpenClaw from anywhere — connect your home GPU to a remote instance, access the gateway UI privately, and never expose a port to the public internet.
What Is Tailscale?
Tailscale is a mesh VPN built on WireGuard, the modern encrypted tunneling protocol. Unlike traditional VPNs that route all traffic through a central server, Tailscale creates direct peer-to-peer connections between your devices. Each device in your "tailnet" gets a stable private IP (in the 100.x.x.x range) that stays the same regardless of your physical location or network. Setup takes under five minutes and requires no manual firewall rules, no port forwarding, and no public IP.
Why Use Tailscale with OpenClaw?
Self-hosted OpenClaw instances are typically only accessible on the local network. Getting remote access normally requires opening firewall ports, setting up dynamic DNS, or paying for a reverse proxy service. Tailscale eliminates all of that:
- Access OpenClaw from anywhere — Reach your home or office OpenClaw instance from your phone, laptop, or any other Tailscale device. No public exposure needed.
- Connect local Ollama to a remote OpenClaw — Run Ollama on a machine with a powerful GPU at home and point a cloud or remote OpenClaw instance at it via the Tailscale IP. Your GPU does the inference; OpenClaw handles the agent logic.
- No port forwarding — Your router never needs to be touched. Tailscale handles NAT traversal automatically, even through double-NAT or CGNAT connections.
- Encrypted tunnel — All traffic between Tailscale nodes is encrypted with WireGuard. Nobody between your devices can intercept or inspect it.
- Stable IPs across reboots — Device IPs in your tailnet never change, so your OpenClaw config stays valid indefinitely.
Use Cases
1. Access OpenClaw web UI remotely
You run OpenClaw on a home server or NAS. With Tailscale installed on both the server and your laptop, you can open the OpenClaw gateway UI at http://100.x.x.x:18789 from anywhere — coffee shop, hotel, office — as if you were on your home network. No VPN client configuration, no split tunneling headaches.
2. Connect home GPU to cloud OpenClaw
You have a gaming PC with a capable GPU running Ollama at home, and a small VPS or OpenClaw Launch instance in the cloud. Install Tailscale on both, then point your cloud OpenClaw config to your home machine's Tailscale IP as the Ollama API base. You get cloud availability and reliability with local GPU inference — zero API costs.
3. Secure multi-node setups
Running OpenClaw across multiple servers — one for the gateway, one for heavy inference, one as a relay? Tailscale gives each node a stable private address and lets them communicate without exposing any ports to the internet. Define ACLs in the Tailscale admin console to control exactly which nodes can reach which services.
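As a sketch, a policy restricting which nodes can reach which ports might look like the following. The tag names are hypothetical, and a complete policy would also declare them under tagOwners; Tailscale's policy file is HuJSON, so comments are allowed:

```json
{
  "acls": [
    // Only your laptop may reach the OpenClaw gateway port
    {"action": "accept", "src": ["tag:laptop"], "dst": ["tag:openclaw-host:18789"]},
    // Only the gateway node may reach the inference node's Ollama port
    {"action": "accept", "src": ["tag:openclaw-host"], "dst": ["tag:inference:11434"]}
  ]
}
```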
How to Set Up Tailscale
- Create a free Tailscale account — Sign up at tailscale.com. The free plan supports up to 100 devices and 3 users — more than enough for personal use.
- Install Tailscale on your first machine — Download from tailscale.com/download. On Linux (Debian/Ubuntu):
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
On macOS and Windows, download the GUI app and sign in.
- Install Tailscale on your second machine — Repeat the same install on every device you want to connect (home server, VPS, laptop, phone).
- Find your Tailscale IPs — Run tailscale ip on each device, or check the Tailscale admin console to see all connected machines and their stable 100.x.x.x addresses.
- Verify connectivity — From one machine, ping another by its Tailscale IP: ping 100.x.x.x. If it responds, your tailnet is working.
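Condensed, the whole Linux setup is only a handful of commands; 100.x.x.x stands in for whatever address tailscale ip prints on your machine:

```shell
# On each machine: install Tailscale and join your tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# Print this machine's stable tailnet IPv4 address
tailscale ip -4

# From any other tailnet device, confirm the tunnel works
ping -c 3 100.x.x.x   # replace with the address printed above
```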
Connect Local Ollama to Remote OpenClaw via Tailscale
This lets you run Ollama on a machine with a powerful GPU (e.g., your home PC) and use it as the AI backend for an OpenClaw instance running elsewhere (VPS, cloud, another room).
Step 1: Install Tailscale and Ollama on the GPU machine
Install Tailscale as above, then install Ollama and pull a model:
ollama pull llama3.3
ollama serve   # skip if Ollama already runs as a service (the Linux installer sets this up)
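Before involving Tailscale at all, it's worth confirming that Ollama answers locally. Its REST API exposes /api/tags, which lists the models you've pulled:

```shell
# Should return a JSON document whose model list includes llama3.3
curl http://localhost:11434/api/tags
```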
Step 2: Note your GPU machine's Tailscale IP
tailscale ip
# Example output: 100.64.1.42
Step 3: Configure OpenClaw to use the Tailscale IP
On your remote OpenClaw instance, update openclaw.json to point the Ollama provider at your GPU machine's Tailscale IP:
{
  "models": {
    "providers": {
      "ollama": {
        "apiBase": "http://100.64.1.42:11434/v1"
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/llama3.3"
      }
    }
  }
}
Step 4: Restart OpenClaw
Restart your OpenClaw container to pick up the new config. All inference requests will now travel over the encrypted Tailscale tunnel to your home GPU.
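To confirm the tunnel end to end, query the OpenAI-compatible endpoint on the GPU machine's Tailscale IP from the OpenClaw host. If the request times out, the most common cause is Ollama listening only on localhost; on a systemd-based Linux install, a drop-in override (sketched below) is one way to fix that:

```shell
# From the remote OpenClaw host — replace with your GPU machine's Tailscale IP
curl http://100.64.1.42:11434/v1/models

# If the request times out, make Ollama listen on all interfaces
# (run on the GPU machine; systemctl edit opens an editor for a drop-in file
# where you add the two lines shown in the comments):
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama
```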
Note: by default, Ollama listens only on localhost. To accept connections from Tailscale, set the environment variable OLLAMA_HOST=0.0.0.0 before starting Ollama, or edit the systemd service file. Tailscale's WireGuard encryption means this is safe — only tailnet devices can reach it.
Access OpenClaw Gateway UI Through Tailscale
If OpenClaw is running on a machine in your tailnet, you can access its web gateway from any other tailnet device — no public DNS record or open port required.
- Find the Tailscale IP of the machine running OpenClaw: run tailscale ip on that machine.
- From another tailnet device, open your browser to http://100.x.x.x:18789 (replacing with the actual IP and port).
- Enter your gateway token to authenticate. The UI loads privately over the WireGuard tunnel — the port is never exposed to the public internet.
For a friendlier address, you can also use Tailscale's MagicDNS and HTTPS support to reach the gateway at a name like https://my-server.tail1234.ts.net instead of a raw 100.x.x.x IP.
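If you'd rather reach the gateway over HTTPS at the MagicDNS name without a port suffix, recent Tailscale versions can proxy it for you (flag syntax has changed across releases, so check tailscale serve --help on your version):

```shell
# Proxy https://<machine>.<tailnet>.ts.net to the local gateway port
sudo tailscale serve --bg 18789
```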
Security Benefits
Tailscale adds a meaningful security layer to any OpenClaw deployment:
- WireGuard encryption — Every packet between tailnet devices is encrypted with modern cryptography. Eavesdroppers on public Wi-Fi see nothing.
- No open ports — Your OpenClaw gateway port never needs to be exposed on your router or cloud firewall. Attack surface is drastically reduced.
- Device authentication — Only devices you explicitly approve in the Tailscale admin console can join your tailnet. No anonymous access.
- ACL policies — Tailscale's access control lists (ACLs) let you specify exactly which devices can reach which ports. You can isolate OpenClaw so only your laptop can connect, not every device in your tailnet.
- Key expiry and device management — Revoke access for lost or compromised devices instantly from the Tailscale admin console.
Tailscale vs. Other Remote Access Methods
| | Tailscale | Port Forwarding | Cloudflare Tunnel | Ngrok |
|---|---|---|---|---|
| Setup difficulty | Very easy | Medium | Easy | Very easy |
| Open ports needed | None | Yes — public exposure | None | None |
| Traffic encryption | WireGuard (E2E) | Depends on app TLS | TLS (via Cloudflare) | TLS (via Ngrok) |
| Third-party traffic routing | No — direct P2P | No | Yes — via Cloudflare | Yes — via Ngrok |
| Works behind CGNAT | Yes | No | Yes | Yes |
| Cost | Free (up to 100 devices) | Free | Free tier available | Free tier (limited) |
| Latency | Very low (direct P2P) | Very low (direct) | Low (CDN edge) | Medium (relay servers) |
| Best for | Private device networks | Simple static setups | Public HTTP services | Quick temporary tunnels |
For OpenClaw specifically, Tailscale is the best choice when you want private access from your own devices. Cloudflare Tunnel is better if you want to expose the OpenClaw gateway to the public internet (though that requires careful authentication setup). Port forwarding works but exposes your IP and port to scanners. Ngrok is convenient for short-term testing but not suitable for persistent production use.
What's Next
With Tailscale connected, explore these related guides:
- OpenClaw + Ollama — Run local AI models and connect them to OpenClaw for free, private inference.
- Deploy OpenClaw on a VPS — Self-host OpenClaw on a remote server and use Tailscale to access it privately.
- Gateway Token Setup — Secure your OpenClaw gateway with a token before exposing it on any network.
- OpenClaw Docker Setup — Learn the full Docker run command and configuration options for self-hosted deployments.