Deploy OpenClaw from your browser — to local containers or Kubernetes.
```
git clone https://github.com/sallyom/openclaw-installer.git
cd openclaw-installer
npm install && npm run build && npm run dev
```

Open http://localhost:3000, pick your deploy target, fill in the form, and click Deploy.
The installer now uses upstream OpenClaw SecretRefs wherever it can.
- Local deploys inject secrets as container environment variables and reference them from `openclaw.json`
- Local Podman deploys can optionally derive those env vars from a guided Podman secret mapping list instead of hand-writing `--secret ...` flags
- Kubernetes and OpenShift deploys store secrets in the installer-managed `openclaw-secrets` Secret, inject them with `secretKeyRef`, and reference them from `openclaw.json`
- OpenAI Codex uses ChatGPT OAuth from the Codex CLI `auth.json` instead of an API key; the installer imports it into the conventional OpenClaw auth profile `openai-codex:default`
- You can still provide explicit SecretRef overrides and optional `secrets.providers` JSON for `env`, `file`, or `exec`-based setups such as Vault
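As an illustration, a SecretRef entry in the generated `openclaw.json` might look like the sketch below. The field names here are assumptions for illustration, not the installer's confirmed schema; only `ANTHROPIC_API_KEY` is taken from this document.

```json
{
  "providers": {
    "anthropic": {
      "apiKey": { "secretRef": "env:ANTHROPIC_API_KEY" }
    }
  }
}
```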
This keeps raw third-party secrets out of generated openclaw.json while staying aligned with upstream OpenClaw secret handling.
For local Podman installs, the recommended path is: create Podman secrets, map them in the installer, and let OpenClaw resolve them through SecretRefs. See docs/podman-secrets.md.
Codex OAuth is handled separately: leave the Codex auth path blank to use ~/.codex/auth.json, or provide the path to a Codex CLI auth.json.
```
./run.sh
```

Useful variants:

```
./run.sh --build
./run.sh --port 8080
./run.sh --runtime docker
./run.sh --plugin @acme/openclaw-installer-aws
./run.sh --plugins @acme/openclaw-installer-aws,@acme/openclaw-installer-gke
```

`run.sh` now prefers `OPENCLAW_INSTALLER_IMAGE`, while still accepting the older `CLAW_INSTALLER_IMAGE`.
| Target | Guide | What it does |
|---|---|---|
| Kubernetes | deploy-kubernetes.md | Creates namespace, PVC, ConfigMaps, Secrets, Service, and Deployment via the Kubernetes API. The Instances tab can start a managed port-forward and open the UI with the gateway token. |
| OpenShift | deploy-openshift.md | Extends Kubernetes with OAuth proxy sidecar, Route, and ServiceAccount. |
| Local (podman / docker) | deploy-local.md | Pulls the image, provisions your agent, starts a container on localhost. Works on macOS and Linux. |
These are installer provider plugins, not OpenClaw runtime plugins.
They extend the installer with additional deployment targets such as OpenShift or other platform-specific deployers. This is not a ClawHub-style plugin marketplace, and the installer does not install arbitrary upstream OpenClaw plugins for you.
This repo supports two plugin paths:
- In-repo installer provider plugins in `provider-plugins/`
- External plugins installed as npm packages and listed in `~/.openclaw/installer/plugins.json`
In-repo installer provider plugins are loaded automatically at startup -- no extra install steps needed.
| Plugin | Directory | Description |
|---|---|---|
| OpenShift | `provider-plugins/openshift/` | OAuth proxy, Routes, and ServiceAccounts for OpenShift clusters. Auto-detected when logged into an OpenShift cluster (`oc login`). |
To deploy on OpenShift, just log in with oc login before starting the installer. The OpenShift option will appear automatically in the deploy form.
Anything under provider-plugins/<name>/src/index.ts is discovered by the server at startup. That is how the OpenShift plugin is activated in this repo.
This is the preferred model for provider-specific deployers that ship with the main repository.
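To make the contract concrete, here is a minimal sketch of what a `provider-plugins/<name>/src/index.ts` entry point could look like. The interface names and method signatures below are placeholder assumptions for illustration, not the repo's actual types; consult the real plugin contract in this repository before writing a plugin.

```typescript
// Hypothetical shapes — the real interfaces live in openclaw-installer.
interface Deployer {
  name: string;
  deploy(config: Record<string, unknown>): Promise<void>;
}

interface InstallerRegistry {
  registerDeployer(deployer: Deployer): void;
}

// The loader discovers this file at startup and calls register().
export function register(registry: InstallerRegistry): void {
  registry.registerDeployer({
    name: "aws",
    async deploy(config) {
      // Platform-specific provisioning would go here.
      console.log("deploying with", config);
    },
  });
}
```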
Third-party installer provider plugins can also be installed as npm packages. The loader discovers:
- unscoped packages named `openclaw-installer-*`
- scoped packages whose package name starts with `openclaw-installer-`

Examples:

```
openclaw-installer-aws
@acme/openclaw-installer-gke
```
You can activate external installer provider plugins by writing ~/.openclaw/installer/plugins.json directly, or by using run.sh:
```
./run.sh --plugin @acme/openclaw-installer-aws
./run.sh --plugins @acme/openclaw-installer-aws,@acme/openclaw-installer-gke
OPENCLAW_INSTALLER_PLUGINS=@acme/openclaw-installer-aws ./run.sh
```

`run.sh` writes the requested package list to `~/.openclaw/installer/plugins.json`, which the server plugin loader then consumes on startup.
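If you write `~/.openclaw/installer/plugins.json` by hand, it might look like the fragment below. The exact schema is an assumption inferred from the `run.sh` flags above; check a file generated by `run.sh` for the authoritative shape.

```json
{
  "plugins": [
    "@acme/openclaw-installer-aws",
    "@acme/openclaw-installer-gke"
  ]
}
```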
These packages must implement the installer plugin register() contract and register deployers with the installer. Pointing this at a random OpenClaw plugin or ClawHub package will not work unless that package was specifically built as an installer provider plugin for openclaw-installer.
For this repo, the clean split is:
- ship first-party installer provider plugins under `provider-plugins/`
- use external npm packages for optional or third-party installer provider plugins
That keeps the installer startup generic. Users start the same installer, and the available deployers come from the loaded plugins.
See ADR 0001 for the plugin system design.
| Provider | Default Model | What you need |
|---|---|---|
| Anthropic | `claude-sonnet-4-6` | `ANTHROPIC_API_KEY` |
| OpenAI | `openai/gpt-5` | `OPENAI_API_KEY` |
| OpenAI Codex | `openai-codex/gpt-5.4` | Codex CLI OAuth at `~/.codex/auth.json` |
| Vertex AI (Gemini) | `google-vertex/gemini-2.5-pro` | GCP service account JSON |
| Self-hosted (vLLM, etc.) | `openai/default` | `MODEL_ENDPOINT` URL |
For Vertex AI, upload your GCP service account JSON file (or provide an absolute path). The installer extracts the project_id automatically.
For OpenAI Codex, run Codex CLI login on the installer host first, then select OpenAI Codex in the installer. The installer imports the OAuth tokens into OpenClaw as openai-codex:default; advanced profile rotation or alternate profile IDs should be managed later with the OpenClaw CLI or agent interaction.
The installer supports OpenClaw's ssh sandbox backend for local and Kubernetes deployments.
For the installer-specific setup, credential handling, and troubleshooting, see SANDBOX.md.
For upstream sandbox concepts and backend behavior, see the OpenClaw sandboxing docs.
Agent Source Directory can now point at a bundled multi-agent demo tree.
Try:

- `demos/openclaw-builder-research-ops`
- `demos/software-qa-mcp`
This demo includes:
- `workspace-main/` for the orchestrator agent
- `workspace-builder/`, `workspace-research/`, and `workspace-ops/`
- `openclaw-agents.json` to register extra named agents and simple per-agent sandbox tool policies
workspace-main/ is applied to the computed main agent workspace for the current deploy.
Other workspace-* directories are copied through as named agent workspaces and can be
registered as additional agents through openclaw-agents.json.
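A hypothetical `openclaw-agents.json` registering one named agent could look like the fragment below. The field names are illustrative assumptions, not the demo's confirmed schema; see the bundled demos for the real file.

```json
{
  "agents": {
    "builder": {
      "workspace": "workspace-builder",
      "sandbox": { "tools": "allowlist" }
    }
  }
}
```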
The software-qa-mcp demo includes:
- `mcp.json` for the Context7 MCP server
- `exec-approvals.json` for a baseline tool approval policy
- `workspace-main/` with a software Q&A agent persona
Environment templates are included too:
- `.env.example` for a generic installer setup
- `demos/openclaw-builder-research-ops/.env.example` for the bundled sandbox demo
The installer supports provisioning MCP servers through the Agent Source Directory. Place an `mcp.json` file in your agent source directory:
```json
{
  "mcpServers": {
    "my-server": {
      "url": "https://mcp.example.com/mcp"
    }
  }
}
```

The installer merges these into the generated `openclaw.json` at deploy time.
For tool approval policies, add an `exec-approvals.json`:
```json
{
  "version": 1,
  "defaults": {
    "security": "allowlist",
    "ask": "on-miss",
    "askFallback": "deny"
  }
}
```

This file is copied directly to `~/.openclaw/exec-approvals.json` in the deployed instance.
See `demos/software-qa-mcp` for a complete example.
After the first deploy, agent files live under `~/.openclaw/workspace-*` on the host. Edit those files locally, then:
- for Local deployments, stop and start the instance
- for Kubernetes/OpenShift deployments, use Re-deploy
The installer treats the host files as the source of truth and pushes them into the running instance.
For Local deployments, the default is an isolated container data volume for /home/node/.openclaw.
That keeps runtime state, config, pairing data, cron state, and plugin state out of the host
~/.openclaw tree while still syncing host workspaces into the instance on start/redeploy.
| Endpoint | Method | Purpose |
|---|---|---|
| `/api/health` | GET | Runtime detection, version, server defaults |
| `/api/deploy` | POST | Start a deployment (streams logs via WebSocket) |
| `/api/configs` | GET | List saved instance configs |
| `/api/instances` | GET | List all discovered instances |
| `/api/instances/:name/start` | POST | Start a stopped instance |
| `/api/instances/:name/stop` | POST | Stop and remove container (volume preserved) |
| `/api/instances/:name/redeploy` | POST | Update agent ConfigMap and restart pod (K8s only) |
| `/api/instances/:name/token` | GET | Get the gateway auth token |
| `/api/instances/:name/open` | POST | Start or reuse a managed K8s port-forward and return a localhost URL |
| `/api/instances/:name/command` | GET | Get the run command |
| `/api/instances/:name/data` | DELETE | Delete the data volume |
| `/ws` | WebSocket | Subscribe to deploy logs |
- Local deployer (podman + docker, macOS + Linux)
- Kubernetes deployer
- Vertex AI support (Google Gemini via GCP SA JSON)
- Instance discovery and lifecycle management
- Agent provisioning with full workspace files
- Custom agent/skill provisioning from host directory
- Deploy config persistence for re-deploy
- One-way host-to-instance workspace sync on Local Start / K8s Re-deploy
- Subagent provisioning
- Cron job provisioning from JOB.md files
- Pull running changes back to local files
- GitOps-backed workspace sync
- Skill import from git repos
- SSH deployer (remote host)