The onboarding and management server that ships inside every Holo Node.
It is a single Rust binary with zero external dependencies: no Tokio, no Axum, no serde. It serves a browser UI over plain TCP on port 8080 and handles the full lifecycle of a node: first-time setup, SSH key management, AI agent configuration, hardware mode switching, and binary self-updates pulled from this repository's GitHub Releases.
- How it fits into the system
- What it does
- Building locally
- Repository structure
- Shipping a release
- Self-update mechanism
- Switching OpenClaw forks
- Routes reference
- File paths on the node
- Security model
- Adding a new chat channel
- Contributing
```
holo-host/holo-node-iso                holo-host/node-manager
        │                                      │
        │ Butane YAML + build scripts          │ source + release pipeline
        │                                      │
        │ ISO contains node-setup.sh,          │ GitHub Actions builds two
        │ a first-boot shell script            │ musl-static binaries on
        │                                      │ every version tag
        ▼                                      │
┌───────────────────────┐                      ▼
│  Holo Node ISO        │              node-manager-x86_64
│                       │              node-manager-aarch64
│  node-setup.sh ───────┼──────────────────► downloaded at first boot
│  (inlined script)     │
│                       │
│  node-manager         │              After first boot, the binary
│  .service (systemd)   │              checks GitHub Releases hourly
│                       │              and replaces itself in-place
└───────────────────────┘              without needing a new ISO.
```
The binary is not baked into the ISO. Instead, the ISO contains node-setup.sh, a small bash script that runs once on first boot, downloads the appropriate binary from the latest GitHub Release here, and exits. From that point on, the binary self-updates hourly. No new ISO is required to deliver updates to running nodes.
On first boot node-setup.sh (part of the ISO) downloads this binary from the latest GitHub Release and installs it to /usr/local/bin/node-manager. Once installed, node-manager.service starts.
On startup the binary generates a random 12-character password, writes its SHA-256 hash to /etc/node-manager/auth, and displays the password and the node's local IP address on the HDMI-connected screen (/dev/tty1) in large coloured text. The node operator uses that information to open the setup UI in a browser.
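The password generation described above can be sketched in std-only Rust. This is an illustrative sketch, not the actual `main.rs` code: it assumes randomness is read from `/dev/urandom`, and the simple modulo mapping carries a small selection bias that a production implementation might avoid.

```rust
use std::fs::File;
use std::io::Read;

// Charset excluding the ambiguous characters 0, O, 1, l, I
const CHARSET: &[u8] = b"abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ23456789";

/// Generate a random password of `len` characters, seeded from /dev/urandom.
fn generate_password(len: usize) -> std::io::Result<String> {
    let mut bytes = vec![0u8; len];
    File::open("/dev/urandom")?.read_exact(&mut bytes)?;
    Ok(bytes
        .iter()
        .map(|b| CHARSET[*b as usize % CHARSET.len()] as char)
        .collect())
}
```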
A three-step browser UI walks the operator through:
- Node identity & SSH: node name (used as the hostname slug) and an optional SSH public key for the `holo` user
- Hardware mode: initial container mode (EdgeNode or Wind Tunnel)
- Review & initialize: a summary before committing
After the operator submits, the server configures everything, starts the appropriate container service, and redirects the browser to the management panel.
After onboarding, GET / redirects to /manage. The panel (password-protected) lets the operator:
- Add and remove SSH public keys for the `holo` user without physical access
- Switch hardware mode between Standard EdgeNode and Wind Tunnel
- Change the node password
- Trigger an immediate software update check
A background thread wakes every hour, queries the GitHub Releases API for this repository, and compares the latest tag against the compiled-in VERSION constant. If a newer version exists it downloads the architecture-matched binary, atomically replaces the running binary on disk, and calls systemctl restart node-manager.service. The update check can also be triggered manually from the /manage panel.
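The tag-to-version comparison can be sketched as a std-only numeric compare. This is an illustrative helper, not the actual comparison logic in `main.rs`:

```rust
/// Compare two version strings such as "v5.2.0" and "5.1.9" numerically,
/// component by component (lexicographic Vec<u64> comparison).
fn is_newer(latest_tag: &str, current: &str) -> bool {
    let parse = |v: &str| -> Vec<u64> {
        v.trim_start_matches('v')
            .split('.')
            .filter_map(|part| part.parse().ok())
            .collect()
    };
    parse(latest_tag) > parse(current)
}
```

A plain string comparison would wrongly rank "5.10.0" below "5.9.0"; parsing each component to a number avoids that.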
- Rust stable (1.75 or newer)
- For the static musl builds that ship in releases: `musl-tools` (`apt install musl-tools`) and the musl targets added to your toolchain
```sh
# Add musl targets (first time only)
rustup target add x86_64-unknown-linux-musl
rustup target add aarch64-unknown-linux-musl
```

```sh
cargo build
./target/debug/node-manager
# Open http://localhost:8080
```

```sh
# x86_64
cargo build --release --target x86_64-unknown-linux-musl

# aarch64 (requires the aarch64-linux-gnu-gcc cross-compiler)
sudo apt install gcc-aarch64-linux-gnu
CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_LINKER=aarch64-linux-gnu-gcc \
  cargo build --release --target aarch64-unknown-linux-musl
```

The resulting binaries are fully static: no glibc, no external libraries. They run on any FCOS image regardless of which userland packages are present.
```sh
cargo run
# Visit http://localhost:8080
# On first run it will print the generated password to stderr since /dev/tty1
# won't exist on a dev machine.
```

To simulate an already-onboarded node (skip straight to /manage):

```sh
mkdir -p /etc/node-manager
printf 'onboarded=true\nnode_name=test\nhw_mode=STANDARD\nagent_enabled=false\n' \
  > /etc/node-manager/state
cargo run
# GET / will redirect to /manage
```

```
node-manager/
├── src/
│   └── main.rs          ← entire server (single file, std-only)
├── Cargo.toml
├── Cargo.lock
├── .github/
│   └── workflows/
│       └── release.yml  ← builds + publishes binaries on version tag
└── README.md
```
The server is intentionally a single file with no dependencies so it can be audited easily and compiled without a network connection.
Every release publishes two binary assets:
| Asset name | Architecture |
|---|---|
| `node-manager-x86_64` | x86-64 (most hardware) |
| `node-manager-aarch64` | ARM64 (Raspberry Pi, Apple Silicon VMs) |
These asset names are load-bearing. Both the self-update code in find_asset_download_url() and the first-boot node-setup.sh in holo-node-iso search for them by exact name. Do not rename them.
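The architecture-to-asset mapping can be sketched as follows. This is a hypothetical helper, not the actual `find_asset_download_url()`; it assumes the architecture string comes from `uname -m`:

```rust
/// Map a `uname -m` architecture string to the release asset name.
/// Returns None for architectures that have no published binary.
fn asset_name(arch: &str) -> Option<String> {
    match arch {
        "x86_64" | "aarch64" => Some(format!("node-manager-{arch}")),
        _ => None,
    }
}
```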
1. Make your changes to `src/main.rs` (and/or `holo-node.md`).

2. Update the version in two places; they must match exactly:
   - `const VERSION: &str = "5.1.0";` in `src/main.rs`
   - `version = "5.1.0"` in `Cargo.toml`

3. Commit:

   ```sh
   git add src/main.rs Cargo.toml
   git commit -m "release: v5.1.0 - <one-line summary of changes>"
   ```

4. Tag and push:

   ```sh
   git tag v5.1.0
   git push origin main
   git push origin v5.1.0
   ```

5. GitHub Actions (`.github/workflows/release.yml`) picks up the tag, builds both binaries with musl static linking, creates a GitHub Release, and attaches both binary assets automatically. No manual upload is needed.

6. Running nodes pick up the update within 60 minutes. Operators can trigger it immediately from the `/manage` panel's "Software Update" section.
Once a release is published, updates reach nodes in two ways:
- Running nodes: the hourly self-update check downloads the new binary and restarts the service automatically, within 60 minutes of the release being published.
- Freshly provisioned nodes: `node-setup.sh` always downloads the latest release at first boot, so new nodes get the current version immediately, with no ISO rebuild required.
The update logic lives in check_and_apply_update() and spawn_update_checker().
Flow:
- Thread sleeps 90 seconds after startup (lets the server stabilise)
- Queries `https://api.github.com/repos/{UPDATE_REPO}/releases/latest`
- Parses `tag_name`, strips the leading `v`, and compares it to `VERSION`
- If newer: finds the asset named `node-manager-{uname -m}` in the release JSON
- Downloads it to `/usr/local/bin/node-manager-update`
- `chmod +x`, then `fs::rename` (atomic on Linux)
- `systemctl restart node-manager.service`
- Sleeps 1 hour, repeats
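The download-and-replace step can be sketched like this. The names are illustrative, not the actual `main.rs` internals:

```rust
use std::fs;
use std::os::unix::fs::PermissionsExt;

/// Stage the downloaded binary next to the target, mark it executable,
/// then rename it over the target. rename(2) is atomic on Linux as long
/// as both paths are on the same filesystem.
fn install_update(bytes: &[u8], target: &str) -> std::io::Result<()> {
    let staging = format!("{target}-update");
    fs::write(&staging, bytes)?;
    fs::set_permissions(&staging, fs::Permissions::from_mode(0o755))?; // chmod +x
    fs::rename(&staging, target)?; // the running process keeps its old inode
    Ok(())
}
```

Staging on the same filesystem as the target is what makes the final rename atomic; a cross-filesystem copy would not be.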
The UPDATE_REPO environment variable overrides the default (holo-host/node-manager). This is used in staging environments.
| Method | Path | Auth | Description |
|---|---|---|---|
| GET | `/` | none | Onboarding wizard (pre-onboard) or redirect to `/manage` |
| POST | `/submit` | none | Run onboarding; returns JSON |
| GET | `/login` | none | Login page |
| POST | `/login` | none | Authenticate; sets session cookie |
| POST | `/logout` | session | Clear session cookie |
| GET | `/manage` | session | Management panel HTML |
| GET | `/manage/status` | session | JSON node state snapshot |
| POST | `/manage/ssh/add` | session | Add SSH public key |
| POST | `/manage/ssh/remove` | session | Remove SSH key by index |
| POST | `/manage/hardware` | session | Switch STANDARD ↔ WIND_TUNNEL |
| POST | `/manage/password` | session | Change node password |
| POST | `/manage/update` | session | Trigger immediate update check |
Session tokens are stored in-memory and cleared on restart, so operators will need to log in again after an update.
| Path | Contents | Permissions |
|---|---|---|
| `/etc/node-manager/state` | Key-value store of node state (`node_name`, `hw_mode`, `agent_enabled`, `channel`, `provider`, `model`) | 600 |
| `/etc/node-manager/auth` | Password hash: `sha256:<salt>:<hash>` | 600 |
| `/etc/node-manager/provider` | Provider credentials for agent re-enable | 600 |
| `/etc/containers/systemd/edgenode.container` | Podman Quadlet for the EdgeNode container | 644 |
| `/etc/containers/systemd/wind-tunnel.container` | Podman Quadlet for Wind Tunnel | 644 |
| `/home/holo/.ssh/authorized_keys` | SSH public keys for the `holo` user | 600 |
| `/var/lib/edgenode/` | EdgeNode persistent data volume | – |
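The state file is a plain `key=value` store, one entry per line. A minimal reader for that format might look like this (a hypothetical helper, not the actual parser in `main.rs`):

```rust
use std::collections::HashMap;

/// Parse the key=value lines of /etc/node-manager/state into a map.
/// Lines without '=' are ignored.
fn parse_state(contents: &str) -> HashMap<String, String> {
    contents
        .lines()
        .filter_map(|line| line.split_once('='))
        .map(|(k, v)| (k.trim().to_string(), v.trim().to_string()))
        .collect()
}
```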
The server is protected by a single password (the "node password"). On first run a random 12-character password is generated (charset excludes ambiguous characters: 0, O, 1, l, I), hashed as sha256:<8-hex-salt>:<sha256(salt:password)>, and the hash is stored at /etc/node-manager/auth (chmod 600). The cleartext password is never stored on disk by the server; it is only displayed once on the HDMI screen and logged to the systemd journal.
Sessions are 24-hour cookie-based tokens stored in-memory. They are cleared on server restart. All /manage/* routes require an active session; unauthenticated requests are redirected to /login.
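A minimal sketch of such an in-memory store with a 24-hour TTL (illustrative; the real implementation in `main.rs` may differ):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// In-memory session store: token -> creation time. Dropped on restart,
/// which is what clears all sessions after an update.
struct Sessions {
    ttl: Duration,
    tokens: HashMap<String, Instant>,
}

impl Sessions {
    fn new() -> Self {
        Sessions {
            ttl: Duration::from_secs(24 * 60 * 60),
            tokens: HashMap::new(),
        }
    }

    fn insert(&mut self, token: String) {
        self.tokens.insert(token, Instant::now());
    }

    fn is_valid(&self, token: &str) -> bool {
        self.tokens
            .get(token)
            .map_or(false, |created| created.elapsed() < self.ttl)
    }
}
```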
SSH access is provided only for the holo system user. Root login is disabled via /etc/ssh/sshd_config.d/90-holo.conf. Password authentication is disabled; SSH keys only. SSH is intended as a "break glass" access path, not the primary management interface. The /manage panel is the primary interface.
The server binds to 0.0.0.0:8080. It is intended to be reachable only on the local network; the FCOS firewall configuration in holo-node-iso should not expose port 8080 to the internet. The UI has no HTTPS; TLS termination (if desired) should be handled at the network edge.
This repository is intentionally kept simple. Before contributing, please read the design constraints:
- No async runtime. The server uses `std::thread` for concurrency; each connection spawns a thread. This is appropriate for a UI that handles at most a handful of simultaneous requests.
- No third-party crates. `std` only. This keeps the binary small, the build reproducible, and the audit surface minimal.
- Single file. `src/main.rs` contains the entire server. This is a deliberate choice for auditability: an operator should be able to read the entire source in one sitting.
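The thread-per-connection model described above can be sketched in a few lines of std-only Rust. This is a toy handler, not the actual server:

```rust
use std::io::Write;
use std::net::TcpListener;
use std::thread;

/// Accept connections forever, handling each on its own OS thread.
fn serve(listener: TcpListener) {
    for stream in listener.incoming() {
        if let Ok(mut stream) = stream {
            thread::spawn(move || {
                // A real handler would read and route the request;
                // this toy version just answers with a fixed response.
                let _ = stream.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok");
            });
        }
    }
}
```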
Pull requests that introduce dependencies or split the code across multiple files will not be accepted unless there is a very strong reason.
For bug reports or feature requests, open an issue. For security issues, contact security@holo.host directly.