AutomataOS


A NixOS-based operating system image and guided installer for SentientWave Automata deployments on on-premises and edge hardware.

Overview

AutomataOS combines:

  • k3s single-node cluster for container orchestration
  • Automata deployment from the pinned v0.2.12-ce Automata source tree
  • Guided installer ISO for bare-metal deployment to SATA/NVMe disks
  • Preinstalled appliance images for QEMU, USB, and lab validation
  • First-boot deployment wiring for admin password, company/community name, timezone, and optional domains

Build Support

Supported build targets:

  • x86_64-linux
  • aarch64-linux

Supported host architectures:

  • x86_64 / amd64
  • aarch64 / arm64

Supported build paths:

  • Native Nix builds for host-native workflows
  • Docker or Podman builds for portable, containerized workflows

Supported output packages:

  • automata-image for the existing VM runner bundle
  • automata-iso for the guided installer ISO
  • automata-qcow-efi for a preinstalled QEMU-ready qcow2 appliance image
  • automata-raw-efi for a preinstalled raw EFI appliance image you can flash to USB storage

make build and ./scripts/build.sh choose a default Linux target from the host CPU architecture:

  • x86_64 / amd64 hosts default to x86_64-linux
  • aarch64 / arm64 hosts default to aarch64-linux

If your host does not have a local Nix setup that can build the selected Linux target, use the Docker or Podman flow instead.
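The default-target selection can be sketched as follows. This is a minimal illustration of the mapping described above, not the exact code in the Makefile or scripts/build.sh:

```shell
# Map a host CPU architecture (as reported by uname -m) to the
# default Linux build target, mirroring the table above.
default_target() {
  case "$1" in
    x86_64|amd64)   echo x86_64-linux ;;
    aarch64|arm64)  echo aarch64-linux ;;
    *)              return 1 ;;
  esac
}

default_target "$(uname -m)"
```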

Project Structure

automataos/
├── flake.nix             # Nix flake entry point
├── flake.lock            # Locked dependencies, including Automata tag pin
├── Containerfile.build   # Containerized builder for Docker and Podman
├── Makefile              # Build, run, and maintenance commands
├── modules/
│   ├── automata.nix      # Core AutomataOS settings + deployment module
│   ├── installer-media.nix # Guided installer ISO configuration
│   └── qemu-guest.nix    # Appliance image guest support and image import
├── scripts/
│   ├── build.sh          # Native build helper with target selection
│   ├── install-automataos.sh # Guided TUI installer entrypoint
│   └── run-vm.sh         # Run a built VM with host CPU settings
└── README.md

Quick Start

Prerequisites

  • Nix with flakes enabled for native builds
  • Docker or Podman for containerized builds
  • QEMU for VM testing
  • 4 GB+ RAM recommended
  • 20 GB+ disk space recommended

Native Builds

Build the host-native target:

nix build

Build an explicit target:

nix build .#packages.x86_64-linux.automata-image
nix build .#packages.aarch64-linux.automata-image
nix build .#packages.x86_64-linux.automata-iso
nix build .#packages.x86_64-linux.automata-qcow-efi
nix build .#packages.aarch64-linux.automata-raw-efi

Use the helper script:

./scripts/build.sh
TARGET_SYSTEM=x86_64-linux ./scripts/build.sh
TARGET_SYSTEM=aarch64-linux ./scripts/build.sh
TARGET_PACKAGE=automata-iso ./scripts/build.sh
TARGET_PACKAGE=automata-qcow-efi ./scripts/build.sh
TARGET_PACKAGE=automata-raw-efi ./scripts/build.sh

Containerized Builds

Docker and Podman run the same flake build inside nixos/nix:latest and map each target to the matching container platform:

  • x86_64-linux -> linux/amd64
  • aarch64-linux -> linux/arm64
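The target-to-platform mapping above can be sketched as a small shell function; this mirrors the documented mapping rather than the Makefile's exact implementation:

```shell
# Map a Nix target system to the container platform flag that the
# Makefile passes to docker/podman --platform.
container_platform() {
  case "$1" in
    x86_64-linux)  echo linux/amd64 ;;
    aarch64-linux) echo linux/arm64 ;;
    *)             return 1 ;;
  esac
}

container_platform x86_64-linux    # prints linux/amd64
```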

make build-docker and make build-podman validate the selected flake output inside the builder image. They do not create a host-side ./result symlink by themselves.

Build with Docker:

make build-docker TARGET_SYSTEM=x86_64-linux
make build-docker TARGET_SYSTEM=aarch64-linux

Build with Podman:

make build-podman TARGET_SYSTEM=x86_64-linux
make build-podman TARGET_SYSTEM=aarch64-linux
make build-podman TARGET_SYSTEM=x86_64-linux TARGET_PACKAGE=automata-iso
make build-podman TARGET_SYSTEM=x86_64-linux TARGET_PACKAGE=automata-qcow-efi
make build-podman TARGET_SYSTEM=aarch64-linux TARGET_PACKAGE=automata-raw-efi

Export a real disk image from the Podman build into dist/:

make export-podman-iso TARGET_SYSTEM=x86_64-linux
make export-podman-iso TARGET_SYSTEM=aarch64-linux
make export-podman-qcow-efi TARGET_SYSTEM=x86_64-linux
make export-podman-qcow-efi TARGET_SYSTEM=aarch64-linux
make export-podman-raw-efi TARGET_SYSTEM=x86_64-linux
make export-podman-raw-efi TARGET_SYSTEM=aarch64-linux

Run the raw container commands directly:

docker build --platform linux/amd64 --build-arg TARGET_SYSTEM=x86_64-linux --build-arg TARGET_PACKAGE=automata-qcow-efi -f Containerfile.build .
docker build --platform linux/arm64 --build-arg TARGET_SYSTEM=aarch64-linux --build-arg TARGET_PACKAGE=automata-raw-efi -f Containerfile.build .

podman build --platform linux/amd64 --build-arg TARGET_SYSTEM=x86_64-linux --build-arg TARGET_PACKAGE=automata-qcow-efi -f Containerfile.build .
podman build --platform linux/arm64 --build-arg TARGET_SYSTEM=aarch64-linux --build-arg TARGET_PACKAGE=automata-raw-efi -f Containerfile.build .

Makefile Commands

make help                          # Show all available commands
make build                         # Build host-native Linux target
make build-x86_64                  # Build x86_64-linux image
make build-aarch64                 # Build aarch64-linux image
make build-iso                     # Build the guided installer ISO
make build-qcow-efi                # Build a preinstalled QEMU-ready qcow2 appliance image
make build-raw-efi                 # Build a preinstalled raw EFI appliance image for USB flashing
make build-docker                  # Build in Docker
make build-podman                  # Build in Podman
make export-podman                 # Copy the selected Podman-built artifact to dist/
make export-podman-iso             # Build/export an ISO image to dist/
make export-podman-qcow-efi        # Build/export a qcow2 image to dist/
make export-podman-raw-efi         # Build/export a raw EFI image to dist/
make dry-run                       # Dry-run the selected build target
make run                           # Run VM with default settings
make run-host                      # Run VM with host CPU (recommended)
make run-test                      # Run VM with temporary disk
make stop                          # Stop running VM
make clean                         # Remove built artifacts
make clean-disk                    # Remove disk images
make format                        # Format Nix files
make show                          # Show flake outputs
make update                        # Update flake inputs
make shell                         # Enter development shell

Useful variables:

TARGET_SYSTEM=x86_64-linux
TARGET_PACKAGE=automata-iso
CONTAINER_RUNTIME=docker
CONTAINER_PLATFORM=linux/amd64
BUILD_CONTAINER_FILE=Containerfile.build
BUILD_IMAGE_TAG=automataos-build:x86_64-linux-automata-iso-v0.1.1
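The example tag above appears to follow an automataos-build:<system>-<package>-<version> pattern. A sketch of composing it by hand (the version value is an assumption here; the authoritative rule lives in the Makefile):

```shell
TARGET_SYSTEM=x86_64-linux
TARGET_PACKAGE=automata-iso
VERSION=v0.1.1   # assumed to track the AutomataOS release version
BUILD_IMAGE_TAG="automataos-build:${TARGET_SYSTEM}-${TARGET_PACKAGE}-${VERSION}"
echo "$BUILD_IMAGE_TAG"   # prints automataos-build:x86_64-linux-automata-iso-v0.1.1
```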

Installer and Defaults

automata-iso now boots into a guided text installer that:

  • suppresses normal boot logs and hands off directly to the installer TUI
  • does not expose a shell, serial login, or alternate VT escape path
  • wipes one selected SATA/NVMe disk
  • creates a UEFI GPT layout with EFI + ext4 root
  • installs SentientWave AutomataOS to the target disk
  • captures timezone, company/community name, admin UI password, Matrix admin password, and optional domains

Installed defaults:

  • local admin user: automata
  • local admin password: automata
  • passwordless sudo for automata
  • admin UI direct-IP fallback: http://<node-ip>:32080
  • Matrix direct-IP NodePort: 32008

If admin or Matrix domains are left blank, the system does not fall back to nip.io.

If the installer is canceled or fails, the ISO stays locked down and offers only Reboot and Power off.

Running the Appliance VM

After a native nix build, or after producing a compatible local result symlink, run the VM with:

nix run .#run-vm
nix run .#run-vm-host
./result/bin/run-nixos-vm

The helper script still works for a host-local result:

./scripts/run-vm.sh
./scripts/run-vm.sh -cpu host -smp 4
./scripts/run-vm.sh -nographic

Use the exported disk images directly:

make run-qemu TARGET_SYSTEM=aarch64-linux
make run-qemu TARGET_SYSTEM=x86_64-linux
make run-qemu-iso TARGET_SYSTEM=aarch64-linux
make run-qemu-iso TARGET_SYSTEM=x86_64-linux
make run-qemu-disk TARGET_SYSTEM=aarch64-linux
make run-qemu-disk TARGET_SYSTEM=x86_64-linux QEMU_IMAGE=/absolute/path/to/custom-image.qcow2
make test-qemu TARGET_SYSTEM=aarch64-linux
make test-qemu TARGET_SYSTEM=x86_64-linux
make test-qemu TARGET_SYSTEM=x86_64-linux QEMU_IMAGE=/absolute/path/to/custom-image.qcow2

The host-QEMU targets auto-select a proven bootable artifact per architecture:

  • aarch64-linux defaults to the exported automata-raw-efi image and uses the host qemu-system-aarch64 binary with EFI firmware.
  • x86_64-linux defaults to the exported automata-iso installer media and uses the host qemu-system-x86_64 binary.
  • Installer ISO boots automatically attach a writable target disk at vm-disk/installer-<target>.qcow2; override it with QEMU_INSTALL_DISK=/absolute/path/to/disk.qcow2 or resize it with QEMU_INSTALL_DISK_SIZE=80G.
  • Installer ISO boots require a graphical QEMU window, so make run-qemu ... QEMU_RUN_PACKAGE=automata-iso defaults to QEMU_HEADLESS=0.
  • Disk boots default to -snapshot so downloaded artifacts stay unchanged while you test them.
  • Set QEMU_IMAGE=/absolute/path/to/image.qcow2 to boot a specific local qcow2 file instead of auto-discovering an exported artifact.
  • Exported .img and .qcow2 files are made owner-writable so UTM and other hypervisors can attach them as writable disks. ISO files remain read-only.

You can still invoke QEMU manually if you prefer:

qemu-system-x86_64 -cdrom dist/x86_64-linux/automata-iso-v0.1.1/*.iso
qemu-system-x86_64 -drive if=pflash,format=raw,readonly=on,file=/opt/homebrew/share/qemu/edk2-x86_64-code.fd -snapshot -drive file=dist/x86_64-linux/automata-qcow-efi-v0.1.1/*.qcow2,if=virtio,format=qcow2
qemu-system-aarch64 -bios /opt/homebrew/share/qemu/edk2-aarch64-code.fd -snapshot -drive file=dist/aarch64-linux/automata-raw-efi-v0.1.1/*.img,if=virtio,format=raw
sudo dd if=dist/x86_64-linux/automata-iso-v0.1.1/*.iso of=/dev/<usb-device> bs=16M status=progress conv=fsync
sudo dd if=dist/aarch64-linux/automata-raw-efi-v0.1.1/*.img of=/dev/<usb-device> bs=16M status=progress conv=fsync

Deployment Flow

  1. System boots and starts k3s.
  2. automata-deploy waits for the cluster to be ready.
  3. The deployment creates the required namespaces and loads any bundled local Automata image.
  4. The Automata Helm chart is deployed from /etc/automata/source/deploy/k8s using the installer-provided company name, passwords, and optional domains.
  5. The admin service is patched to a fixed direct-IP NodePort, and custom admin domains are routed through the built-in Traefik ingress.
  6. automata-import imports /var/lib/automata/automata-local.tar when present.

Accessing Automata

After the system boots and the deployment completes:

http://<node-ip>:32080

If you supplied a custom admin UI domain during install, use http://<your-domain> unless you put external TLS termination in front of the node.

Find the node IP from inside the system:

hostname -I
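A minimal sketch that composes the direct-IP admin URL from the first address hostname -I reports (the 127.0.0.1 fallback is only for illustration on hosts where hostname -I is unavailable):

```shell
# Take the first address reported by hostname -I and build the
# direct-IP admin UI URL on the fixed NodePort 32080.
NODE_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
NODE_IP=${NODE_IP:-127.0.0.1}
echo "http://${NODE_IP}:32080"
```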

Read the configured admin and Matrix credentials, or inspect deployment failures:

cat /var/log/automata-deploy.log

Configuration

For installer-based systems, the saved settings live in /etc/nixos/automataos/generated-settings.nix.

For image builds, edit modules/automata.nix to customize the shared AutomataOS defaults.

Load a custom Automata image into the VM:

scp automata-local.tar root@<vm-ip>:/var/lib/automata/
k3s ctr images import /var/lib/automata/automata-local.tar

Nix Flake Commands

nix flake show
nix flake update
nix develop
nix fmt
nix build .#packages.x86_64-linux.automata-image
nix build .#packages.aarch64-linux.automata-image
nix build .#packages.x86_64-linux.automata-iso
nix build .#packages.x86_64-linux.automata-qcow-efi
nix build .#packages.aarch64-linux.automata-raw-efi

Troubleshooting

Native Build Fails on a Non-Linux Host

Use the containerized builder:

make build-docker TARGET_SYSTEM=aarch64-linux
make build-podman TARGET_SYSTEM=x86_64-linux

VM Won't Boot

ls -la vm-disk/nixos.qcow2
rm -rf vm-disk
./scripts/run-vm.sh

k3s Won't Start

systemctl status k3s
journalctl -u k3s -f

Deployment Fails

journalctl -u automata-deploy -f
helm upgrade --install automata /etc/automata/source/deploy/k8s --namespace automata --create-namespace
