Collects upstream artifacts from public sources on a connected machine, packages them into a single archive, and provides an installer for the other side of the boundary.
The bundler has two jobs — build and install — that run on different machines at different times.
The build machine pulls everything declared in a profile, packages it into a .tar.gz, and uploads it to S3. The destination URL is baked into the bundle before any artifact is pulled, so the archive is self-documenting and verifiable from both sides of the boundary. The install machine downloads the archive, verifies its checksum, and runs install.sh.
Four artifact types — images, RPMs, OCI artifacts, and static tools — are packaged alongside the installer and provenance files.
AL2023 doesn't ship critical container tools like skopeo in its default repositories. The bundler solves this by including static binaries:
tools/skopeo # 38 MB static binary for container operations
The installer detects missing system tools and auto-adds bundled alternatives to PATH. This ensures complete functionality in airgapped environments without requiring custom repos or complex dependencies.
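A minimal sketch of that fallback logic, assuming the bundle is unpacked to the current directory (illustrative only; the real checks live in install.sh):

if ! command -v skopeo >/dev/null 2>&1; then
    # No system skopeo: fall back to the bundled static binary
    chmod +x ./tools/skopeo
    export PATH="$PWD/tools:$PATH"
fi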
Everything in a bundle is declared in a TOML file. We have one: profiles/hncd.toml.
If you need to add or update something — new image version, new RPM, Spectro pack — this is the only file you edit.
[[images]]
image = "hashicorp/packer:1.14.2" # pull source
archive = "packer-1.14.2-base.tar" # filename inside bundle
dest_repo = "packer"              # target repo in the registry

[[rpms]]
url = "https://rpm.releases.hashicorp.com/.../packer-1.14.2-1.x86_64.rpm"
filename = "packer-1.14.2-1.x86_64.rpm"
sha256 = ""                       # fill in for integrity verification (recommended)

[[tools]]
name = "skopeo"
url = "https://github.com/lework/skopeo-binary/releases/download/v1.17.0/skopeo-linux-amd64"
filename = "skopeo"
sha256 = "e31bc3c68909b51112fd91beec651acaddc1ad0d53806aa0e791d0a9ebc4293a"

Static tools are platform-specific binaries bundled for use in environments lacking package repositories.
[[oci_artifacts]]
ref = "ghcr.io/spectrocloud/packs/nginx:1.25.0"
archive = "spectro-pack-nginx-1.25.0.tar"
dest_repo = "spectro/packs/nginx"
kind = "pack"                     # optional label: helm, pack, zarf

OCI artifacts are pulled via skopeo and pushed to the target registry at install time alongside container images — no extra steps.
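Under the hood this is plain skopeo. Roughly the equivalent manual commands for the entries above (a sketch; the archive paths inside the bundle are illustrative, and the real logic lives in build.py and install.sh):

# Build side: pull the image into an archive inside the bundle
skopeo copy docker://hashicorp/packer:1.14.2 docker-archive:packer-1.14.2-base.tar:hashicorp/packer:1.14.2

# Install side: push the archive to dest_repo in the target registry
skopeo copy docker-archive:packer-1.14.2-base.tar docker://"$REGISTRY/packer:1.14.2"

# OCI artifacts travel the same way, just as oci-archive
skopeo copy oci-archive:spectro-pack-nginx-1.25.0.tar docker://"$REGISTRY/spectro/packs/nginx:1.25.0"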
Build machine (connected side):
| Tool | Purpose | Required? |
|---|---|---|
| Python 3.8+ | Runs the bundler | Yes |
| skopeo | Pulls and saves images / OCI artifacts | Strongly recommended |
| docker | Image fallback if skopeo is absent | Fallback only |
| curl | Downloads RPMs | Yes |
| aws cli | S3 upload | Only with --upload |
| terraform | Provider mirroring | Only if using TF providers |
Python < 3.11 also needs:
pip install tomli
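A quick sanity check for the build machine before the first run (a hypothetical helper, not part of the repo):

for t in python3 skopeo curl aws; do
    command -v "$t" >/dev/null 2>&1 || echo "missing: $t"
done
# Python 3.11+ ships tomllib; older versions need the tomli backport
python3 -c 'import sys; print("need: pip install tomli" if sys.version_info < (3, 11) else "tomllib: ok")'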
Target machine (airgapped side):
| Tool | Purpose |
|---|---|
| bash | Runs install.sh |
| skopeo | Loads and pushes images / OCI artifacts |
| docker | Image load only (fallback; requires docker-archive format) |
| sha256sum | Checksum verification |
| dnf | RPM installation |
| aws cli | Registry auth when REGISTRY_AUTH=ecr |
# Check profile syntax — no pulling, no network
python3 build.py validate profiles/hncd.toml
# Build
python3 build.py build profiles/hncd.toml
# Build and upload to S3
python3 build.py build profiles/hncd.toml --upload --bucket my-transfer-bucket

Output lands in ./output/:
output/hncd-20260310-144018.tar.gz # the bundle
output/bundle.env # CI dotenv artifact (see GitLab CI section)
The .sha256 sidecar is created when uploading to S3. bundle.env contains the pre-calculated S3 URLs and checksum:
BUNDLE_NAME=hncd-20260310-144018.tar.gz
BUNDLE_URL=s3://hncd-artifacts/airgap-bundles/hncd/hncd-20260310-144018.tar.gz
BUNDLE_SIDECAR_URL=s3://hncd-artifacts/airgap-bundles/hncd/hncd-20260310-144018.tar.gz.sha256
BUNDLE_SHA256=9c645978303a60226a91ff4e96c383a121d556b8404da8d85f11c1012bbca4f5

bundle.env is also written on --dry-run so you can inspect would-be values without uploading.
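Anything that can read a dotenv file can consume these values. For example, fetching and verifying a bundle by hand on the other side (assumes AWS credentials with read access to the bucket):

set -a; source bundle.env; set +a
aws s3 cp "$BUNDLE_URL" .
echo "$BUNDLE_SHA256  $BUNDLE_NAME" | sha256sum -c -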
tar xzf hncd-20260310-144018.tar.gz
cd hncd-20260310-144018/
cp install.env.example install.env
vi install.env

The two values most environments need:
# ECR — auto-constructs the full registry URL
AWS_ACCOUNT_ID=123456789012
AWS_REGION=us-gov-west-1
# Any other registry
REGISTRY=registry.internal.example.com:5000
REGISTRY_AUTH=none

install.env sits next to install.sh and is read automatically on every run. Fill it in once. CLI flags always override it for one-off runs.
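For the ECR case, the registry URL the installer constructs and the login it performs are roughly equivalent to the following (a sketch of the manual steps, not the script's literal code):

REGISTRY="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
aws ecr get-login-password --region "$AWS_REGION" |
    skopeo login --username AWS --password-stdin "$REGISTRY"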
Do not commit install.env. It's in .gitignore. Use install.env.example as the template.
sudo ./install.sh

The script prints what it detected before doing anything:
Capabilities detected:
skopeo: true
docker (daemon): false
dnf: true
Configuration:
registry: 123456789012.dkr.ecr.us-gov-west-1.amazonaws.com
registry-auth: ecr
region: us-gov-west-1
It then works through each artifact type. If a required tool is absent, it skips that section and prints the manual command to run instead. If something fails, it tells you exactly what to retry.
CLI flags override install.env:
sudo ./install.sh --skip-rpms
sudo ./install.sh --skip-rpms --skip-terraform --skip-oci
sudo ./install.sh --registry registry.other.example.com

If you've received a bundle from Recro and need to validate it in your environment before using it, see the handoff guide:
docs/bundle-install-test-handoff.md
It covers how to transfer the bundle into your S3, trigger the bundle-install-test
CI job with a single variable (BUNDLE_URL), interpret the pre-flight output, and
retrieve the log artifact from failed runs to send back for diagnosis.
The .gitlab-ci.yml automates the full build-transfer-install cycle.
The build job runs on an internet-connected runner, uploads the bundle to S3, and publishes bundle.env as a GitLab dotenv artifact. The install job runs inside the VPC — it reads $BUNDLE_URL and $BUNDLE_SHA256 directly from that artifact, so no URL needs to be copied between stages.
Both install and verify require a manual trigger by default. Set AUTO_INSTALL=true to trigger install automatically after build.
Runner requirements:
| Tag | Location | Needs |
|---|---|---|
| (none) | Anywhere | python3 |
| connected | Internet-routable | python3, skopeo, curl, aws cli |
| airgapped | Inside VPC | bash, skopeo, dnf, aws cli, passwordless sudo |
Passwordless sudo is required on the airgapped runner:
# /etc/sudoers.d/gitlab-runner
gitlab-runner ALL=(ALL) NOPASSWD: ALL
Required CI/CD variables (GitLab → Settings → CI/CD → Variables):
| Variable | Description |
|---|---|
| AWS_ACCESS_KEY_ID | Connected runner credentials |
| AWS_SECRET_ACCESS_KEY | Connected runner credentials |
| AWS_ACCOUNT_ID | 12-digit account ID — used to construct the ECR URL |
| AWS_REGION | e.g. us-gov-west-1 |
| TRANSFER_BUCKET | S3 bucket for bundle transfer |
| TRANSFER_PREFIX | S3 key prefix (default: airgap-bundles/hncd) |
Optional variables:
| Variable | Effect |
|---|---|
| SKIP_IMAGES | true to skip image pull |
| SKIP_RPMS | true to skip RPM download |
| SKIP_OCI | true to skip OCI artifact pull |
| AUTO_INSTALL | true to trigger install automatically after build |
The bundler delivers base images — alpine, packer, terraform, gitlab-runner. These are ingredients, not finished products.
Once they're in the registry, oci-image-builder takes over: it builds STIG-hardened HNCD images on top of those bases using vendored dependencies committed to that repo.
The bundler has no knowledge of what oci-image-builder does. Its job ends when the push completes.
Bundles land at the configured prefix with a checksum sidecar:
airgap-bundles/hncd/
hncd-20260310-144018.tar.gz
hncd-20260310-144018.tar.gz.sha256
The full S3 URL is pre-calculated at build start and written to meta/bundle-source.txt inside the archive. Both sides of the boundary can find it there without needing out-of-band coordination.
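You can read it straight from the archive without unpacking everything, assuming the top-level directory matches the archive name:

tar -xzOf hncd-20260310-144018.tar.gz hncd-20260310-144018/meta/bundle-source.txt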
Configure in the profile [s3] section, or pass flags at build time:
[s3]
bucket = "my-transfer-bucket"
prefix = "airgap-bundles/hncd"
region = "us-gov-west-1"

# Upload an already-built bundle to S3
python3 build.py upload output/hncd-20260306-120000.tar.gz --bucket my-bucket
# List bundles in S3
python3 build.py list --bucket my-bucket


