
feat(renderer): SDF disk cache (closes #22) + npm packaging #50

Merged

proggeramlug merged 6 commits into main from feat/sdf-disk-cache-22 on May 16, 2026
Conversation

@proggeramlug (Contributor) commented on May 1, 2026

Summary

Two threads of work landed on this branch:

SDF disk cache (eadf596, closes #22)

On-disk cache for per-mesh SDFs, so cold restarts skip re-baking.

npm packaging (5 follow-up commits)

Prepare the engine for `npm install @bloomengine/engine`:

  • `7edab87` — rename to `@bloomengine/engine`, add `files:` allowlist, `.npmignore`, `prepack` guard, root `LICENSE`
  • `2dd358c` — fix `repository`/`bugs`/`homepage` URLs to match `Bloom-Engine/engine`
  • `04a3678` — README install instructions; rewrite all documented imports from `bloom` to `@bloomengine/engine`
  • `e3aef89` — wire `publish-npm` job into existing `release.yml` (runs after `github-release`, idempotent skip-if-already-published)
  • `ed90787` — drop `NPM_TOKEN`, use npm trusted publishing via OIDC (provenance attestation automatic)

JoltPhysics sources are vendored into the tarball at publish time (via `prepack` script enforcing submodule init) rather than fetched at install — keeps installs self-contained, reproducible, and free of `postinstall` script risk. Tarball is ~2.0 MB packed / 8.1 MB unpacked.

Test plan

  • `npm pack --dry-run` produces a clean tarball (no `target/`, no Jolt extras, no `.perry-cache/`)
  • `@bloomengine/engine@0.3.1` already published manually (Skelpo's npm scope) — proves the package itself is sound
  • After merge: cut `v0.3.2` to exercise the new CI publish path end-to-end (test gate → GitHub Release → OIDC trusted publish)

Commit `eadf596` (SDF disk cache):

Cold launches re-baked every per-mesh 32³ R32Float SDF from scratch,
even though the same content always produces the same voxel data.
Sponza's first 9 frames spent ~8 bakes/frame on this path; a second
launch spent another 9 frames doing the same work.

This change content-hashes (positions + indices) at GPU upload time,
checks a platform-appropriate cache directory, and `queue.write_texture`s
the cached voxel bytes directly when the file exists — bypassing the
GPU dispatch entirely. Misses fall through to the existing bake; the
renderer encodes a `copy_texture_to_buffer` alongside each dispatch
and persists the readback to disk after the frame's main submit.
The next launch hits and skips the bake.
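
For concreteness, a minimal sketch of that hit path against a wgpu 0.19-era API; `content_hash`, `load_cached_sdf`, the flat path scheme, and the hard-coded 32³ constants are stand-ins rather than the PR's actual code:

```rust
const VOXEL_RES: u32 = 32; // 32³ R32Float, per the issue

// Stand-in content hash over the raw vertex/index bytes; any hash that is
// stable across runs works here.
fn content_hash(positions: &[u8], indices: &[u8]) -> u64 {
    use std::hash::{Hash, Hasher};
    let mut h = std::collections::hash_map::DefaultHasher::new();
    positions.hash(&mut h);
    indices.hash(&mut h);
    h.finish()
}

// Stand-in loader; the real one resolves the platform cache dir and
// validates the header (see below).
fn load_cached_sdf(hash: u64) -> Option<Vec<u8>> {
    std::fs::read(format!("{hash:016x}.sdf")).ok()
}

/// Returns true when cached voxels were uploaded and the bake can be skipped.
fn try_upload_cached(
    queue: &wgpu::Queue,
    sdf_texture: &wgpu::Texture,
    positions: &[u8],
    indices: &[u8],
) -> bool {
    let hash = content_hash(positions, indices);
    let Some(voxels) = load_cached_sdf(hash) else {
        return false; // miss: fall through to the existing bake dispatch
    };
    // Hit: write the cached R32Float bytes straight into the SDF texture,
    // bypassing the GPU dispatch entirely.
    queue.write_texture(
        wgpu::ImageCopyTexture {
            texture: sdf_texture,
            mip_level: 0,
            origin: wgpu::Origin3d::ZERO,
            aspect: wgpu::TextureAspect::All,
        },
        &voxels,
        wgpu::ImageDataLayout {
            offset: 0,
            bytes_per_row: Some(VOXEL_RES * 4), // 4 B per R32Float texel
            rows_per_image: Some(VOXEL_RES),
        },
        wgpu::Extent3d {
            width: VOXEL_RES,
            height: VOXEL_RES,
            depth_or_array_layers: VOXEL_RES,
        },
    );
    true
}
```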

The cache is best-effort throughout: a corrupt entry, missing dir,
or write failure silently re-bakes. wasm32 has no filesystem, so
`load` returns `None` and `store` is compiled out — web builds bake
every launch as before.
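
The gating amounts to a cfg split on the loader; a sketch with the same hypothetical names as above:

```rust
// On wasm32 the loader is a stub, so every call sees a miss and falls
// through to the GPU bake.
#[cfg(not(target_arch = "wasm32"))]
fn load_cached_sdf(hash: u64) -> Option<Vec<u8>> {
    // A missing dir/file or unreadable entry is just a miss: re-bake silently.
    std::fs::read(format!("{hash:016x}.sdf")).ok()
}

#[cfg(target_arch = "wasm32")]
fn load_cached_sdf(_hash: u64) -> Option<Vec<u8>> {
    None // no filesystem on the web: bake every launch, as before
}
```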

Cache layout:
- macOS / iOS / tvOS / watchOS: ~/Library/Caches/bloom/sdf
- Linux / Android:              $XDG_CACHE_HOME/bloom/sdf
- Windows:                      %LOCALAPPDATA%\bloom\cache\sdf
- 16 B header (magic + version + voxel_res) + 128 KB R32Float payload
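
The PR pins only the header size and field names, so a sketch of the file format (magic value and field widths are assumptions) could read:

```rust
const MAGIC: &[u8; 8] = b"BLOOMSDF"; // assumed; only "magic" is stated here
const VERSION: u32 = 1;              // assumed
const HEADER_LEN: usize = 16;
const PAYLOAD_LEN: usize = 32 * 32 * 32 * 4; // 128 KB of R32Float voxels

fn encode_header(voxel_res: u32) -> [u8; HEADER_LEN] {
    let mut h = [0u8; HEADER_LEN];
    h[..8].copy_from_slice(MAGIC);
    h[8..12].copy_from_slice(&VERSION.to_le_bytes());
    h[12..16].copy_from_slice(&voxel_res.to_le_bytes());
    h
}

// Returns the voxel payload, or None for anything malformed (the caller
// treats None as a cache miss and re-bakes).
fn validate(file: &[u8]) -> Option<&[u8]> {
    if file.len() != HEADER_LEN + PAYLOAD_LEN { return None; } // size validation
    if &file[..8] != MAGIC { return None; }                    // bad-magic rejection
    if file[8..12] != VERSION.to_le_bytes() { return None; }
    let res = u32::from_le_bytes(file[12..16].try_into().ok()?);
    if res != 32 { return None; }                              // wrong voxel_res
    Some(&file[HEADER_LEN..])
}
```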

Sponza disk footprint: 68 × 128 KB = 8.7 MB (matches the issue's
budget). Disk reads happen synchronously at upload — a 128 KB read
from local cache is sub-millisecond.

The synchronous device.poll(Wait) on flush blocks for the bake
submission to finish before persisting; this is a cold-launch-only
stall (~9 frames) and the bake itself is the bottleneck on those
frames anyway. Async pipelining is a follow-up if the cold-launch
stall ever shows up in profiles.
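
A sketch of that flush path (wgpu 0.19-era API, where the call is `wgpu::Maintain::Wait`; `store_sdf` and the pending-list shape are hypothetical):

```rust
use std::fs;

// Hypothetical persistence helper; the real code prepends the 16 B header
// and resolves the platform cache dir listed above.
fn store_sdf(hash: u64, voxels: &[u8]) -> std::io::Result<()> {
    fs::write(format!("{hash:016x}.sdf"), voxels)
}

fn flush_pending(device: &wgpu::Device, pending: &[(u64, wgpu::Buffer)]) {
    for (hash, readback) in pending {
        let slice = readback.slice(..);
        slice.map_async(wgpu::MapMode::Read, |_| {});
        // The cold-launch-only stall: wait for the bake submission (and the
        // copy_texture_to_buffer encoded alongside it) to complete.
        device.poll(wgpu::Maintain::Wait);
        let bytes = slice.get_mapped_range().to_vec();
        readback.unmap();
        // Best-effort: a failed write just means the next launch re-bakes.
        let _ = store_sdf(*hash, &bytes);
    }
}
```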

8 new unit tests cover hash stability, change-detection on positions
and indices, count-vs-value distinguishability, store/load round-trip,
miss handling, size validation, and bad-magic rejection. `cargo test`
passes 74/0 (was 66/0) on macOS, and a wasm32 `cargo check` is clean.
Commit `7edab87` (rename + packaging):

Rename from `bloom` to `@bloomengine/engine` (Perry FFI module ref
updated to match) and add a `files:` allowlist so the tarball ships
just the TS sources, Rust crates, shaders/assets, the bloom_jolt shim,
and JoltPhysics/Jolt — vendored at publish time rather than fetched
at install, so installs stay self-contained and reproducible.
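
In `package.json` terms the shape is roughly the following sketch; the allowlisted directory names are assumptions, and only the package name, the MIT license, the `prepack` hook, and the repository slug are stated in this PR:

```json
{
  "name": "@bloomengine/engine",
  "license": "MIT",
  "files": [
    "src",
    "crates",
    "assets",
    "bloom_jolt",
    "JoltPhysics/Jolt"
  ],
  "scripts": {
    "prepack": "./scripts/prepack.sh"
  },
  "repository": "github:Bloom-Engine/engine"
}
```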

`.npmignore` is belt-and-suspenders against `target/`, `pkg/`, `build/`,
and the multi-MB Jolt extras (Samples/Docs/UnitTests/Assets/…).
`scripts/prepack.sh` refuses to publish if the Jolt submodule isn't
initialised — deliberately not auto-initing so we don't silently
publish stale refs. A root MIT LICENSE is added so the legal terms
travel with the package.
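
The guard can be as small as this sketch (submodule path and error message assumed; `git submodule status` prefixes uninitialised submodules with `-`):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Refuse to pack if the Jolt submodule isn't initialised. Deliberately no
# auto-init: that could silently publish a stale ref.
if git submodule status JoltPhysics | grep -q '^-'; then
  echo "error: JoltPhysics submodule not initialised;" \
       "run 'git submodule update --init' first" >&2
  exit 1
fi
```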
Commit `04a3678` (README imports):

Now that the package is published to npm, swap every documented
import from "bloom" to "@bloomengine/engine" so the snippets actually
resolve against an installed package. The README also gains an Install
section up front pointing at npm (plus bun/pnpm/yarn equivalents) and
the toolchain prereqs (Perry + Rust, wasm-pack for web).
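
That is, snippets now read along these lines (the imported symbol is invented for illustration):

```ts
// Before the rename, docs showed:  import { World } from "bloom";
import { World } from "@bloomengine/engine";
```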
Commit `e3aef89` (CI publish job):

Adds a publish-npm job to the existing tag-driven release workflow.
It runs after github-release so a failed publish doesn't leave a
release-but-no-package state, and after the await-tests gate so we
never ship a tag that didn't pass CI.

Idempotent: it re-checks npm before publishing and skips cleanly if the
version already exists (so a workflow_dispatch on an old tag won't
double-publish). It checks out submodules recursively because the
prepack hook refuses to ship without JoltPhysics sources on disk,
and uses `--provenance` for the npm attestation badge.

Requires an NPM_TOKEN repo secret with publish rights on the
@bloomengine scope (superseded by `ed90787` below).
Commit `ed90787` (OIDC trusted publishing):

The package is now configured on npmjs.com with this workflow as a
trusted publisher, so `id-token: write` is sufficient — `npm publish`
exchanges the GitHub OIDC token for a short-lived credential. There is
no long-lived NPM_TOKEN secret to rotate or leak, and provenance
attestation is automatic.
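
Putting the last two commits together, the final job could look roughly like this sketch; step ids, action versions, and the version-check shell are assumptions, since the PR states only the ordering, the idempotency check, recursive submodules, and OIDC publishing:

```yaml
publish-npm:
  needs: [await-tests, github-release]
  runs-on: ubuntu-latest
  permissions:
    id-token: write            # exchanged for a short-lived npm credential
  steps:
    - uses: actions/checkout@v4
      with:
        submodules: recursive  # prepack refuses to run without Jolt sources
    - uses: actions/setup-node@v4
      with:
        registry-url: https://registry.npmjs.org
    - name: Skip if this version is already on npm
      id: check
      run: |
        VERSION="${GITHUB_REF_NAME#v}"
        if npm view "@bloomengine/engine@${VERSION}" version >/dev/null 2>&1; then
          echo "published=true" >> "$GITHUB_OUTPUT"
        fi
    - name: Publish (trusted publishing; provenance is automatic)
      if: steps.check.outputs.published != 'true'
      run: npm publish
```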
proggeramlug changed the title from feat(renderer): on-disk cache for per-mesh SDFs (#22) to feat(renderer): SDF disk cache (closes #22) + npm packaging on May 16, 2026
proggeramlug merged commit 068511e into main on May 16, 2026
8 checks passed
proggeramlug deleted the feat/sdf-disk-cache-22 branch on May 16, 2026 at 09:22

Development

Successfully merging this pull request may close these issues:

Lumen follow-up: disk cache for per-mesh SDFs