Devcontainer Bridge is a dual-daemon tool that transparently forwards TCP ports and Unix sockets, and relays browser-open requests, between Linux devcontainers and the macOS/Linux host. It fills the gap left by the devcontainer CLI (versus VS Code): container-bound ports are unreachable from the host, host Unix sockets are inaccessible from containers, and browser-opening requests fail in headless containers.
Primary use case: Tools like Atlassian MCP's OAuth flow that bind a random port, open the host browser, and expect the callback on localhost:PORT — all of which fail without this bridge.
Two daemons cooperate via a pair of TCP channels on loopback:
- Host daemon (`dbr host-daemon`) — long-lived process on the host. Binds the control and data ports, accepts registrations from containers, binds per-port listeners for forwarded ports, scans for Unix sockets to forward into containers, and opens URLs in the host browser.
- Container daemon (`dbr container-daemon`) — runs inside each devcontainer. Polls `/proc/net/tcp` for new listeners, sends Forward/Unforward messages, creates mirror Unix sockets for host-side sockets, and handles reverse data connections for proxying.
All TCP connections are initiated container to host (reverse connection model). This is required because macOS Docker Desktop cannot route to container IPs — containers run in a Linux VM with no direct host-to-container networking.
| Channel | Default port | Default bind | Purpose |
|---|---|---|---|
| Control | 19285 | auto-detected | JSON-line protocol for registration, Forward/Unforward, SocketForward/SocketUnforward/SocketConnectRequest, OpenUrl, Ping/Pong, ListRequest/ListResponse |
| Data | 19286 | auto-detected | Reverse data connections from containers for TCP and Unix socket proxying |
Control and data ports use auto-detected bind addresses: `0.0.0.0` if Docker is running (so containers can reach the host via Docker Desktop's gateway IP), `127.0.0.1` otherwise. Configurable with `--bind-addr` or `--no-docker-detect`.
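The selection logic can be sketched in a few lines. This is a minimal std-only sketch, not the real startup code; the function name and parameters are illustrative, assuming the documented precedence (explicit flag, then `--no-docker-detect`, then Docker detection):

```rust
use std::net::{IpAddr, Ipv4Addr};

/// Two-tier bind-address selection sketch: an explicit --bind-addr wins,
/// --no-docker-detect forces loopback, otherwise Docker detection decides.
fn choose_bind_addr(
    explicit: Option<IpAddr>, // from --bind-addr
    no_docker_detect: bool,   // from --no-docker-detect
    docker_running: bool,     // result of Docker detection
) -> IpAddr {
    match explicit {
        Some(addr) => addr, // explicit flag always wins
        None if no_docker_detect => IpAddr::V4(Ipv4Addr::LOCALHOST),
        // Containers reach the host through Docker Desktop's gateway, so the
        // daemon must listen on all interfaces when Docker is running.
        None if docker_running => IpAddr::V4(Ipv4Addr::UNSPECIFIED),
        None => IpAddr::V4(Ipv4Addr::LOCALHOST),
    }
}
```

Note that this only governs the control and data ports; forwarded per-port listeners always stay on loopback regardless of these flags.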
1. Client connects to `host:PORT` (forwarded port listener)
2. Host daemon sends `ConnectRequest{port, conn_id}` via control channel
3. Container daemon connects to `localhost:PORT` inside the container
4. Container daemon opens a NEW TCP connection to the host data port (19286)
5. Container daemon sends `ConnectReady{conn_id}` on the data connection
6. Host daemon matches `conn_id`, bridges client <-> data connection
7. Bidirectional copy via `tokio::io::copy_bidirectional`
8. When either side closes, both connections tear down
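Steps 2–6 hinge on matching the `conn_id` announced on the control channel against a data connection that arrives later. A minimal std-only sketch of that rendezvous follows; the real code (`host/proxy.rs`) uses a map of oneshot senders with a 10s timeout, so the type and method names here are illustrative:

```rust
use std::collections::HashMap;
use std::net::TcpStream;
use std::sync::mpsc;

/// Rendezvous point between a waiting client connection and the container's
/// reverse data connection, keyed by conn_id.
struct Pending {
    waiting: HashMap<u64, mpsc::Sender<TcpStream>>,
}

impl Pending {
    fn new() -> Self {
        Self { waiting: HashMap::new() }
    }

    /// Host side: called when a client hits a forwarded port. The returned
    /// receiver yields the container's data connection once it arrives.
    fn register(&mut self, conn_id: u64) -> mpsc::Receiver<TcpStream> {
        let (tx, rx) = mpsc::channel();
        self.waiting.insert(conn_id, tx);
        rx
    }

    /// Data-listener side: called when ConnectReady{conn_id} arrives on a
    /// fresh data connection. Returns false for unknown/timed-out ids, in
    /// which case the stream is simply dropped.
    fn resolve(&mut self, conn_id: u64, stream: TcpStream) -> bool {
        match self.waiting.remove(&conn_id) {
            Some(tx) => tx.send(stream).is_ok(),
            None => false,
        }
    }
}
```

Once resolved, the host bridges the two streams; anything arriving with an unknown `conn_id` is rejected, which is what makes step 8's teardown safe against stragglers.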
1. Host daemon scanner discovers a Unix socket matching a `watch_paths` glob
2. Host sends `SocketForward{socket_id, host_path, container_path}` to containers
3. Container creates a mirror `UnixListener` at `container_path` (mode 0600)
4. Client in the container connects to the mirror socket
5. Container sends `SocketConnectRequest{socket_id, conn_id}` on the control channel
6. Host connects to the original Unix socket at `host_path`
7. Container opens a reverse TCP connection to the host data port, sends `ConnectReady{conn_id}`
8. Host bridges Unix socket <-> TCP data stream bidirectionally
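Step 3's mirror-socket creation can be sketched with std; the real daemon uses tokio's `UnixListener`, so treat the helper name and error handling here as an assumption-laden sketch of the documented behaviour (bind at `container_path`, mode 0600):

```rust
use std::fs;
use std::os::unix::fs::PermissionsExt;
use std::os::unix::net::UnixListener;
use std::path::Path;

/// Create a mirror Unix socket restricted to the current user (mode 0600).
fn create_mirror_socket(container_path: &Path) -> std::io::Result<UnixListener> {
    // A stale socket file from a previous run would make bind() fail.
    let _ = fs::remove_file(container_path);
    let listener = UnixListener::bind(container_path)?;
    // Tighten permissions after bind so other users cannot connect.
    fs::set_permissions(container_path, fs::Permissions::from_mode(0o600))?;
    Ok(listener)
}
```

Cleanup on `SocketUnforward` is the reverse: close the listener and remove the socket file.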
- Reverse data connections — All connections flow container-to-host because macOS Docker Desktop cannot route to container IPs (containers live in a Linux VM). Same pattern as SSH `-R` reverse port forwarding.
- TCP control channel (not Unix socket) — No Docker volume mount or `devcontainer.json` modification required. Works through `host.docker.internal` DNS.
- Two ports (control + data) — Separates the framed JSON-line protocol from raw TCP byte streams cleanly. Control messages stay parseable; data connections switch to raw bytes after a single handshake line.
- Two-tier binding — Control and data ports use auto-detected bind addresses (`0.0.0.0` if Docker is running, `127.0.0.1` otherwise), configurable via `--bind-addr` or `--no-docker-detect`. Forwarded per-port listeners always bind to loopback (`[::1]`/`127.0.0.1`) only. The control protocol is hardened against untrusted clients (message limits, field validation, resource caps).
- Single binary — `dbr host-daemon` and `dbr container-daemon` are subcommands of the same binary. `dbr-open` is a hardlink for `BROWSER` env var integration. The container daemon is auto-started via the devcontainer feature's entrypoint script.
```
src/
  main.rs              CLI entrypoint, clap dispatch, tracing init, dbr-open hardlink detection
  dbr/entrypoint.sh    Devcontainer feature entrypoint — starts container daemon on container boot
  cli.rs               Clap subcommand definitions (HostDaemon with --bind-addr/--no-docker-detect/--no-auth, ContainerDaemon, Status, Forward, Unforward, Open, Ensure)
  protocol.rs          All JSON-line message types (Register, Forward, ConnectRequest, SocketForward, SocketUnforward, SocketConnectRequest, OpenUrl, Ping/Pong, etc.)
  control.rs           TCP JSON-line framing (read_message/write_message), ControlListener, ControlConnection, connect()
  config.rs            Config struct, TOML file loading (~/.config/dbr/config.toml), env var layering (DCBRIDGE_HOST, DCBRIDGE_HOST_PORT), [socket_forwarding] section
  auth.rs              Token generation (64-char hex), persistence (~/.config/dbr/auth-token, 0600 perms), resolution (CLI flag > env var > file)
  lib.rs               Crate root — re-exports auth, config, container, control, host, protocol modules
  container/
    mod.rs             Container daemon main loop: host resolution, Register (with auth_token), scan/Forward/Unforward cycle, reconnection with exponential backoff, signal handling, parent PID monitoring
    scanner.rs         /proc/net/tcp + /proc/net/tcp6 parser (hex port extraction, LISTEN state filtering, inode-to-process resolution)
    filter.rs          Port filtering: --exclude-ports, --include-ports, --exclude-process regex, forwardPorts from devcontainer.json
    browser.rs         `dbr open` client: validates URL (http/https, 2048 char cap), connects to host, sends OpenUrl, waits for OpenUrlAck
    data.rs            Reverse data connection handler: on ConnectRequest, connects to local port + opens data connection to host + sends ConnectReady + bridges bidirectionally
    socket.rs          Mirror socket creation/cleanup, UnixListener accept loop, SocketConnectRequest dispatch + reverse data bridge for socket connections
  host/
    mod.rs             Host daemon main loop: control listener, data listener, container state management, Forward/Unforward/OpenUrl/SocketForward/SocketUnforward/SocketConnectRequest handling, heartbeat/keepalive (30s Ping, 3 missed Pongs = disconnect), connection draining, multi-container port conflict resolution, exit-on-idle, auth token validation
    listener.rs        Per-port TCP listener: binds [::1] with 127.0.0.1 fallback, accepts client connections, shutdown via watch channel
    proxy.rs           PendingConnections map (conn_id -> oneshot<TcpStream>), register/resolve/cancel pending, bridge_connection with 10s timeout, bidirectional copy
    browser.rs         BrowserOpener: URL validation, localhost port rewriting (container port -> host port), rate limiting (5/sec), open via `open` (macOS) / `xdg-open` (Linux)
    ensure.rs          `dbr ensure` logic: Ping/Pong health check, spawn background daemon if not running, PID file management, port conflict detection with actionable error
    socket_scanner.rs  Glob-based Unix socket discovery (watch_paths patterns), lifecycle tracking (appear/disappear), container path rewriting, symlink_metadata (no symlink following)
tests/
  integration/
    main.rs            Test harness entry
    forwarding.rs      13 integration tests: register/forward/unforward lifecycle, cleanup on disconnect, Ping/Pong, ListRequest/ListResponse, multi-container port conflict, full reverse proxy pipeline, data handshake, reconnection, ConnectFailed handling, bridge timeout
  e2e/
    Dockerfile         Minimal Alpine 3.21 image (python3, bash) for self-contained e2e tests
    docker-compose.yml Compose definition (project: dbr-e2e, service: dbr-test-app) with host.docker.internal mapping
```
- Zero `unsafe` blocks — no exceptions.
- No `unwrap()`/`expect()` in production paths — use `thiserror` error types with `?` propagation throughout. `unwrap()` is acceptable only in tests.
- All public APIs have `///` doc comments including `# Errors` sections.
- `cargo clippy -- -D warnings` must pass.
- `cargo fmt` must pass.
- Two-tier binding model — control and data ports use auto-detected bind addresses (`0.0.0.0` if Docker is running, `127.0.0.1` otherwise), configurable via `--bind-addr` or `--no-docker-detect`. Forwarded per-port listeners (`bind_loopback`) must always bind to `127.0.0.1` or `[::1]`, never `0.0.0.0` or `[::]`.
- Structured logging via the `tracing` crate — `info` for lifecycle events (register, forward, disconnect), `debug` for per-connection events, `warn` for recoverable errors, `error` for fatal errors.
- Error types — each module has its own `thiserror` error enum. Use `#[from]` for transparent wrapping, `#[source]` for explicit chaining.
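For contributors unfamiliar with what `#[from]` and `#[source]` buy you, here is a hand-written std-only equivalent of the boilerplate `thiserror` derives. The enum name and variants are hypothetical, not taken from the codebase:

```rust
use std::{error::Error, fmt, io};

/// Hypothetical per-module error enum, written out by hand.
#[derive(Debug)]
enum ControlError {
    Io(io::Error),          // thiserror: #[error("I/O error")] Io(#[from] io::Error)
    MessageTooLarge(usize), // thiserror: #[error("message of {0} bytes exceeds limit")]
}

impl fmt::Display for ControlError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ControlError::Io(e) => write!(f, "I/O error: {}", e),
            ControlError::MessageTooLarge(n) => write!(f, "message of {} bytes exceeds limit", n),
        }
    }
}

impl Error for ControlError {
    // What #[source]/#[from] wire up: the chained underlying cause.
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        match self {
            ControlError::Io(e) => Some(e),
            _ => None,
        }
    }
}

// What #[from] generates: enables `?` conversion from io::Error.
impl From<io::Error> for ControlError {
    fn from(e: io::Error) -> Self {
        ControlError::Io(e)
    }
}
```

With the `From` impl in place, `let msg = read()?;` inside a function returning `Result<_, ControlError>` converts `io::Error` automatically.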
| Flag | Applies to | Description |
|---|---|---|
| `--no-auth` | `host-daemon` | Disable token authentication (accept any Register) |
| `--auth-token <TOKEN>` | `container-daemon`, `open`, `status`, `forward`, `unforward` | Provide auth token directly |
| `--auth-token-file <PATH>` | `container-daemon`, `open`, `status`, `forward`, `unforward` | Read auth token from file |
Token resolution order: `--auth-token` flag > `DCBRIDGE_AUTH_TOKEN` env var > `--auth-token-file` flag > default file (`~/.config/dbr/auth-token`).
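The precedence chain maps naturally onto `Option::or_else`. A std-only sketch, assuming the documented order; the function signature is illustrative, not the real `auth.rs` API:

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Resolve the auth token: flag > env var > file flag > default file.
/// Returns None if no source yields a token.
fn resolve_token(
    flag: Option<String>,       // --auth-token
    file_flag: Option<PathBuf>, // --auth-token-file
    default_file: &Path,        // ~/.config/dbr/auth-token
) -> Option<String> {
    flag.or_else(|| std::env::var("DCBRIDGE_AUTH_TOKEN").ok())
        .or_else(|| file_flag.and_then(|p| fs::read_to_string(p).ok()))
        .or_else(|| fs::read_to_string(default_file).ok())
        .map(|t| t.trim().to_string()) // token files typically end in a newline
}
```

Because each `or_else` closure only runs when the previous source was absent, lower-priority sources are never even read when a higher-priority one is set.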
In `~/.config/dbr/config.toml`:

```toml
[socket_forwarding]
watch_paths = ["/tmp/*.sock", "/run/user/1000/**/*.sock"] # Glob patterns for socket discovery
scan_interval_ms = 5000        # How often to scan in milliseconds (default: 5000)
max_socket_forwards = 16       # Maximum number of sockets to forward (default: 16)
container_path_prefix = "/tmp" # Base directory for mirror sockets in containers
```

Setting `watch_paths` to a non-empty list auto-enables socket forwarding. When `watch_paths` is empty (the default), no Unix sockets are forwarded.
```sh
cargo build                 # debug build
cargo build --release       # optimized release build
cargo test                  # all unit + integration tests
cargo test -- --nocapture   # with stdout/stderr output
cargo clippy -- -D warnings
cargo fmt --check
```

On Apple Silicon macOS, `cross` does not work reliably for `aarch64-unknown-linux-musl`. Use the Docker approach instead:

```sh
# Build a Linux aarch64 binary using Docker (works on Apple Silicon)
docker run --rm -v "$(pwd)":/src -w /src rust:1.93-alpine \
  sh -c 'apk add musl-dev && cargo build --release'
```

This produces a Linux binary at `target/release/dbr`. Note that this overwrites the macOS host binary — rebuild with `cargo build --release` if you need the host binary again.
```sh
# Find your devcontainer
docker ps --format '{{.Names}}' | grep devcontainer | grep app

# Copy the Linux binary in
docker cp ./target/release/dbr <container>:/usr/local/bin/dbr

# Start the container daemon (normally auto-started by the devcontainer
# feature's entrypoint, but manual start is needed after a binary swap)
docker exec -d <container> dbr container-daemon
```

The automated test script is fully self-contained — it spins up its own minimal Alpine test container (`tests/e2e/`), runs all tests against it, and tears it down on exit. No external devcontainers are required or touched.
```sh
# Full cycle: build both binaries, deploy to test container, run all tests
scripts/dev-test.sh

# Skip builds (use existing binaries)
scripts/dev-test.sh --skip-build
```

The script tests:
- Test container lifecycle (compose up, health check, compose down)
- Host daemon startup and control port binding
- Container daemon registration
- Automatic port detection and forwarding
- TCP data transfer through forwarded ports
- Port cleanup when listeners stop
- Browser URL opening (OpenUrl protocol flow)
- `dbr ensure` idempotency
Run this after making changes to validate nothing is broken. It exits non-zero if any test fails.
See `docs/development.md` for the full development guide.
- Define the variant in `src/protocol.rs` → `Message` enum. Add serde attributes. Include a `///` doc comment.
- Add serialization tests in the `#[cfg(test)]` section of `protocol.rs` — roundtrip test at minimum.
- Handle in the receiving daemon:
  - Container-originated messages → handle in `src/host/mod.rs` `handle_container_messages()`
  - Host-originated messages → handle in the `src/container/mod.rs` `run_session()` select loop
  - CLI-originated messages → handle in the `first_msg` match in `handle_control_connection()`
- Send from the originating side — add the send call in the appropriate location.
- Add an integration test in `tests/integration/forwarding.rs` validating the full round-trip.
- Add the variant to the `Command` enum in `src/cli.rs` with clap attributes (`#[command(name = "...")]`, `#[arg(...)]`).
- Add dispatch in the `src/main.rs` `match cli.command { ... }` — create the tokio runtime and call the implementation.
- Implement the handler — either inline in `main.rs` for simple commands, or in the appropriate module (e.g., `host/ensure.rs`).
- Test — unit test the handler logic, integration test the CLI round-trip if it touches the host daemon.
- Add a `Protocol::Udp` variant to `protocol.rs`
- Add a UDP scanner alongside TCP in `container/scanner.rs` (parse `/proc/net/udp`)
- Add UDP listener management in `host/listener.rs` (tokio `UdpSocket`)
- Add UDP proxying in `host/proxy.rs` (association-based, not connection-based)
- Follow "How to add a new protocol message" above
- If it requires state, add fields to `ContainerState` or `HostState` in `host/mod.rs`
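The `/proc/net/udp` table shares the hex layout of `/proc/net/tcp` that the existing scanner already parses: the `local_address` column is hex `IP:PORT` and the state column is hex (`0A` = TCP_LISTEN). A minimal sketch of that extraction for one data line — illustrative only, not the real `scanner.rs` code:

```rust
/// Parse one /proc/net/tcp data line, returning the local port if the
/// socket is in LISTEN state (state column "0A"), else None.
fn parse_listen_port(line: &str) -> Option<u16> {
    let mut fields = line.split_whitespace();
    let local = fields.nth(1)?;  // local_address, e.g. "0100007F:4A85"
    let _remote = fields.next()?; // rem_address, unused here
    let state = fields.next()?;   // st column, hex-encoded
    if state != "0A" {
        return None; // 0A = TCP_LISTEN
    }
    // Port is the hex half after the colon.
    let port_hex = local.rsplit(':').next()?;
    u16::from_str_radix(port_hex, 16).ok()
}
```

For UDP there is no LISTEN state; an unconnected UDP socket shows state `07` (UDP equivalent of "bound"), so the state filter would differ even though the port extraction is identical.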
- Host-to-container Unix socket forwarding is fully implemented:
  - `host/socket_scanner.rs` discovers sockets via glob patterns configured in the `[socket_forwarding]` TOML section
  - `SocketForward`/`SocketUnforward` messages manage lifecycle; `SocketConnectRequest` triggers reverse data connections
  - `container/socket.rs` creates mirror sockets and bridges connections back to the host
- To add new socket types or modify discovery, edit `socket_scanner.rs` watch patterns and `config.rs` socket forwarding fields
- Add a new message type (e.g., `CopyToClipboard`)
- Handle it in the `host/mod.rs` `handle_container_messages()` match arm
- Implement the host action (similar pattern to `host/browser.rs`)
- Validate/sanitize input on the host side
These MUST be maintained in all changes:
- Two-tier binding model — Control and data listeners (`ControlListener::bind`, data listener in `host/mod.rs`) use auto-detected bind addresses: `0.0.0.0` when Docker is detected, `127.0.0.1` otherwise. Overridable with `--bind-addr` (explicit address) or `--no-docker-detect` (force loopback). Forwarded per-port listeners (`bind_loopback` in `host/listener.rs`) must always bind to `127.0.0.1` or `[::1]`. Never bind forwarded ports to `0.0.0.0` or `[::]`.
- URL scheme validation — Only `http://` and `https://` URLs accepted for browser opening. Validated in both `container/browser.rs` and `host/browser.rs`.
- URL length cap — 2048 characters maximum.
- Rate limiting — Browser opens capped at 5/sec (sliding window in `host/browser.rs`).
- Message size limit — Control messages capped at 64KB (`MAX_MESSAGE_SIZE` in `control.rs`). Bounded reads prevent OOM.
- No Docker socket access — Container daemon reads only `/proc/net/tcp` (world-readable).
- No command injection — URLs passed as arguments to `open`/`xdg-open` via `Command::new().arg()`, never via shell interpolation.
- No `unsafe` blocks — no exceptions.
- Resource limits — Max 64 containers (`MAX_CONTAINERS`), max 128 forwards per container (`MAX_FORWARDS_PER_CONTAINER`), max 1024 pending connections (`MAX_PENDING`).
- Token authentication — Random 64-char hex token required on Register (can be disabled with `--no-auth`). Token stored at `~/.config/dbr/auth-token` with 0600 perms. Container daemon resolves the token via the `--auth-token` flag, `DCBRIDGE_AUTH_TOKEN` env var, or `--auth-token-file` path.
- Socket path allowlist — Only sockets matching configured `watch_paths` globs are forwarded. Default empty (no sockets forwarded unless explicitly configured).
- No symlink following in scanner — Socket scanner uses `lstat` (`symlink_metadata`), never follows symlinks.
- Socket path length limit — 108 characters maximum (Unix `sun_path` limit).
- Mirror socket permissions — Container mirror sockets created with mode 0600.
- Max socket forwards — Default 16 per scanner (configurable via `max_sockets`).
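The loopback-only invariant for forwarded ports can be sketched in one function. This is a std-only, synchronous approximation; the real `bind_loopback` in `host/listener.rs` is async and its exact signature is not shown here:

```rust
use std::net::TcpListener;

/// Bind a forwarded port on loopback only: try IPv6 [::1] first, fall back
/// to IPv4 127.0.0.1. Never 0.0.0.0 or [::] — forwarded ports must stay
/// unreachable from other machines.
fn bind_loopback(port: u16) -> std::io::Result<TcpListener> {
    TcpListener::bind(("::1", port)).or_else(|_| TcpListener::bind(("127.0.0.1", port)))
}
```

The fallback matters on hosts where IPv6 loopback is unavailable; either way the resulting address is a loopback address, which is the invariant reviewers should check.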
- Protocol roundtrip tests (`protocol.rs`) — serialize/deserialize every message type, verify tagged JSON format, test malformed input.
- Scanner fixtures (`scanner.rs`) — hardcoded `/proc/net/tcp` content with known hex ports, test LISTEN state filtering, malformed line handling, deduplication across tcp/tcp6.
- Filter logic (`filter.rs`) — exclude, include, process regex, `devcontainer.json` forwardPorts, combined filters.
- Control framing (`control.rs`) — read/write roundtrip via in-memory buffer, EOF detection, oversized message rejection, listener/client connection roundtrip.
- URL validation (`container/browser.rs`, `host/browser.rs`) — scheme validation, length limits, port rewriting.
- Config loading (`config.rs`) — defaults, env var override, TOML file loading, precedence layering.
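The control-framing tests above exercise bounded JSON-line reads. A std-only sketch of what such a bounded reader looks like — generic over `BufRead` so tests can drive it with an in-memory `Cursor`, with the cap matching the documented 64KB limit (the real `read_message` in `control.rs` is async and may differ in shape):

```rust
use std::io::{self, BufRead, Read};

const MAX_MESSAGE_SIZE: usize = 64 * 1024;

/// Read one newline-terminated message. Returns Ok(None) on clean EOF and
/// an error if the line exceeds MAX_MESSAGE_SIZE (without buffering more
/// than the cap plus one byte).
fn read_message<R: BufRead>(reader: &mut R) -> io::Result<Option<String>> {
    let mut line = Vec::new();
    // Take caps how much read_until may buffer, preventing OOM on abuse.
    let n = reader
        .by_ref()
        .take(MAX_MESSAGE_SIZE as u64 + 1)
        .read_until(b'\n', &mut line)?;
    if n == 0 {
        return Ok(None); // clean EOF between messages
    }
    if line.last() == Some(&b'\n') {
        line.pop(); // strip the frame delimiter
    }
    if line.len() > MAX_MESSAGE_SIZE {
        return Err(io::Error::new(io::ErrorKind::InvalidData, "message exceeds MAX_MESSAGE_SIZE"));
    }
    String::from_utf8(line)
        .map(Some)
        .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
}
```

Driving it with `io::Cursor` is exactly the "roundtrip via in-memory buffer" pattern the unit tests use: no sockets needed to cover EOF and oversize paths.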
- Start real host daemon in-process on random ports
- Test full Register → Forward → ForwardAck → Unforward lifecycle
- Test cleanup on container disconnect (drop connection, verify listeners torn down)
- Test multi-container port conflict resolution
- Test full reverse proxy pipeline with echo server
- Test data port handshake parsing
- Test reconnection/re-registration
- Test ConnectFailed graceful handling
- Test bridge timeout when no data connection arrives
- Self-contained: spins up a minimal Alpine container via `docker compose`, no external devcontainers needed
- Container definition: `tests/e2e/Dockerfile` (Alpine 3.21 + python3 + bash) and `tests/e2e/docker-compose.yml` (project `dbr-e2e`, service `dbr-test-app`)
- Tests the full pipeline: build → deploy → host daemon → container daemon registration → port scan detection → TCP forwarding + data transfer → browser URL opening → idempotency → cleanup
- Container is torn down automatically via cleanup trap (`docker compose down`)
- Use `find_free_port()` (bind to port 0, return the assigned port) to avoid port conflicts between parallel tests
- Use the `tcp_pair()` helper for creating connected TCP stream pairs
- Use `tokio::time::timeout` to prevent hung tests
- Host daemon started with `exit_on_idle: true` for self-cleanup, or `abort()` for tests that need manual control
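The `find_free_port()` trick is small enough to show whole. A std-only sketch of the helper described above (the in-repo helper may differ in signature):

```rust
use std::net::TcpListener;

/// Ask the kernel for any free ephemeral port by binding to port 0,
/// then release it and hand the port number to the test.
fn find_free_port() -> std::io::Result<u16> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let port = listener.local_addr()?.port();
    // The listener drops here, freeing the port. There is a small race
    // window before the test re-binds it, which is acceptable in tests.
    Ok(port)
}
```

This avoids hardcoded ports entirely, which is what lets the integration tests run in parallel.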
After making code changes, follow this sequence to validate:
```sh
cargo test
cargo clippy -- -D warnings
cargo fmt --check
scripts/dev-test.sh
```

The script builds both binaries, starts a self-contained test container, deploys to it, and runs all end-to-end tests. The test container is torn down automatically on exit. No external devcontainers are required. It exits non-zero on any failure.
The build-deploy-test cycle uses two different build approaches:
- Host binary (macOS): `cargo build --release` — standard Cargo build, produces a macOS arm64 binary
- Container binary (Linux): `docker run --rm -v "$(pwd)":/src -w /src rust:1.93-alpine sh -c 'apk add musl-dev && cargo build --release'` — builds inside an Alpine container, produces a Linux aarch64 binary

The `cross` crate is broken on Apple Silicon for the `aarch64-unknown-linux-musl` target. Always use the Docker approach for Linux binaries.
For fast iteration when only host-side code changed:

```sh
cargo build --release && ./target/release/dbr host-daemon --log-level debug
```

For container-side changes, rebuild the Linux binary and redeploy:

```sh
docker run --rm -v "$(pwd)":/src -w /src rust:1.93-alpine sh -c 'apk add musl-dev && cargo build --release'
docker cp ./target/release/dbr <container>:/usr/local/bin/dbr
docker exec <container> pkill -f "dbr container-daemon" || true
docker exec -d <container> dbr container-daemon
```

Or use `scripts/dev-test.sh --skip-build` to re-test with existing binaries after a manual redeploy.
Releases involve two artifacts: the `dbr` binary (GitHub Releases) and the devcontainer feature (GHCR OCI artifact). See `docs/development.md` for the full guide.
- Tag and push — the `release.yml` workflow builds binaries, generates checksums, and creates a GitHub Release automatically:

  ```sh
  git tag vX.Y.Z
  git push origin vX.Y.Z
  ```

  The workflow attaches all four platform binaries, their `.sha256` checksums, and `scripts/install.sh` to the release.

- Publish the devcontainer feature (only if the feature definition changed) — go to Actions → Publish Feature → Run workflow.

- Verify:

  ```sh
  gh release view vX.Y.Z   # Confirm: 4 binaries, 4 .sha256 files, install.sh (9 assets total)
  ```

`scripts/install.sh` is the host install script referenced by the README and CLI guide (`curl -fsSL .../releases/latest/download/install.sh | bash`). The release workflow includes it automatically. If creating a release manually, upload it explicitly:

```sh
gh release upload vX.Y.Z scripts/install.sh
```