Atlas is an EVM blockchain explorer (indexer + API + frontend) for ev-node based chains.
| Layer | Tech |
|---|---|
| Indexer | Rust, tokio, sqlx, alloy, tokio-postgres (binary COPY) |
| API | Rust, Axum, sqlx, tower-http |
| Database | PostgreSQL (partitioned tables) |
| Frontend | React, TypeScript, Vite, Tailwind CSS, Bun |
| Deployment | Docker Compose, nginx (unprivileged, port 8080→80) |
```
atlas/
├── backend/
│   ├── Cargo.toml          # Workspace — all dep versions live here
│   ├── crates/
│   │   ├── atlas-common/   # Shared types, DB pool, error handling, Pagination
│   │   ├── atlas-indexer/  # Block fetcher, batch writer, metadata fetcher
│   │   └── atlas-api/      # Axum REST API
│   └── migrations/         # sqlx migrations (run at startup by both crates)
├── frontend/
│   ├── src/
│   │   ├── api/            # Typed API clients (axios)
│   │   ├── components/     # Shared UI components
│   │   ├── hooks/          # React hooks (useBlocks, useLatestBlockHeight, …)
│   │   ├── pages/          # One file per page/route
│   │   └── types/          # Shared TypeScript types
│   ├── Dockerfile          # Multi-stage: oven/bun:1 → nginx-unprivileged:alpine
│   └── nginx.conf          # SPA routing + /api/ reverse proxy to atlas-api:3000
├── docker-compose.yml
└── .env.example
```
- API pool: 20 connections, `statement_timeout = '10s'` set via `after_connect` hook
- Indexer pool: 20 connections (configurable via `DB_MAX_CONNECTIONS`), same timeout
- Binary COPY client: separate `tokio-postgres` direct connection (bypasses the sqlx pool), conditional TLS based on `sslmode` in `DATABASE_URL`
- Migrations: run with a dedicated 1-connection pool with no `statement_timeout` (index builds can take longer than 10s)
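The "conditional TLS based on `sslmode`" decision boils down to inspecting the URL's query string. A minimal std-only sketch — `needs_tls` is a hypothetical helper name, not the actual Atlas implementation:

```rust
/// Hypothetical helper: decide whether the binary COPY client's direct
/// tokio-postgres connection should negotiate TLS, based on the `sslmode`
/// query parameter in DATABASE_URL (illustrative, not Atlas's real code).
fn needs_tls(database_url: &str) -> bool {
    database_url
        .split_once('?') // (base, query string), if a query string exists
        .map(|(_, query)| {
            query.split('&').any(|kv| {
                matches!(
                    kv.split_once('='),
                    Some(("sslmode", "require" | "verify-ca" | "verify-full"))
                )
            })
        })
        .unwrap_or(false) // no query string → plain TCP
}

fn main() {
    assert!(needs_tls("postgres://u:p@db:5432/atlas?sslmode=require"));
    assert!(!needs_tls("postgres://u:p@db:5432/atlas?sslmode=disable"));
    assert!(!needs_tls("postgres://u:p@db:5432/atlas"));
}
```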
The `blocks` table can have 80M+ rows. `OFFSET` on large pages causes 30s+ full index scans. Instead:

```rust
// cursor = max_block - (page - 1) * limit — uses clamped limit(), not raw offset()
let limit = pagination.limit(); // clamped to 100
let cursor = (total_count - 1) - (pagination.page.saturating_sub(1) as i64) * limit;
// Query: WHERE number <= $cursor ORDER BY number DESC LIMIT $1
```

`total_count` comes from `MAX(number) + 1` (O(1), not `COUNT(*)`).
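The cursor arithmetic can be checked in isolation. A self-contained sketch, assuming a `Pagination` type with the clamping behaviour described (the real struct lives in `atlas-common`, so field names here are illustrative):

```rust
struct Pagination {
    page: u32,
    limit: i64, // raw client-supplied value
}

impl Pagination {
    /// Clamp the client-supplied limit to at most 100 rows per page.
    fn limit(&self) -> i64 {
        self.limit.clamp(1, 100)
    }
}

/// Keyset cursor for `WHERE number <= cursor ORDER BY number DESC LIMIT $1`.
/// total_count is MAX(number) + 1, so total_count - 1 is the max block number.
fn blocks_cursor(p: &Pagination, total_count: i64) -> i64 {
    (total_count - 1) - (p.page.saturating_sub(1) as i64) * p.limit()
}

fn main() {
    let total = 80_000_000; // max block number is 79_999_999
    // Page 1 starts at the chain tip.
    assert_eq!(blocks_cursor(&Pagination { page: 1, limit: 25 }, total), 79_999_999);
    // Page 3 with limit 25 skips 50 blocks.
    assert_eq!(blocks_cursor(&Pagination { page: 3, limit: 25 }, total), 79_999_949);
    // A client limit of 500 is clamped to 100, so pages stay exactly 100 apart.
    assert_eq!(blocks_cursor(&Pagination { page: 2, limit: 500 }, total), 79_999_899);
}
```

This is where `limit()` and a raw `offset()` would diverge: with `limit = 500`, an unclamped offset would jump 500 blocks per page while the query only returns 100.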
For large tables (transactions, addresses), use `pg_class.reltuples` instead of `COUNT(*)`:

```rust
// handlers/mod.rs — get_table_count(pool, "table_name")
// Partition-aware: sums child reltuples, falls back to parent
// For tables < 100k rows: falls back to exact COUNT(*)
```

`TimeoutLayer::with_status_code(StatusCode::REQUEST_TIMEOUT, Duration::from_secs(10))` wraps all routes — returns 408 if any handler exceeds 10s.
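The partition-aware fallback logic can be sketched without a database. This is a hypothetical, std-only model of the decision — `RelEstimate` and `estimated_count` are invented names, and the real `get_table_count` works against `pg_class` directly:

```rust
/// Hypothetical shape of the data behind a reltuples-based count: one
/// estimate per relation, as read from pg_class (illustrative only).
struct RelEstimate {
    is_partition_child: bool,
    reltuples: f32, // pg_class.reltuples; -1.0 means "never analyzed"
}

/// Sum child-partition estimates; fall back to the parent row when the
/// table has no partitions. Returns None when the estimate is under the
/// 100k threshold, signalling the caller to run an exact COUNT(*).
fn estimated_count(rels: &[RelEstimate]) -> Option<i64> {
    let has_children = rels.iter().any(|r| r.is_partition_child);
    let estimate: f32 = if has_children {
        rels.iter()
            .filter(|r| r.is_partition_child)
            .map(|r| r.reltuples.max(0.0)) // treat -1.0 (never analyzed) as 0
            .sum()
    } else {
        rels.iter()
            .find(|r| !r.is_partition_child)
            .map(|r| r.reltuples.max(0.0))
            .unwrap_or(0.0)
    };
    if estimate < 100_000.0 { None } else { Some(estimate as i64) }
}

fn main() {
    // Partitioned table: child estimates are summed, parent is ignored.
    let partitioned = [
        RelEstimate { is_partition_child: false, reltuples: -1.0 },
        RelEstimate { is_partition_child: true, reltuples: 60_000.0 },
        RelEstimate { is_partition_child: true, reltuples: 50_000.0 },
    ];
    assert_eq!(estimated_count(&partitioned), Some(110_000));

    // Small unpartitioned table: None → caller runs exact COUNT(*).
    let small = [RelEstimate { is_partition_child: false, reltuples: 50.0 }];
    assert_eq!(estimated_count(&small), None);
}
```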
```rust
pub struct AppState {
    pub pool: PgPool,
    pub rpc_url: String,
    pub solc_path: String,
    pub admin_api_key: Option<String>,
    pub chain_id: u64,      // fetched from RPC once at startup via eth_chainId
    pub chain_name: String, // from CHAIN_NAME env var, defaults to "Unknown"
}
```

- Base URL: `/api` (proxied by nginx to `atlas-api:3000`)
- Fast polling endpoint: `GET /api/height` → `{ block_height, indexed_at }` — used by the navbar every 2s
- Chain status: `GET /api/status` → full chain info, fetched once on page load
- Rust: idiomatic — prefer `.min()`, `.max()`, `|=`, `+=` over manual if/assign
- SQL: never use `OFFSET` for large tables — use keyset/cursor pagination
- Migrations: use `run_migrations(&database_url)` (not `&pool`) to get a timeout-free connection
- Frontend: uses Bun (not npm/yarn). Lockfile is `bun.lock` (text format, Bun ≥ 1.2). Build with `bunx vite build` (skips the tsc type check).
- Docker: the frontend image uses `nginxinc/nginx-unprivileged:alpine` (non-root, port 8080). API/indexer use `alpine` with `ca-certificates`.
- Tests: add unit tests for new logic in a `#[cfg(test)] mod tests` block in the same file. Run with `cargo test --workspace`.
- Commits: authored by the user only — no Codex co-author lines.
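The "idiomatic Rust" rule in concrete terms, as a small before/after sketch (variable names are illustrative):

```rust
fn main() {
    // Instead of:  if candidate > best { best = candidate; }
    let mut best = 0u64;
    let candidate = 42u64;
    best = best.max(candidate);
    assert_eq!(best, 42);

    // Instead of:  flags = flags | 0b0100;
    let mut flags = 0b0001u8;
    flags |= 0b0100;
    assert_eq!(flags, 0b0101);

    // Instead of:  if total + n > total { total = total + n; } etc.
    let mut total = 10u64;
    total += 5;
    assert_eq!(total, 15);
}
```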
Key vars (see `.env.example` for the full list):

| Var | Used by | Default |
|---|---|---|
| `DATABASE_URL` | all | required |
| `RPC_URL` | indexer, api | required |
| `CHAIN_NAME` | api | "Unknown" |
| `DB_MAX_CONNECTIONS` | indexer | 20 |
| `BATCH_SIZE` | indexer | 100 |
| `FETCH_WORKERS` | indexer | 10 |
| `ADMIN_API_KEY` | api | none |
```shell
# Start full stack
docker compose up -d

# Rebuild a single service after code changes
docker compose build atlas-api && docker compose up -d atlas-api

# Backend only (no Docker)
cd backend && cargo build --workspace
```

- `get_table_count(pool, table_name)` — pass the table name; it's not hardcoded anymore
- `run_migrations` takes `&str` (the database URL), not `&PgPool`
- The blocks cursor uses `pagination.limit()` (clamped), not `pagination.offset()` — they diverge when the client sends `limit > 100`
- `bun.lock`, not `bun.lockb` — Bun ≥ 1.2 uses the text lockfile format