Context
We experimented with adding `multi_get` at the `ReadonlyKV`/storage-wrapper level in `evolve_storage`, but the current backend (`commonware_storage::qmdb`, in use here) exposes single-key `get` and write batching, not native read batching.
That means a wrapper-level `multi_get` can reduce some overhead, but fundamentally remains N × single-key reads.
Problem
Hot read paths (cache warming, account/state lookups, RPC-adjacent storage reads) need deterministic, ordered batch reads without paying per-key API/runtime overhead repeatedly.
Proposal
- Add native read-side batch API support in storage backend integration:
  - Target API shape in our layer: `multi_get(&[Vec<u8>]) -> Vec<Option<Vec<u8>>>` (input-order preserving).
  - If upstream QMDB adds a native batch read, use it directly.
  - If not, implement the best possible pipelined/parallel strategy with bounded concurrency and deterministic output ordering.
- Keep cache behavior explicit:
  - Resolve cache hits first.
  - Fetch misses in the batch path.
  - Backfill positive and negative cache entries deterministically.
- Adopt in a first consumer path:
  - cache warming (`warm_cache`)
  - then one RPC/state-query hot path
- Add benchmarks + regression gates:
  - compare `get` loop vs `multi_get` for cold/warm cache
  - include key-cardinality and hit-ratio variants
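The cache-explicit wrapper path above can be sketched as follows. This is a minimal illustration, not the real `commonware_storage::qmdb` types: the `ReadonlyKV` trait shape, `MockBackend`, and the `HashMap` cache are assumptions. It resolves cache hits first, fetches misses through the single-key backend, backfills both positive and negative entries, and preserves input order, duplicates included.

```rust
use std::collections::HashMap;

// Illustrative single-key backend trait; the real backend here is
// commonware_storage::qmdb, which exposes only single-key `get`.
trait ReadonlyKV {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
}

// Stand-in backend for the sketch.
struct MockBackend(HashMap<Vec<u8>, Vec<u8>>);

impl ReadonlyKV for MockBackend {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
}

/// Wrapper-level multi_get: resolve cache hits first, fetch misses
/// through the backend, backfill positive AND negative entries, and
/// return results in input order (duplicates included).
fn multi_get(
    backend: &impl ReadonlyKV,
    cache: &mut HashMap<Vec<u8>, Option<Vec<u8>>>,
    keys: &[Vec<u8>],
) -> Vec<Option<Vec<u8>>> {
    keys.iter()
        .map(|key| {
            if let Some(cached) = cache.get(key) {
                cached.clone() // positive or negative cache hit
            } else {
                let fetched = backend.get(key);
                // Deterministic backfill: hits and misses are both cached.
                cache.insert(key.clone(), fetched.clone());
                fetched
            }
        })
        .collect()
}
```

Because results are produced by mapping over the input slice, output order is a structural property rather than something the backend has to guarantee.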
Acceptance Criteria
- Deterministic output order for any key vector (including duplicates).
- No behavior regressions vs existing single-key `get`.
- Measurable latency improvement on representative read-heavy workloads.
- Benchmarks checked into repo and runnable in CI/dev.
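A sketch of what the no-regression gate could look like: run the same key set through a per-key `get` loop and a batched `multi_get`, assert identical outputs, and record both wall-clock times. The `HashMap` store and the `bench` helper are stand-ins; a checked-in gate would use a harness such as criterion against the real backend, with cold/warm-cache and hit-ratio variants.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Per-key path, stand-in for the existing single-key get.
fn get(store: &HashMap<Vec<u8>, Vec<u8>>, key: &[u8]) -> Option<Vec<u8>> {
    store.get(key).cloned()
}

// Stand-in batch path; a native one would amortize per-call overhead.
fn multi_get(store: &HashMap<Vec<u8>, Vec<u8>>, keys: &[Vec<u8>]) -> Vec<Option<Vec<u8>>> {
    keys.iter().map(|k| store.get(k).cloned()).collect()
}

/// Time both paths over the same keys and assert equal outputs.
fn bench(n: u32) -> (Duration, Duration) {
    let store: HashMap<Vec<u8>, Vec<u8>> =
        (0..n).map(|i| (i.to_be_bytes().to_vec(), vec![7u8; 64])).collect();
    let keys: Vec<Vec<u8>> = (0..n).map(|i| i.to_be_bytes().to_vec()).collect();

    let t = Instant::now();
    let looped: Vec<Option<Vec<u8>>> = keys.iter().map(|k| get(&store, k)).collect();
    let loop_time = t.elapsed();

    let t = Instant::now();
    let batched = multi_get(&store, &keys);
    let batch_time = t.elapsed();

    // The no-regression gate: batch path must match single-key results.
    assert_eq!(looped, batched);
    (loop_time, batch_time)
}
```

The equality assertion is the gate for "no behavior regressions"; the timing pair feeds the latency-improvement criterion.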
Notes
- This should be done as small reversible steps: API -> backend impl -> one consumer -> benchmarks.
- Keep memory bounds explicit for any parallel/pipelined implementation.
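For the "no native batch read" fallback with explicit memory bounds, one bounded-concurrency shape looks like this. Everything here is illustrative (`MAX_WORKERS`, `multi_get_bounded`, and the `Arc<HashMap>` standing in for a thread-safe backend handle): the key vector is split into at most `MAX_WORKERS` contiguous index ranges, each range is fetched on its own thread, and results are written back by index, so output order matches input order regardless of thread completion order, and peak memory is bounded by the preallocated output vector plus the chunk copies.

```rust
use std::collections::HashMap;
use std::sync::Arc;
use std::thread;

// Explicit concurrency bound; a real implementation would make this
// configurable and tie it to backend limits.
const MAX_WORKERS: usize = 4;

fn multi_get_bounded(
    backend: Arc<HashMap<Vec<u8>, Vec<u8>>>,
    keys: &[Vec<u8>],
) -> Vec<Option<Vec<u8>>> {
    if keys.is_empty() {
        return Vec::new();
    }
    // Ceiling division so every key lands in exactly one chunk.
    let chunk = (keys.len() + MAX_WORKERS - 1) / MAX_WORKERS;
    let mut handles = Vec::new();
    for (i, slice) in keys.chunks(chunk).enumerate() {
        let backend = Arc::clone(&backend);
        let slice: Vec<Vec<u8>> = slice.to_vec();
        handles.push((
            i,
            thread::spawn(move || {
                slice.iter().map(|k| backend.get(k).cloned()).collect::<Vec<_>>()
            }),
        ));
    }
    // Deterministic reassembly by chunk index, independent of which
    // thread finishes first.
    let mut out = vec![None; keys.len()];
    for (i, handle) in handles {
        for (j, value) in handle.join().unwrap().into_iter().enumerate() {
            out[i * chunk + j] = value;
        }
    }
    out
}
```

Contiguous index ranges (rather than a work-stealing queue) keep the reassembly step trivially deterministic, at the cost of imperfect load balance when key lookup costs vary.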