📝 Walkthrough

Reaper was decoupled from Executor and now uses an optional `onTxsSubmitted` callback; its loop changed from ticker-driven `SubmitTxs()` to drain-and-backoff mempool submission. CacheManager gained a bulk `SetTxsSeen` method. Multiple components (Executor, Pruner, Submitter, Syncer) now guard against double-start. Tests and docs updated accordingly.
Sequence Diagrams

```mermaid
sequenceDiagram
    participant Reaper
    participant Mempool
    participant Cache
    participant Sequencer
    participant Callback
    loop drainMempool (repeat, backoff on error)
        Reaper->>Mempool: GetTxs() batch
        Mempool-->>Reaper: transactions
        Reaper->>Cache: filter unseen txs
        Cache-->>Reaper: unseen transactions + hashes
        alt unseen txs exist
            Reaper->>Sequencer: submitFiltered() batch
            alt success
                Sequencer-->>Reaper: submitted count
                Reaper->>Cache: SetTxsSeen(hashes) (bulk)
                Reaper->>Callback: onTxsSubmitted()
                Callback-->>Reaper: invoked
            else error
                Sequencer-->>Reaper: error
                Note over Reaper: apply exponential backoff and wait
            end
        else no unseen txs
            Note over Reaper: continue draining or wait for cleanup interval
        end
    end
```
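The flow in the diagram can be sketched as a single drain pass. This is an illustrative sketch, not the PR's code: `txCache`, `drainOnce`, and the `submit` function are hypothetical stand-ins for the real CacheManager, Reaper, and sequencer client; only the names `SetTxsSeen` and `onTxsSubmitted` come from the walkthrough above.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// txCache tracks which transaction hashes have already been submitted.
type txCache struct{ seen map[string]bool }

func (c *txCache) IsTxSeen(h string) bool { return c.seen[h] }

// SetTxsSeen marks a batch of hashes in one call (the bulk method from the walkthrough).
func (c *txCache) SetTxsSeen(hashes []string) {
	for _, h := range hashes {
		c.seen[h] = true
	}
}

func hashTx(tx []byte) string {
	sum := sha256.Sum256(tx)
	return hex.EncodeToString(sum[:])
}

// drainOnce filters unseen txs, submits them, then marks them seen and fires
// the optional callback. submit stands in for the sequencer call.
func drainOnce(cache *txCache, mempool [][]byte, submit func([][]byte) error, onTxsSubmitted func()) (int, error) {
	var unseen [][]byte
	var hashes []string
	for _, tx := range mempool {
		h := hashTx(tx)
		if !cache.IsTxSeen(h) {
			unseen = append(unseen, tx)
			hashes = append(hashes, h)
		}
	}
	if len(unseen) == 0 {
		return 0, nil
	}
	if err := submit(unseen); err != nil {
		return 0, err // caller applies exponential backoff
	}
	cache.SetTxsSeen(hashes) // bulk update, one call per batch
	if onTxsSubmitted != nil {
		onTxsSubmitted()
	}
	return len(unseen), nil
}

func main() {
	cache := &txCache{seen: map[string]bool{}}
	submit := func(txs [][]byte) error { return nil }
	n, _ := drainOnce(cache, [][]byte{[]byte("tx1"), []byte("tx2")}, submit, func() { fmt.Println("notified") })
	fmt.Println("submitted:", n)
	// A second pass over the same mempool submits nothing: both txs are now seen.
	n, _ = drainOnce(cache, [][]byte{[]byte("tx1"), []byte("tx2")}, submit, nil)
	fmt.Println("submitted:", n)
}
```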
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~70 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Claude finished @julienrbrt's task in 2m 21s — Code Review
Overall the redesign is a clear improvement: decoupling the Reaper from the Executor.

Critical / Major

1. Lifecycle rollback missing on failed startup
The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).
Codecov Report

❌ Patch coverage report. Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #3236      +/-   ##
==========================================
+ Coverage   61.67%   61.82%   +0.14%
==========================================
  Files         120      120
  Lines       12635    12662      +27
==========================================
+ Hits         7793     7828      +35
+ Misses       3968     3957      -11
- Partials      874      877       +3
```

Flags with carried forward coverage won't be shown. View the full report in Codecov by Sentry.
```go
Int("consecutive_failures", consecutiveFailures).
	Dur("backoff", backoff).
	Msg("reaper error, backing off")
if r.wait(backoff, nil) {
```
This is the main change: if we get txs, we retry immediately; if we do not, we wait for the interval.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
block/internal/executing/executor.go (1)
166-174: ⚠️ Potential issue | 🟠 Major

Rollback the lifecycle state when startup fails.

`e.cancel` is set before `initializeState()`. If initialization fails, no worker is running, but every later `Start()` now returns "executor already started". That turns transient boot failures into a permanently unusable executor instance.

Suggested rollback pattern:

```diff
-func (e *Executor) Start(ctx context.Context) error {
+func (e *Executor) Start(ctx context.Context) (err error) {
 	if e.cancel != nil {
 		return errors.New("executor already started")
 	}
 	e.ctx, e.cancel = context.WithCancel(ctx)
+	defer func() {
+		if err != nil {
+			e.cancel()
+			e.ctx = nil
+			e.cancel = nil
+		}
+	}()
 	// Initialize state
-	if err := e.initializeState(); err != nil {
+	if err = e.initializeState(); err != nil {
 		return fmt.Errorf("failed to initialize state: %w", err)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@block/internal/executing/executor.go` around lines 166 - 174, The code sets e.cancel before calling initializeState(), so if initializeState() fails the executor remains marked as started (e.cancel != nil) and future Start() calls fail; update Start() to defer a rollback that clears e.ctx and e.cancel when initialization fails: after setting e.ctx, e.cancel = context.WithCancel(ctx), add a defer function (or conditional cleanup on error) that calls e.cancel() and sets e.ctx, e.cancel back to nil if initializeState() returns an error, ensuring the executor's lifecycle fields (e.ctx and e.cancel) are only left set on successful initialization in initializeState().
🧹 Nitpick comments (1)
block/internal/reaping/reaper_test.go (1)
192-208: Test name doesn't match what's being verified.

The test name `TestReaper_HashComputedOnce` implies verification that hash computation occurs only once, but the test only checks that the expected hash exists in the cache after draining. This doesn't actually prove the hash was computed once versus multiple times.

Consider either:
- Renaming to something like `TestReaper_CacheLookupByHash` to reflect what's actually tested, or
- If single-computation is the intent, instrumenting the hash function to verify call count.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@block/internal/reaping/reaper_test.go` around lines 192 - 208, The test TestReaper_HashComputedOnce is misnamed because it only asserts the hash is present in the cache after calling env.reaper.drainMempool() and does not verify the hash function was invoked exactly once; either rename the test to reflect its behavior (e.g., TestReaper_CacheLookupByHash) or change the test to actually assert single computation by instrumenting hashTx (or injecting a hash function into reaper) and counting calls around env.reaper.drainMempool(), then assert the call count equals one while still checking env.cache.IsTxSeen(expectedHash).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: aa4a7354-a77a-4989-a619-2024cd382a3b
📒 Files selected for processing (10)
- block/components.go
- block/internal/cache/manager.go
- block/internal/executing/executor.go
- block/internal/pruner/pruner.go
- block/internal/reaping/reaper.go
- block/internal/reaping/reaper_test.go
- block/internal/submitting/submitter.go
- block/internal/submitting/submitter_test.go
- block/internal/syncing/syncer.go
- docs/adr/adr-021-lazy-aggregation.md
💤 Files with no reviewable changes (1)
- block/internal/submitting/submitter_test.go
```go
if s.cancel != nil {
	return errors.New("syncer already started")
}
```
Restart is still unsafe after Stop().

`Stop()` clears `s.cancel` but also closes `s.heightInCh`, so a second `Start()` passes this guard and reuses a closed channel. That can make `processLoop` spin on an always-ready receive, and any later send to `heightInCh` panic. Either keep the Syncer single-use by not clearing `s.cancel`, or fully recreate restart-sensitive state before allowing another start.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@block/internal/syncing/syncer.go` around lines 166 - 168, The Start/Stop
lifecycle is unsafe because Stop() clears s.cancel but closes s.heightInCh, so a
subsequent Start() can reuse a closed channel; fix by making Start fully
reinitialize restart-sensitive state instead of relying on s.cancel: in Start()
recreate channels (e.g., s.heightInCh, any other closed/internal channels) and
reset any state used by processLoop before clearing/setting s.cancel, or
alternatively make Stop() leave s.cancel set to prevent restarts; update the
guard around s.cancel and the Stop() implementation to either preserve s.cancel
for single-use Syncer or ensure all channels and state are recreated in Start()
(refer to Start, Stop, s.cancel, s.heightInCh, and processLoop).
> A dedicated lazy aggregation loop has been implemented with dual timer mechanisms. The `lazyTimer` ensures blocks are produced at regular intervals even during network inactivity, while the `blockTimer` handles normal block production when transactions are available. Transaction notifications from the `Reaper` to the `Manager` are now handled via the `txNotifyCh` channel: when the `Reaper` detects new transactions, it calls `Manager.NotifyNewTransactions()`, which performs a non-blocking signal on this channel. See the tests in `block/lazy_aggregation_test.go` for verification of this behavior.
The callback owner is outdated in the ADR.

The implementation now wires `reaping.NewReaper(..., executor.NotifyNewTransactions)` in `block/components.go`, so this should reference the executor callback rather than `Manager.NotifyNewTransactions()`. As written, the design doc no longer matches the code path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/adr/adr-021-lazy-aggregation.md` at line 92, The ADR text references
Manager.NotifyNewTransactions but the code now wires reaping.NewReaper(...,
executor.NotifyNewTransactions), so update the document to reflect the actual
callback owner: replace references to Manager.NotifyNewTransactions with
executor.NotifyNewTransactions and explain that the Reaper calls the executor
callback (executor.NotifyNewTransactions) which signals txNotifyCh; mention
reaping.NewReaper and txNotifyCh to make the connection explicit.
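The non-blocking signal on `txNotifyCh` that the ADR describes is conventionally a select with a default branch. A sketch; the buffer size of 1 is an assumption, not confirmed by the PR:

```go
package main

import "fmt"

type executor struct {
	txNotifyCh chan struct{}
}

// NotifyNewTransactions signals the aggregation loop without ever blocking:
// if a signal is already pending, the new one is coalesced into it.
func (e *executor) NotifyNewTransactions() {
	select {
	case e.txNotifyCh <- struct{}{}:
	default: // a notification is already pending; drop this one
	}
}

func main() {
	e := &executor{txNotifyCh: make(chan struct{}, 1)}
	// Many rapid notifications coalesce into a single pending signal,
	// so a tx burst wakes the aggregation loop exactly once.
	for i := 0; i < 5; i++ {
		e.NotifyNewTransactions()
	}
	fmt.Println("pending signals:", len(e.txNotifyCh))
}
```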
Overview

Improve the reaper by effectively having no interval during tx bursts.
Summary by CodeRabbit
Bug Fixes
Improvements
Tests
Documentation