
fix: handle and propagate errors in event channel monitors#573

Merged
xdustinface merged 3 commits into v0.42-dev from fix/fatal-monitor-errors
Apr 1, 2026

Conversation


@xdustinface xdustinface commented Mar 22, 2026

  • Add on_failure cancellation token to spawn_broadcast_monitor and spawn_progress_monitor to signal the run loop on fatal monitor errors
  • Treat Lagged broadcast errors as fatal; lost events are unrecoverable at the moment
  • Distinguish expected vs unexpected channel close by checking the shutdown token
  • Run loop watches monitor_failure.cancelled() and exits with ChannelFailure error
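The "expected vs unexpected channel close" distinction in the bullets above can be sketched as a small classification helper. This is an illustration only: the enum and function names are stand-ins, not the crate's actual API, and it mirrors the behavior described here (Closed is expected only during shutdown; Lagged is always fatal).

```rust
// Hypothetical stand-in for tokio's broadcast::error::RecvError.
#[derive(Debug, PartialEq)]
enum RecvFailure {
    Closed,      // every sender was dropped
    Lagged(u64), // receiver fell behind by n events, which were discarded
}

// Decide whether a recv error should signal the run loop as a fatal
// monitor failure, given whether shutdown was already requested.
fn is_fatal(err: &RecvFailure, shutdown_requested: bool) -> bool {
    match err {
        // A close observed after shutdown was requested is the expected
        // teardown path; a close before that is a fatal channel failure.
        RecvFailure::Closed => !shutdown_requested,
        // Lagged means events were silently dropped, which this PR treats
        // as unrecoverable.
        RecvFailure::Lagged(_) => true,
    }
}
```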

Summary by CodeRabbit

  • Bug Fixes

    • Event monitors now report unexpected channel closures or lag, cancel monitoring, and cause the main run loop to exit instead of continuing silently.
    • Progress monitor now reports channel failures and stops promptly.
  • Tests

    • Tests updated to assert a single failure message on unexpected channel drop/close/lag and that lagged broadcasts emit no sync/progress events.


coderabbitai bot commented Mar 22, 2026

📝 Walkthrough

Monitor tasks now take an mpsc::Sender<String> (on_failure) and, on unexpected receiver closure/lag or watch closure, log and attempt to try_send a failure message via that sender and then exit their loops. The sync coordinator creates an mpsc failure channel, listens for failure messages in its run loop, and returns SpvError::ChannelFailure(...) when a failure message is received; handler.on_error calls were moved to after monitors are joined.

Changes

Monitor logic & API — dash-spv/src/client/event_handler.rs
Added an on_failure: mpsc::Sender<String> parameter to spawn_broadcast_monitor and spawn_progress_monitor. On unexpected broadcast::Receiver Closed/Lagged or watch::Receiver changed() errors, monitors now log and on_failure.try_send(...) a descriptive message (unless shutdown was cancelled), then break the loop.

Coordinator integration — dash-spv/src/client/sync_coordinator.rs
Created a monitor_failure_tx/monitor_failure_rx mpsc channel; clones of monitor_failure_tx are passed into the spawned monitors. run now selects on monitor_failure_rx and returns SpvError::ChannelFailure("event monitor", msg) when a failure message arrives. Moved the handler.on_error invocation to after monitor tasks are joined.

Tests — dash-spv/src/client/...tests*
Unit tests updated to supply the mpsc sender, renamed scenarios to indicate unexpected drop/close/lag, and assert that monitors emit failure messages and avoid dispatching sync/progress events in lagged/closed cases; handler error count semantics remain correct.

Sequence Diagram

sequenceDiagram
    participant Coordinator as Coordinator
    participant BroadcastMonitor as Broadcast Monitor
    participant ProgressMonitor as Progress Monitor
    participant Handler as Event Handler
    participant FailureChan as Failure mpsc

    BroadcastMonitor->>BroadcastMonitor: detect RecvError::Closed / RecvError::Lagged
    BroadcastMonitor->>Handler: log / (no immediate on_error)
    BroadcastMonitor->>FailureChan: try_send("broadcast monitor: ...")
    BroadcastMonitor->>BroadcastMonitor: break loop

    ProgressMonitor->>ProgressMonitor: detect watch changed() closure/error
    ProgressMonitor->>Handler: log / (no immediate on_error)
    ProgressMonitor->>FailureChan: try_send("progress monitor channel closed unexpectedly")
    ProgressMonitor->>ProgressMonitor: break loop

    Coordinator->>FailureChan: await monitor_failure_rx.recv()
    FailureChan-->>Coordinator: deliver failure message
    Coordinator->>Coordinator: return SpvError::ChannelFailure(...), join monitors, call handler.on_error afterwards
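The failure path in the diagram can be mimicked with the standard library's channels. This is a sketch only: the SpvError variant and message text mirror the summary above, but tokio's async mpsc and spawned tasks are replaced by std::sync::mpsc and a thread.

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for the crate's error type; the real one uses thiserror.
#[derive(Debug, PartialEq)]
enum SpvError {
    ChannelFailure(String, String),
}

fn run_with_monitor() -> Result<(), SpvError> {
    // Failure channel: monitors report, the run loop listens.
    let (failure_tx, failure_rx) = mpsc::sync_channel::<String>(1);

    // Monitor task: on a fatal recv error it reports once and breaks its loop.
    let monitor = thread::spawn(move || {
        // ...imagine recv() just returned RecvError::Lagged here...
        let _ = failure_tx.try_send("broadcast receiver lagged".to_string());
    });

    // Run loop: the first failure message ends the run with ChannelFailure;
    // recv() erroring means every sender dropped, i.e. monitors exited cleanly.
    let outcome = match failure_rx.recv() {
        Ok(msg) => Err(SpvError::ChannelFailure("event monitor".to_string(), msg)),
        Err(_) => Ok(()),
    };

    // Join monitors before any error callback, matching the PR's ordering.
    monitor.join().expect("monitor panicked");
    outcome
}
```

Carrying the message through the channel (rather than only signalling a token) is what lets the coordinator report the concrete failure instead of a generic one.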

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Poem

🐰 I watched the channels hop and sigh,
A string of errors fluttered by,
I sent a note, then took a leap,
Quietly closed the watch I keep,
And nibbled logs beneath the sky.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check — Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — Passed: the title clearly and accurately summarizes the main change, adding error handling and propagation in event channel monitors.
  • Docstring Coverage — Passed: docstring coverage is 100.00%, above the required threshold of 80.00%.



@xdustinface
Collaborator Author

@CodeRabbit review


coderabbitai bot commented Mar 22, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
dash-spv/src/client/sync_coordinator.rs (1)

53-85: ⚠️ Potential issue | 🟠 Major

Return the real monitor failure instead of synthesizing "broadcast receiver lagged".

All four monitors cancel the same token, but this branch always constructs ChannelFailure("event monitor", "broadcast receiver lagged"). That misreports progress-channel failures and unexpected broadcast closes, and it drops the more specific failure message already produced by the monitor task. Please propagate the concrete failure back into run() and return that here. As per coding guidelines, "Use proper error types (thiserror) and propagate errors appropriately in Rust code".

Also applies to: 106-111

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/client/sync_coordinator.rs` around lines 53 - 85, The code in
run() currently maps any monitor cancellation to a synthesized
ChannelFailure("event monitor", "broadcast receiver lagged") which hides the
real cause; change the logic that handles monitor_shutdown/monitor_failure so it
returns the concrete failure produced by the monitor tasks (propagate the actual
error from monitor_failure) instead of constructing the static "broadcast
receiver lagged" message. Locate the monitor spawns (spawn_broadcast_monitor and
spawn_progress_monitor) and the branch that creates ChannelFailure("event
monitor", "broadcast receiver lagged") and replace it so run() awaits or
inspects monitor_failure (the propagated error type) and returns that error
(preserving its thiserror type) when cancelling the token; ensure the same fix
is applied for the second occurrence referenced (lines ~106-111).
🧹 Nitpick comments (1)
dash-spv/src/client/event_handler.rs (1)

242-263: Add a teardown test that closes the channel after shutdown.cancel().

The new guards on Line 55 and Line 98 only matter when the channel closes after shutdown has been requested. The added tests cover unexpected drops and explicit shutdown separately, but they still don't exercise that path, so a regression there would go unnoticed. As per coding guidelines, "Write unit tests for new functionality in Rust code" and "Place unit tests alongside code with #[cfg(test)] attribute".

Also applies to: 319-337

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/client/event_handler.rs` around lines 242 - 263, Add a new unit
test placed alongside the existing tests (under #[cfg(test)]) that exercises the
path where the broadcast channel is closed after shutdown has been requested:
create a broadcast::channel::<SyncEvent>, spawn the same spawn_broadcast_monitor
using the RecordingHandler, call shutdown.cancel() first, then drop the sender
to close the channel, await the task, and assert the handler/error and
on_failure behavior matches expectations; reference the existing test
broadcast_monitor_fails_on_unexpected_channel_close and reuse its setup
(RecordingHandler, shutdown CancellationToken, on_failure CancellationToken, and
spawn_broadcast_monitor invocation) but invert the order of shutdown.cancel()
and drop(tx) so the guards on lines 55 and 98 are exercised.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: e4a090ec-4d14-4e67-b83c-1db7a7f43090

📥 Commits

Reviewing files that changed from the base of the PR and between becff66 and 31549f0.

📒 Files selected for processing (2)
  • dash-spv/src/client/event_handler.rs
  • dash-spv/src/client/sync_coordinator.rs

coderabbitai bot previously approved these changes Mar 22, 2026

codecov bot commented Mar 22, 2026

Codecov Report

❌ Patch coverage is 92.06349% with 5 lines in your changes missing coverage. Please review.
✅ Project coverage is 67.06%. Comparing base (5db46b4) to head (00f6316).
⚠️ Report is 7 commits behind head on v0.42-dev.

Files with missing lines Patch % Lines
dash-spv/src/client/sync_coordinator.rs 72.22% 5 Missing ⚠️
Additional details and impacted files
@@              Coverage Diff              @@
##           v0.42-dev     #573      +/-   ##
=============================================
+ Coverage      67.02%   67.06%   +0.03%     
=============================================
  Files            320      320              
  Lines          67249    67293      +44     
=============================================
+ Hits           45073    45128      +55     
+ Misses         22176    22165      -11     
Flag Coverage Δ
core 75.21% <ø> (ø)
ffi 35.48% <ø> (+<0.01%) ⬆️
rpc 19.92% <ø> (ø)
spv 83.82% <92.06%> (+0.12%) ⬆️
wallet 66.63% <ø> (ø)
Files with missing lines Coverage Δ
dash-spv/src/client/event_handler.rs 95.83% <100.00%> (+0.11%) ⬆️
dash-spv/src/client/sync_coordinator.rs 80.28% <72.22%> (-2.18%) ⬇️

... and 4 files with indirect coverage changes

@xdustinface xdustinface marked this pull request as draft March 22, 2026 07:55
@xdustinface xdustinface force-pushed the fix/fatal-monitor-errors branch from 31549f0 to 1f3f4c0 Compare March 22, 2026 07:56
@xdustinface xdustinface force-pushed the refactor/move-callbacks-to-crates branch from becff66 to 9ec7541 Compare March 23, 2026 02:46
@github-actions github-actions bot added the merge-conflict label ("The PR conflicts with the target branch.") on Mar 23, 2026
@github-actions

This PR has merge conflicts with the base branch. Please rebase or merge the base branch into your branch to resolve them.

@xdustinface xdustinface force-pushed the refactor/move-callbacks-to-crates branch 5 times, most recently from aaddece to 23accc0 Compare March 23, 2026 06:04
Base automatically changed from refactor/move-callbacks-to-crates to v0.42-dev March 25, 2026 02:13
@xdustinface xdustinface dismissed coderabbitai[bot]’s stale review March 25, 2026 02:13

The base branch was changed.

- Add `on_failure` cancellation token to `spawn_broadcast_monitor` and `spawn_progress_monitor` to signal the run loop on fatal monitor errors
- Treat Lagged broadcast errors as fatal; lost events are unrecoverable at the moment
- Distinguish expected vs unexpected channel close by checking the shutdown token
- Run loop watches `monitor_failure.cancelled()` and exits with `ChannelFailure` error
@xdustinface xdustinface force-pushed the fix/fatal-monitor-errors branch from 1f3f4c0 to f5d1e71 Compare March 30, 2026 23:35
@github-actions github-actions bot removed the merge-conflict label ("The PR conflicts with the target branch.") on Mar 30, 2026
@xdustinface xdustinface marked this pull request as ready for review March 30, 2026 23:50

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@dash-spv/src/client/event_handler.rs`:
- Around line 68-74: The Lagged arm handling (matching
broadcast::error::RecvError::Lagged) currently unconditionally logs, calls
handler.on_error and on_failure.cancel(), which can spuriously trigger a
ChannelFailure during shutdown; wrap the Lagged branch with the same shutdown
check used in the Closed handler (i.e., skip the error handling and do not call
on_failure.cancel() when shutdown has already been requested) so that when
shutdown is active the code no-ops instead of propagating failure.
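The guard this comment asks for amounts to treating a requested shutdown as a blanket "do not report" condition across both error arms. A minimal illustration with hypothetical names (not the file's actual code):

```rust
// Stand-in for tokio's broadcast::error::RecvError.
#[derive(Debug)]
enum RecvError {
    Closed,
    Lagged(u64),
}

// During a requested shutdown, neither Closed nor Lagged should call
// handler.on_error or signal on_failure; outside shutdown, both are fatal
// (Lagged because the lost events are unrecoverable).
fn should_signal_failure(err: &RecvError, shutdown_requested: bool) -> bool {
    if shutdown_requested {
        return false; // expected teardown: no-op instead of propagating failure
    }
    match err {
        RecvError::Closed | RecvError::Lagged(_) => true,
    }
}
```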

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 91a3b02e-d6d3-4869-a357-9c76c61d224c

📥 Commits

Reviewing files that changed from the base of the PR and between 31549f0 and f5d1e71.

📒 Files selected for processing (2)
  • dash-spv/src/client/event_handler.rs
  • dash-spv/src/client/sync_coordinator.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • dash-spv/src/client/sync_coordinator.rs


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
dash-spv/src/client/event_handler.rs (1)

271-308: ⚠️ Potential issue | 🟡 Minor

Add a regression test for the shutdown+lagged path.

This test covers the fatal lagged case, but the new guard at Line 68 is the subtle part of the fix. Please add a case that cancels shutdown before the lagged recv and asserts error_count == 0 and on_failure stays unset.

As per coding guidelines, "Write unit tests for new functionality".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/client/event_handler.rs` around lines 271 - 308, Add a new tokio
test similar to broadcast_monitor_exits_on_lagged_receiver that exercises the
shutdown+lagged path: create a broadcast channel and RecordingHandler, send
enough SyncEvent messages to overflow the buffer, but before awaiting the
monitor spawn_broadcast_monitor call cancel the shutdown CancellationToken
(shutdown.cancel()), then await the task and assert that handler.sync_count is
0, handler.error_count is 0, and on_failure.is_cancelled() is false; use the
same symbols used in the existing test (spawn_broadcast_monitor,
RecordingHandler, CancellationToken, SyncEvent, handler.error_count) so the
guard at line 68 is covered.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@dash-spv/src/client/event_handler.rs`:
- Around line 60-74: The monitor error branches call on_failure.cancel() even
during a normal shutdown because they only check shutdown.is_cancelled(); update
the logic so Closed/Lagged arms treat a cancelled shutdown as a normal
termination and do not call on_failure.cancel(): in event_handler.rs adjust the
guards around Err(broadcast::error::RecvError::Closed) and Err(...::Lagged) (the
uses of shutdown.is_cancelled(), handler.on_error(), and on_failure.cancel()) to
skip on_failure.cancel() when the monitor shutdown token is cancelled or when
the outer coordinator run token has been cancelled; alternatively, ensure the
coordinator (sync_coordinator.rs run loop) cancels monitor_shutdown before
exiting so the monitor sees shutdown.is_cancelled() and does not call
on_failure.cancel().
- Line 45: Replace the bare CancellationToken "on_failure" with a channel that
carries the concrete failure (e.g., oneshot::Sender<SpvError> or a watch/tx of
Option<SpvError>) and stop calling on_failure.cancel(); instead send the
SpvError (or monitor name+reason) through that channel where monitors currently
call on_failure.cancel(). Update the consumers (the coordinator path that
currently maps a cancellation into ChannelFailure and then calls
handler.on_error) to take the received SpvError directly and invoke the single
error callback (handler.on_error) with that error, removing the
duplicate/mislabeled ChannelFailure branch; change references to on_failure, the
monitor failure sites, and the coordinator error handling to use the new
send/receive symbol names.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: e7f34bb8-accd-40e5-bff5-fab9e84e7347

📥 Commits

Reviewing files that changed from the base of the PR and between f5d1e71 and fc8f5d2.

📒 Files selected for processing (1)
  • dash-spv/src/client/event_handler.rs

…r propagation

Use an `mpsc::channel` instead of a `CancellationToken` for `on_failure` so monitors
propagate the actual error message to the coordinator. The coordinator is now the single
source of `handler.on_error()` calls, eliminating double-reporting. Also cancel
`monitor_shutdown` immediately after the loop exits to close the shutdown race window.

Addresses CodeRabbit review comment on PR #573
#573 (comment)
#573 (comment)
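The two fixes this commit message describes (carry the concrete failure as a message, and cancel the monitor shutdown token before the channels close) can be illustrated with std-library stand-ins. The names here are hypothetical, and an AtomicBool plays the role of the CancellationToken.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};
use std::thread;

// Returns the number of failure messages the monitor reported.
// Because shutdown is flagged *before* the event channel closes,
// the monitor treats the close as expected and reports nothing.
fn shutdown_then_close() -> usize {
    let shutdown = Arc::new(AtomicBool::new(false));
    let (event_tx, event_rx) = mpsc::channel::<()>();
    let (failure_tx, failure_rx) = mpsc::sync_channel::<String>(1);

    let monitor = {
        let shutdown = Arc::clone(&shutdown);
        thread::spawn(move || {
            // Drain events until the channel closes.
            while event_rx.recv().is_ok() {}
            // Closed: only a failure if shutdown was not requested first.
            if !shutdown.load(Ordering::SeqCst) {
                let _ = failure_tx.try_send("event channel closed unexpectedly".to_string());
            }
        })
    };

    shutdown.store(true, Ordering::SeqCst); // cancel monitor_shutdown first...
    drop(event_tx); // ...then let the event channel close
    monitor.join().expect("monitor panicked");
    failure_rx.try_iter().count()
}
```

Flipping the order (dropping event_tx before setting the flag) reopens exactly the race window the commit closes: the monitor would observe the close first and report a spurious failure.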

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
dash-spv/src/client/sync_coordinator.rs (1)

52-84: ⚠️ Potential issue | 🟠 Major

Monitor task panics bypass the failure channel and are silently dropped.

The monitor_failure_rx channel only catches explicit try_send() failures from within the monitor helpers. If a handler callback panics inside a spawned monitor task, that task terminates with a JoinError, but line 120 discards all JoinHandle results with let _ = tokio::join!(...), so the panic goes unobserved. Additionally, if all sender clones are dropped, recv() returns None, which the select! block at lines 104–114 ignores entirely.

At minimum, capture join results and propagate any task failures:

Example fix
-        let _ = tokio::join!(sync_task, network_task, wallet_task, progress_task);
+        let (sync_result, network_result, wallet_result, progress_result) =
+            tokio::join!(sync_task, network_task, wallet_task, progress_task);
+
+        let join_error = [sync_result, network_result, wallet_result, progress_result]
+            .into_iter()
+            .find_map(Result::err)
+            .map(|join_err| {
+                SpvError::ChannelFailure(
+                    "event monitor".into(),
+                    format!("monitor task exited unexpectedly: {join_err}"),
+                )
+            });
+
+        let error = error.or(join_error);

Violates guideline: "Use proper error types with thiserror crate and propagate errors appropriately."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/client/sync_coordinator.rs` around lines 52 - 84, The spawned
monitor tasks created by spawn_broadcast_monitor and spawn_progress_monitor can
panic or be cancelled and currently their JoinHandle results are ignored by let
_ = tokio::join!(...), so capture each JoinHandle result, check for
panics/JoinError and forward a descriptive error via monitor_failure_tx (or
monitor_failure_rx consumer) using the crate's error type (or wrap with
thiserror) so failures propagate; also handle the case where recv() returns None
inside the monitors by treating it as a shutdown/error condition and sending a
failure notification instead of silently ignoring it; ensure monitor_shutdown is
used to gracefully stop tasks but any unexpected task termination is sent
through monitor_failure_tx to be handled upstream.
🧹 Nitpick comments (1)
dash-spv/src/client/sync_coordinator.rs (1)

97-124: Please add a coordinator-level regression test for this branch.

The added tests stop at event_handler.rs, but the contract this PR is fixing is finalized here: run() should exit with SpvError::ChannelFailure, cancel monitor_shutdown before teardown, and call on_error exactly once from the coordinator. A focused run() test that forces one monitor failure would pin that behavior.

As per coding guidelines, Write unit tests for new functionality and Implement comprehensive unit tests in-module for individual components using #[cfg(test)] and integration tests in the tests/ directory.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/client/sync_coordinator.rs` around lines 97 - 124, Add a
coordinator-level regression test that invokes the coordinator's run() loop and
forces a single monitor failure to verify the fixed contract: the test should
arrange for monitor_failure_rx to deliver a message so run() returns
Some(SpvError::ChannelFailure(...)), verify monitor_shutdown.cancel() is called
(i.e., monitors observe cancellation) before teardown completes, and assert
handler.on_error(...) is called exactly once with the error string; use the
actual coordinator struct and run() method, inject a mock/fake handler that
records on_error invocations, and trigger the monitor failure path to exercise
and assert these behaviors.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 558aecd1-8c96-411b-9a40-46bb8bd7a406

📥 Commits

Reviewing files that changed from the base of the PR and between fc8f5d2 and 00f6316.

📒 Files selected for processing (2)
  • dash-spv/src/client/event_handler.rs
  • dash-spv/src/client/sync_coordinator.rs

@xdustinface xdustinface requested a review from ZocoLini March 31, 2026 02:25
@github-actions github-actions bot added the ready-for-review label ("CodeRabbit has approved this PR") on Mar 31, 2026
@xdustinface xdustinface merged commit 86d9a95 into v0.42-dev Apr 1, 2026
40 of 42 checks passed
@xdustinface xdustinface deleted the fix/fatal-monitor-errors branch April 1, 2026 02:23