feat(facade): Basic RESP support in ioloopv2, part 2 #6885
dranikpg merged 4 commits into dragonflydb:main
Conversation
Force-pushed from cdbaf3e to 23fd64e
Some tests timed out.
Not that our tests have good coverage, but those results are not bad.
Signed-off-by: Vlad <vlad@dragonflydb.io>
Force-pushed from 7a53108 to 74f31c3
I disabled the v2 loop for Redis and kept everything as is, wrapping it up here.
  // If we launch while closing, it won't be awaited. Control messages will be processed on cleanup.
- if (!cc_->conn_closing) {
+ if (!cc_->conn_closing && !ioloop_v2_) {
Since ioloop_v2_ no longer launches async_fb_, the early return for cc_->conn_closing can drop CheckpointMessages even after SendCheckpoint() increments the BlockingCounter, potentially causing waits/timeouts. Consider ensuring checkpoints are either always decremented on closing v2 connections or SendCheckpoint only increments the counter if the message is guaranteed to be queued/processed.
Severity: high
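For illustration, one shape the "increment only if queued" variant could take. Everything below is a toy stand-in for the facade types, not the real API; only the ordering matters: the counter is bumped strictly after the closing check, so a closing connection declines up front and the waiter never blocks on an unqueued checkpoint.

#include <condition_variable>
#include <deque>
#include <mutex>

// Toy BlockingCounter: waiters block until the count returns to zero.
struct BlockingCounter {
  std::mutex mu;
  std::condition_variable cv;
  int count = 0;
  void Inc() { std::lock_guard lk(mu); ++count; }
  void Dec() { std::lock_guard lk(mu); if (--count == 0) cv.notify_all(); }
  void Wait() { std::unique_lock lk(mu); cv.wait(lk, [&] { return count == 0; }); }
};

struct ToyConnection {
  bool conn_closing = false;
  std::deque<BlockingCounter*> dispatch_q;

  // A false return tells the caller the checkpoint was never registered,
  // so it must not wait on it.
  bool SendCheckpoint(BlockingCounter* bc) {
    if (conn_closing) return false;  // nothing queued => counter untouched
    bc->Inc();                       // increment only after the closing check
    dispatch_q.push_back(bc);
    return true;
  }
};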
Pull request overview
Updates facade::Connection to better integrate the experimental IoLoopV2 path by avoiding the legacy AsyncFiber wake/launch logic and instead waking IoLoopV2 directly and draining control-path (dispatch_q_) messages inside IoLoopV2.
Changes:
- Gate AsyncFiber-related behaviors (launch/wake/migration handling) when ioloop_v2_ is enabled.
- Add a dispatch_q_ wake condition to IoLoopV2 and drain/flush control messages from within IoLoopV2 (see the sketch after this list).
- Disable RequestAsyncMigration() when ioloop_v2_ is enabled.
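A rough sketch of that wake/drain shape, using standard-library stand-ins rather than the actual IoLoopV2 members (all names here are illustrative):

#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>

// The loop wakes on either input readiness or a pending control message,
// then drains the control queue before returning to parsing.
struct ToyIoLoopV2 {
  std::mutex mu;
  std::condition_variable cv;
  bool socket_readable = false;
  std::deque<std::function<void()>> dispatch_q;

  void RunOnce() {
    std::unique_lock lk(mu);
    // This is the dispatch_q_ clause the PR adds to the wake condition.
    cv.wait(lk, [&] { return socket_readable || !dispatch_q.empty(); });
    while (!dispatch_q.empty()) {  // drain control messages
      auto msg = std::move(dispatch_q.front());
      dispatch_q.pop_front();
      lk.unlock();
      msg();  // run the handler outside the lock
      lk.lock();
    }
    // ... fall through to read/parse/execute commands.
  }
};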
}

// TODO: Properly handle backpressure unblocking and flushing
reply_builder_->Flush();
if ((!force && !migration_enabled_) || cc_ == nullptr || ioloop_v2_) {
  return;
}
auto msg = std::move(dispatch_q_.front());
dispatch_q_.pop_front();
// Temporary; ProcessAdminMessage logic doesn't apply here (migrations, for example)
std::visit(AsyncOperations{reply_builder_.get(), this}, msg.handle);
How about setting the async_dispatch flag here (with a cleanup)?
if (!dispatch_q_.empty()) {
  cc_->async_dispatch = true;
  absl::Cleanup reset = [this] { cc_->async_dispatch = false; };
  while (!dispatch_q_.empty()) {
    auto msg = std::move(dispatch_q_.front());
    dispatch_q_.pop_front();
    std::visit(AsyncOperations{reply_builder_.get(), this}, msg.handle);
    UpdateDispatchStats(msg, false /* subtract */);
  }
}
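(For context: absl::Cleanup runs its callback on scope exit, so the flag is reset even if a handler leaves the loop early.)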
// TODO: Properly handle backpressure
GetQueueBackpressure().pubsub_ec.notifyAll();
continue;
Here we jump back to the beginning of the while loop, giving the dispatch queue strict priority over command parsing and execution. This may cause starvation - especially with a high volume of pubsub messages, the parsed command queue might never be served.
At a minimum, I suggest adding a TODO comment here to acknowledge the issue for a future task, or fixing it on the spot as part of this PR.
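One possible shape for an on-the-spot fix, reusing only the names visible in the diffs above plus a hypothetical budget constant:

// Sketch: cap how many control messages are drained per loop iteration so
// parsed commands still get served under heavy pubsub load.
constexpr size_t kMaxDispatchPerIter = 32;  // illustrative value
size_t budget = kMaxDispatchPerIter;
while (!dispatch_q_.empty() && budget-- > 0) {
  auto msg = std::move(dispatch_q_.front());
  dispatch_q_.pop_front();
  std::visit(AsyncOperations{reply_builder_.get(), this}, msg.handle);
}
// If messages remain, fall through to command execution instead of
// `continue`; the queue is revisited on the next iteration.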
Second part of #6870, testing on CI
Implement very basic admin message processing for ioloopv2