Conversation
Claude claims to have figured out what the deal is: "This code WAS reachable before commit 661ec77 ("Fix race on ASIO client during prevote and vote"), which fixed a bug where vote/prevote requests didn't honor busy_flag_, allowing concurrent sends on the same client. After that fix, all paths properly serialize, making send_retry unreachable."
Some AI review results: https://gist.github.com/filimonov/21cf71ee146db9f2c6ddf6dd1b7297b4

I thought about it too, but I'll just add a comment explaining that error callbacks may happen out of order or in parallel, if that's OK. The other comments seem wrong.
```cpp
ptr<buffer> resp_buf(buffer::alloc(RPC_RESP_HEADER_SIZE));
aa::read( ssl_enabled_, ssl_socket_, socket_,
          asio::buffer(resp_buf->data(), resp_buf->size()),
          std::bind( &asio_rpc_client::response_read,
```
imagine a world where lambdas exist
Meh, seems good that the functions have names and are listed sequentially rather than nested inside each other.
Running #101 with tsan revealed that enabling proper streaming on the server side made the client hit data races. Looking closer at the client code, it seems sloppy and incorrect in a few ways. Streaming mode was retrofitted to it without enough thought. This PR half rewrites it.
The `when_done` callback can be called twice.

But this PR also removes some features:
`asio_rpc_client` is meant to remain usable if some request failed in a noncritical way (CRC mismatch, or `read_resp_meta_callback` returned false). In that case it called a function named `close_socket`, but that function doesn't actually close the socket, so the `asio_rpc_client` would probably still work; except in streaming mode, where it would probably get stuck (because we wouldn't send the next requests from the queue). This PR makes `asio_rpc_client` consistently enter an "abandoned" state on any error.