Save to Disk in Bio Thread - draft #1784
Introduction
This PR introduces a new feature that enables replicas to perform disk-based synchronization on a dedicated background thread (Bio thread). The benchmarking results below show substantial reductions in sync duration, up to roughly 95% under the highest load tested. In extreme cases, this optimization allows syncs that previously would have failed to complete.
This is an early draft pull request, as requested by the maintainers, to allow for review of the overall structure and approach before the full implementation is completed.
Problem Statement
Some administrators prefer the disk-based full synchronization mode for replicas. This mode lets a replica keep serving reads from its current dataset while the RDB file is being downloaded to disk.
Valkey's predominantly single-threaded nature creates a challenge: serving client read requests and saving data from the socket to disk are not truly concurrent operations. In practice, the replica alternates between processing client requests and replication data, leading to inefficient behavior and prolonged sync durations, especially under high load.
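For context, the mode in question is the replica-side disk-based load path, selected by the existing repl-diskless-load directive (shown below with its default value; this is not a new config added by this PR):

```
# valkey.conf (replica side), existing directive shown for context:
# "disabled" selects disk-based loading: the RDB is first written to
# disk, then loaded, while the replica can keep serving its current
# dataset.
repl-diskless-load disabled
```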
Proposed Solution
To address this, the solution offloads the task of downloading the RDB file from the socket to a background thread. This allows the main thread to focus exclusively on handling client read requests while the background thread handles communication with the primary.
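To make the structure concrete, here is a minimal sketch of the idea, assuming a pthread-based bio worker; all names in it are hypothetical and it is not the PR's actual code:

```c
/* Illustrative sketch only: RdbDownloadJob, rdbDownloadWorker, and
 * spawnRdbDownload are hypothetical names, not the PR's actual API.
 * It shows the core idea: a bio-style background thread drains the
 * primary's socket into the temporary RDB file so the main event loop
 * never touches replication I/O during the transfer. */
#include <pthread.h>
#include <unistd.h>

typedef struct {
    int socket_fd;     /* connection to the primary */
    int rdb_fd;        /* temporary RDB file being written */
    volatile int done; /* completion flag; real code would use the bio
                        * job machinery or a pipe to wake the main loop */
} RdbDownloadJob;

/* Background worker: copy bytes from the primary's socket to disk. */
static void *rdbDownloadWorker(void *arg) {
    RdbDownloadJob *job = arg;
    char buf[16 * 1024];
    ssize_t n;
    while ((n = read(job->socket_fd, buf, sizeof(buf))) > 0) {
        if (write(job->rdb_fd, buf, (size_t)n) != n) break; /* disk error */
    }
    job->done = 1;
    return NULL;
}

/* Called from the main thread in place of registering a read handler
 * for the primary's socket in the event loop. */
static int spawnRdbDownload(RdbDownloadJob *job, pthread_t *tid) {
    return pthread_create(tid, NULL, rdbDownloadWorker, job);
}
```

The key property is that blocking on read(2) from the primary's socket now stalls only the worker thread, not the event loop serving clients.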
Benchmarking Results
Potential for Improvement
In theory, this optimization can lead to unbounded improvement in sync duration. By eliminating competition between client read events and socket communication (i.e., events related to handling the RDB download from the primary), sync times become independent of load: the main thread handles only client reads, while the background thread handles only the RDB download, so the system performs consistently even under high load. This is visible in the results below, where sync time with the feature disabled grows roughly linearly with client load, while with the feature enabled it stays flat at 33 seconds.
The full valkey-benchmark commands can be found in the appendix below.
Sync Duration by Load (seconds)

| Load (benchmark threads, clients) | Feature Disabled | Feature Enabled | Improvement |
|---|---|---|---|
| 16 threads, 64 clients | 172 s | 33 s | 80.8% |
| 32 threads, 128 clients | 436 s | 33 s | 92.4% |
| 48 threads, 192 clients | 710 s | 33 s | 95.3% |
Alternative Solutions Considered
IO Threads
IO threads do not have an advantage over Bio threads in this case: the save-to-disk job is rare (most likely no more than a few executions in a replica's lifetime), and there is never more than one execution at a time. Bio threads are a better fit for a single, slow, long-running operation.
io_uring
For a single connection, io_uring doesn't provide as much of a performance boost because the primary advantage comes from batching many I/O operations together to reduce syscall overhead. With just one connection, we won't have enough operations to benefit significantly from these optimizations.
Prioritizing the primary's socket in the event loop
This approach would help, but less effectively than using a Bio thread: the main thread would still have to interleave client read handling with replication reads, which limits the benefit. It could be more useful on smaller instance types with limited CPU cores.
Appendix
Benchmarking Setup
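As a rough illustration of the shape of the load-generation runs above (the flag values here are illustrative, not the exact commands used):

```
# Illustrative shape of the load-generation command for the
# 16-thread / 64-client run (not the exact parameters):
valkey-benchmark --threads 16 -c 64 -t set -d 512 -r 10000000 -n 100000000
```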