grpc-uds: update to h2 v0.3.26 #3
Conversation
It also returns true for I/O errors and local GO_AWAYs.
When Streams::recv_go_away is called, Recv::handle_error is called on every stream whose stream id is past the GO_AWAY's last stream id, and those streams may have been pending-accept. If the stream had not encountered an error before, Recv::handle_error sets its state to State::Closed(Error::GoAway(_, _, Initiator::Remote)). That makes State::is_remote_reset return true in Streams::next_incoming, which leads to Counts::dec_remote_reset_streams being called even though Counts::inc_remote_reset_streams was never called for that stream, causing a panic about the counter being 0.
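For readers following along, here is a minimal sketch of the counter invariant this commit describes; the method names mirror the ones above, but the code is illustrative, not h2's actual implementation:

```rust
// Hypothetical sketch of the violated invariant; not h2's actual code.
#[derive(Default)]
struct Counts {
    remote_reset_streams: usize,
}

impl Counts {
    fn inc_remote_reset_streams(&mut self) {
        self.remote_reset_streams += 1;
    }

    // The panic described above comes from decrementing a counter that was
    // never incremented for this particular stream.
    fn dec_remote_reset_streams(&mut self) {
        assert!(self.remote_reset_streams > 0, "remote_reset_streams is 0");
        self.remote_reset_streams -= 1;
    }
}
```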
fix panic on receiving invalid headers frame by making the `take_request` function return a Result

Signed-off-by: Michael Rodler <[email protected]>
Reviewed-by: Daniele Ahmed <[email protected]>
* Revert "fix panic on receiving invalid headers frame by making the `take_request` function return a Result"

  This reverts commit 66c36c4.

* proper fix for the panic in server receiving a request with a :status pseudo-header in the informational range of status codes

---------

Signed-off-by: Michael Rodler <[email protected]>
Co-authored-by: Michael Rodler <[email protected]>
Co-authored-by: Daniele Ahmed <[email protected]>
…dow size (hyperium#692)

Removed the SubAssign, etc. syntactic sugar functions and switched to returning Result on over/underflow. Whenever possible, switched to returning a library GoAway protocol error. Otherwise we check for over/underflow only with `debug_assert!`, assuming that those code paths do not over/underflow.

Signed-off-by: Michael Rodler <[email protected]>
Signed-off-by: Daniele Ahmed <[email protected]>
Co-authored-by: Michael Rodler <[email protected]>
Co-authored-by: Daniele Ahmed <[email protected]>
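A hedged sketch of the pattern this commit describes, with `WindowSize` and `GoAwayError` as illustrative stand-ins rather than h2's API:

```rust
// Illustrative only: replace `-=`/`+=` (SubAssign/AddAssign) sugar with
// checked arithmetic that surfaces over/underflow as a Result.
type WindowSize = u32;

#[derive(Debug)]
struct GoAwayError; // stand-in for a library GOAWAY protocol error

fn shrink_window(current: WindowSize, used: WindowSize) -> Result<WindowSize, GoAwayError> {
    // underflow becomes a protocol error instead of a panic
    current.checked_sub(used).ok_or(GoAwayError)
}

fn grow_window(current: WindowSize, inc: WindowSize) -> Result<WindowSize, GoAwayError> {
    // overflow likewise becomes a protocol error
    current.checked_add(inc).ok_or(GoAwayError)
}
```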
- fix the test's name after it was changed in hyperium#634
- additional test that the server also sends data frames correctly
* ci: only check h2 package for msrv
* msrv: bump to 1.63, tokio requires it
There was a bug where opening streams over the max concurrent streams limit was possible if max_concurrent_streams was lowered below the current number of open streams while new streams were already being added to the pending_send queue.

There were two mechanisms for streams to end up in that queue:

1. send_headers would push directly onto pending_send when below max_concurrent_streams.
2. prioritize would pop from pending_open until max_concurrent_streams was reached.

For case 1, a settings frame could be received after pushing many streams onto pending_send and before the socket was ready to write again. For case 2, the pending_send queue could have Headers frames queued going into a Not Ready state with the socket; a settings frame could be received, and then the headers would be written anyway after the ack.

The fix is therefore also twofold. Fixing case 1 is as simple as letting Prioritize decide when to transition streams from `pending_open` to `pending_send`, since only it knows the readiness of the socket and whether the headers can be written immediately. This is slightly complicated by the fact that previously SendRequest would block when streams were added as "pending open". That was addressed by guessing when to block based on max concurrent streams rather than the stream state.

The fix for Prioritize was to conservatively pop streams from pending_open only when the socket is immediately available for writing a headers frame (see the sketch below). This required a change to queuing to support pushing onto the front of pending_send, to ensure headers frames don't linger there. The alternative was to add a check in pending_send for whether a new stream would exceed max concurrent; in that case, headers frames would need to be carefully re-enqueued, which seemed to impose more complexity to maintain the ordering of stream IDs.

Closes hyperium#704
Closes hyperium#706

Co-authored-by: Joe Wilm <[email protected]>
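The sketch below illustrates the conservative-pop idea under heavily simplified types; it is not h2's actual Prioritize code:

```rust
use std::collections::VecDeque;

// Simplified stand-in for h2's internal queues (illustrative only).
struct Prioritize {
    pending_open: VecDeque<u32>, // stream ids waiting to be opened
    pending_send: VecDeque<u32>, // stream ids with frames ready to write
}

impl Prioritize {
    /// Transition one stream from `pending_open` to `pending_send`, but only
    /// when the socket can write the HEADERS frame immediately and the
    /// concurrency limit still has room. A later SETTINGS frame then cannot
    /// lower max_concurrent_streams underneath an already-queued HEADERS.
    fn pop_pending_open(
        &mut self,
        socket_ready: bool,
        open_streams: usize,
        max_concurrent: usize,
    ) -> Option<u32> {
        if socket_ready && open_streams < max_concurrent {
            let id = self.pending_open.pop_front()?;
            // Push to the *front* so the HEADERS frame does not linger
            // behind other queued frames.
            self.pending_send.push_front(id);
            return Some(id);
        }
        None
    }
}
```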
As discussed in hyperium#711, the current implementation of sending data is suboptimal when vectored I/O is not enabled: a data frame's head is likely to be sent in a separate TCP segment whose payload is only 9 bytes. This PR adds a specialized implementation for the non-vectored I/O case. In short, it sets a larger chain threshold and makes sure a data frame's head is sent along with the beginning of the actual data payload. All existing unit tests pass. I also took a look at the e2e benchmarks in https://github.com/hyperium/hyper/blob/0.14.x/benches/end_to_end.rs, but realized that all the benchmarks there exercise vectored I/O whenever the OS supports it. There isn't a specific case for non-vectored I/O, so I am not sure how to proceed with a benchmark for performance evaluation.
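A small sketch of the idea, with simplified buffer handling (not the PR's actual code): copy the 9-byte frame head together with a prefix of the payload so they share a TCP segment.

```rust
// Illustrative only: build the write sequence for one DATA frame without
// vectored I/O. `chain_threshold` caps how much payload is copied into the
// first buffer alongside the 9-byte frame head.
fn write_data_frame(head: [u8; 9], payload: &[u8], chain_threshold: usize) -> Vec<Vec<u8>> {
    let take = payload.len().min(chain_threshold);

    // Head plus the beginning of the payload go out in one write, so the
    // head is no longer a lone 9-byte TCP segment.
    let mut first = Vec::with_capacity(9 + take);
    first.extend_from_slice(&head);
    first.extend_from_slice(&payload[..take]);

    let mut writes = vec![first];
    if take < payload.len() {
        writes.push(payload[take..].to_vec()); // remainder follows separately
    }
    writes
}
```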
* Update indexmap to version 2
* Update webpki-roots dev-dep to 0.25
* Update tokio-rustls dev-dep to 0.24
* Update env_logger dev-dep to 0.10
* Remove combined minimal-versions + MSRV check for now
This PR changes the assign-capacity queue to prioritize streams that are send-ready. This is necessary to prevent a lockout when streams aren't able to proceed while waiting for connection capacity that never becomes available.

Closes hyperium/hyper#3338

Co-authored-by: dswij <[email protected]>
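As a rough illustration of the scheduling change (simplified, not h2's queue types): prefer a send-ready stream when handing out connection capacity, so capacity is not parked on streams that cannot make progress.

```rust
// Illustrative only.
struct Stream {
    id: u32,
    send_ready: bool,
}

/// Pick the next stream to assign capacity to, preferring send-ready ones.
fn next_for_capacity(queue: &mut Vec<Stream>) -> Option<Stream> {
    if queue.is_empty() {
        return None;
    }
    // Fall back to FIFO order when no stream is send-ready.
    let pos = queue.iter().position(|s| s.send_ready).unwrap_or(0);
    Some(queue.remove(pos))
}
```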
This change causes GOAWAYs to be issued to misbehaving connections which, for one reason or another, cause us to emit lots of error resets. Error resets are not generally expected from valid implementations anyway. The threshold after which we issue GOAWAYs is tunable and defaults to 1024.
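A hedged sketch of the rate limit described here; the names are illustrative, not h2's public API:

```rust
// Illustrative only: count locally-emitted error resets and signal that the
// connection should be torn down with a GOAWAY once a tunable threshold is hit.
struct ResetGuard {
    error_resets: usize,
    max_error_resets: usize, // tunable; the change defaults this to 1024
}

impl ResetGuard {
    /// Record one error reset; returns true when the connection has
    /// misbehaved enough that a GOAWAY should be issued.
    fn on_error_reset(&mut self) -> bool {
        self.error_resets += 1;
        self.error_resets > self.max_error_resets
    }
}
```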
This speeds up loading blocks in cases where we have many headers already.
Calculate the number of allowed CONTINUATION frames based on other settings: max_header_list_size / max_frame_size. That is about how many CONTINUATION frames would be needed to send headers up to the max allowed size. We then multiply that by a small amount, to allow for implementations that don't perfectly pack into the minimum number of frames *needed*. In practice, anything *much* more than that would be a very inefficient peer, or a peer trying to waste resources. See https://seanmonstar.com/blog/hyper-http2-continuation-flood/ for more info.
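The arithmetic above as a sketch; the slack multiplier here is an assumption for illustration, not necessarily the constant h2 ships:

```rust
// Illustrative only: roughly max_header_list_size / max_frame_size
// CONTINUATION frames suffice for a maximally-sized header block; allow a
// small multiple of that for peers that don't pack frames perfectly.
fn max_continuation_frames(max_header_list_size: u32, max_frame_size: u32) -> u32 {
    // SETTINGS_MAX_FRAME_SIZE is at least 16_384 per RFC 9113, so the
    // division is safe in practice.
    let needed = (max_header_list_size / max_frame_size).max(1);
    needed * 5 // the slack factor is an assumption for this sketch
}
```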
The remaining failing test seems to be from https://github.com/summerwind/h2spec complaining that we're slightly too permissive when interpreting requests (which is kinda the whole point of the patch). I think we'll just have to live with that.
Happy with this after seeing the explanation about the failing test.
* chore(deps): update rust crate h2 to v0.3.26 [security]
* Update forked h2
* stackabletech/h2#3 was merged
* Use h2 version from workspace

---------

Co-authored-by: Natalie Klestrup Röijezon <[email protected]>
Do not squash-merge, required for stackabletech/listener-operator#184