Fairer stream write scheduling [#125]. #475
Conversation
Codecov report: all modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##             main     #475   +/- ##
=========================================
  Coverage  100.00%  100.00%
=========================================
  Files          25       25
  Lines        4949     4961   +12
=========================================
+ Hits         4949     4961   +12

View full report in Codecov by Sentry.
Thanks so much for tackling this, it's long overdue! I'm going to have to re-read it, as I'm not sure what happens in certain situations. For instance, if you have a mix of streams being discarded (finished) and another stream that hits an exception, won't we end up keeping the discarded stream in the queue? I'm also wondering whether a `deque` would be a better fit than a plain list here.
I started with a `deque` but got rid of it, as it wasn't really adding any value in the final design, so I went with a plain list. Good point about the exception handling; I thought I had it covered but didn't consider that case properly. I suppose we can record the discarded streams and wrap the loop in a `try: ... finally:` that always prunes the discards.
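The fix described here could be sketched roughly like this (hypothetical names such as `write_pending` and `FakeStream`; this is an illustration of the `try`/`finally` idea, not the actual patch): record finished streams while walking the queue and prune them in a `finally` block, so they are removed even if a later stream raises mid-loop.

```python
def write_pending(streams):
    """Send pending data for each stream in the queue.

    Finished streams are recorded as we go and pruned in the
    ``finally`` block, so an exception raised by one stream cannot
    leave an already-discarded stream stuck in the queue.
    """
    discarded = set()
    try:
        for stream in list(streams):
            stream.send()
            if stream.finished:
                discarded.add(stream)
    finally:
        # Prune unconditionally: runs on both normal exit and exception.
        streams[:] = [s for s in streams if s not in discarded]
```

With this shape, a stream that finishes before another stream raises is still dropped from the queue, which is exactly the case flagged above.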
(force-pushed from b76b1b3 to 8e84a67)
I fixed the exception handling. I also tested again, with bigger transfers and with qlog turned off. The new code is actually a bit faster, so I'm no longer worried about performance.
This looks great, thanks so much!
* Backport upstream "fairer stream write scheduling" PR 475. Credit to aiortc#475
* 🔖 Bump to 0.15.1
* 🔥 Cleanup in setup.py
* ✔️ Add sdist preparation in CI and add aarch64 to the build matrix
* 📝 Add CHANGES
* 📝 Update README.rst: upstream is "more" alive, but the main issues raised at the start mostly still apply
* 🔧 Enforce sdist as completed before checksum
This greatly improves multiplexing and stream fairness in the use case from #125, at no significant cost (the run I'll show is slightly slower, but I think that could be measurement noise, and if not, the difference is small). The idea is that we keep an ordering for the streams and, every time we generate datagrams to send, recompute the list, moving any stream that got stream data transmitted to the end.
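The reordering idea can be sketched in a few lines (a minimal illustration with a hypothetical `reschedule` helper, not the code from this PR): keep the streams in a list and, after each send pass, move the streams that got data onto the wire to the back, preserving relative order within each group.

```python
def reschedule(order, transmitted):
    """Return a new ordering in which every stream that transmitted
    data this pass is moved to the end, so starved streams get the
    first chance on the next pass (illustrative sketch)."""
    stayed = [s for s in order if s not in transmitted]
    moved = [s for s in order if s in transmitted]
    return stayed + moved

# Example: streams 1 and 3 sent data this pass, so they yield
# priority to streams 2 and 4 on the next pass.
print(reschedule([1, 2, 3, 4], {1, 3}))  # [2, 4, 1, 3]
```

Applied repeatedly, this behaves like a round-robin over the streams that actually have data to send, which is where the fairness in the before/after comparison below comes from.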
All the tests pass for me. There seems to be an intermittent loss of one line of coverage related to qlog that I don't understand; we'll see if it shows up in the CI.
Here is the before case, in the terminal and qvis:
And now the much nicer after case: