Clarify why we want TLSv1.3 #37
It would also be good to describe whether the certificates for TLSv1.3 could be self-signed or must be signed by a CA. I know this was discussed a while back (on the RPM list?) but don't remember the outcome. If the spec permits an RPM client to accept an unsigned certificate (perhaps noting that the server certificate is unsigned), implementing an RPM server becomes a lot easier, since it's not necessary to figure out all the Let's Encrypt machinery for my 192.168.1.1 router at home. Thanks.
After we talked about this, I 100% agree that a description of why we want TLS 1.3 would be great to have in the draft.
This should be left to the implementation's choice. It is not part of the wire format or the methodology. Ultimately, there is no difference between a self-signed and a CA-signed cert; the only difference is whether the client decides to trust the root or not.
So as far as justification goes: at the time of writing this spec, it seems fine to rationalize it by noting that the majority of Web traffic uses modern TLS, just as the use of HTTP/2 is rationalized. This spec shouldn't limit itself to TLS 1.3, to avoid preventing future changes. But it doesn't need to worry about older versions or tie itself in knots coming up with a rationale. For instance, according to figures on radar.cloudflare.com today, globally over the last 7 days 67% of traffic used TLS 1.3 and 25% used QUIC's handshake (which is based on TLS 1.3). The remainder used TLS 1.2.
Good point @LPardue - we should say that TLS 1.3 and later should be used. The reason we end up fixating on TLS 1.3 is that it makes counting the round trips easy: with TLS 1.3 we know the handshake is always 1 round trip, whereas with TLS 1.2 it may be 1 or 2 round trips.
As I have been iterating over this, I am trending towards not caring about TLS handshake latency. Here is my reasoning: nowadays we try to reuse connections as much as possible, so TLS handshake latency is not relevant then. Also, we are focusing on measuring responsiveness on the load-generating connections; again, TLS handshake latency is irrelevant there. Finally, by reusing connections for the latency probes it is possible to send the probes continuously from the start of load generation. I am able to get many more data points that way and thus a more stable result. So, I tend to remove the notion of TLS probes from the draft. Any strong opinions?
A bunch of semi-related thoughts come to mind:
Depends on the goal, I guess :) I think for "responsiveness under working conditions" it is less important.
The question becomes whether we expect latencies to be different for TLS vs H2 req/resp. And that depends entirely on the network (e.g., a transparent TCP proxy inspecting the TLS client-hello's SNI could end up delaying TLS quite a bit).
It depends on the TLS version. Which is why we mandate(d) 1.3 (see my comment above).
Yes, looking at it from the perspective of transparent TCP proxies, which are very popular in cellular networks, it makes sense to measure TLS.
Agreed. But the method was never said to be "the universal and only way to measure latency". I was/still am hoping that we are going to converge.
Variance has always been a problem. I am trying to increase the sample size without making the test run much longer. Right now I seem to achieve that goal by starting the measurement not at the moment we reach saturation but rather by sending a probe every 100 ms from the beginning. I then take the 90th percentile and the average of the latencies on the load-generating and the separate connections. The numbers are fairly stable now. If we want to bring TLS into that, it would mean that instead of reusing connections for my probes on the separate connections, I would create new ones. I will experiment with that.
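A minimal sketch of the aggregation described above, assuming a nearest-rank 90th-percentile calculation; the probe values and function names are illustrative, not the tool's actual implementation:

```python
import math

def p90(samples):
    """90th percentile of latency samples (ms), nearest-rank method."""
    ordered = sorted(samples)
    rank = math.ceil(0.9 * len(ordered))
    return ordered[rank - 1]

# Hypothetical probes sent every 100 ms; the occasional high values
# reflect queueing delay once load generation ramps up.
load_generating_ms = [12, 15, 14, 90, 13, 16, 14, 85, 15, 13]
separate_ms = [11, 12, 60, 13, 12, 14, 70, 12, 13, 11]

print(p90(load_generating_ms))  # -> 85
print(p90(separate_ms))         # -> 60
```

Sending probes from the very start rather than waiting for saturation is what grows the sample size without lengthening the test.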
Agreed, we need to converge as early as possible. macOS Ventura will have the latest and greatest.
As I read my reply to you, @richb-hanover, I realize how at the beginning I was mostly convinced that dropping TLS was good, and then moved more and more towards keeping it ;-) As you can see, I'm quite split on this 😅
And I'm always happy to help muddy the waters here :-) Thanks.
After more discussions and experimentation, it is best to keep the full handshake. The weighting of the values is still an open question. With the current approach, we get 4 sets of data from the 2 types of probes: the probes on separate connections (which yield TCP, TLS, and H2 data) and those on the load-generating connections (which yield only H2 data). That gives us separate_tcp, separate_tls, separate_h2, and load_generating_h2. From each of these 4 data sets we take the 90th percentile, producing 4 values: separate_tcp_p90, separate_tls_p90, separate_h2_p90, load_generating_h2_p90. The suggestion would be to average these in the following way:
We can also increase the weight towards H2:
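The weighting idea above might be sketched as follows; the specific weights are assumptions for illustration, not the draft's final values:

```python
def aggregate_latency(separate_tcp_p90, separate_tls_p90,
                      separate_h2_p90, load_generating_h2_p90,
                      weights=(1, 1, 1, 1)):
    """Weighted average of the four p90 latency values (ms).

    Equal weights give a plain average; raising the weights on the
    two H2 terms biases the result toward request/response latency.
    """
    values = (separate_tcp_p90, separate_tls_p90,
              separate_h2_p90, load_generating_h2_p90)
    total = sum(w * v for w, v in zip(weights, values))
    return total / sum(weights)

# Plain average of the four p90 values:
print(aggregate_latency(20, 30, 40, 50))                 # -> 35.0
# Weight shifted toward the two H2 data sets:
print(aggregate_latency(20, 30, 40, 50, (1, 1, 2, 2)))
```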
af69ae2. Please reopen if that's not sufficient. I removed the reference to TLS v1.3 and instead explain what the TLS-handshake latency is: a calculation of latency per round trip during the TLS handshake phase. If the TLS version being used requires 2 round trips before the client can transmit data, then the latency needs to be divided by 2.
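The normalization described in that commit can be sketched in a few lines; the function name and values are hypothetical:

```python
def handshake_latency_per_rtt(handshake_duration_ms, round_trips):
    """Normalize measured TLS handshake duration to latency per round trip.

    TLS 1.3 completes its handshake in 1 round trip; a full TLS 1.2
    handshake may need 2 before the client can transmit data, so the
    measured duration is divided by the number of round trips.
    """
    return handshake_duration_ms / round_trips

print(handshake_latency_per_rtt(80.0, 1))  # TLS 1.3 style: -> 80.0
print(handshake_latency_per_rtt(80.0, 2))  # TLS 1.2 full handshake: -> 40.0
```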
Feedback from Bjorn Mork (https://lists.bufferbloat.net/pipermail/rpm/2022-March/000165.html)
We should explain why TLSv1.3 is required.