
How to handle invalid server response? #246

Closed
ronag opened this issue Jul 10, 2020 · 17 comments · Fixed by #250
Labels
question [Use a Discussion instead]

Comments

@ronag
Member

ronag commented Jul 10, 2020

Consider the following:

const net = require('net')

// Open a raw TCP connection and print whatever the server sends back.
const socket = net.connect(80, 'feeds.bbci.co.uk')
  .on('data', buf => {
    process.stdout.write(buf)
  })

// Send a GET request that declares a chunked body...
socket.write('GET /news/rss.xml HTTP/1.1\r\n')
socket.write('host: feeds.bbci.co.uk\r\n')
socket.write('connection: keep-alive\r\n')
socket.write('transfer-encoding: chunked\r\n\r\n')

// ...followed by the chunked payload and the terminating zero-length chunk.
socket.write('7\r\n')
socket.write('Mozilla\r\n')
socket.write('9\r\n')
socket.write('Developer\r\n')
socket.write('7\r\n')
socket.write('Network\r\n')
socket.write('0\r\n')
socket.write('\r\n')

Running this will give two (?!) replies from the server: one HTTP/1.1 200 OK followed by an HTTP/1.0 400 Bad Request.

Seems like the server for whatever reason is parsing the above as two requests and providing two responses. This only happens with chunked transfer encoding against this specific (nginx + AkamaiGHost?) server.

I have no idea why it does this.

In undici this will cause either a hard crash if nothing is in the pipeline, or start replying to the next pipelined request, essentially corrupting state.

I'm not sure how to handle this. Any thoughts?

@ronag ronag added the question [Use a Discussion instead] label Jul 10, 2020
@ronag
Member Author

ronag commented Jul 10, 2020

Do we know anyone who might have a good suggestion on this?

@szmarczak
Member

I think the server doesn't accept GET bodies and assumes the next data is the next request. Since the chunked payload isn't a valid request, it replies with 400.

@ronag
Member Author

ronag commented Jul 10, 2020

I see. So that's a server bug, I guess. Not sure how to handle it though, at least when pipelining.

Maybe add a note in the docs for users?

@szmarczak
Member

No, that's not a server bug. The spec doesn't say whether a GET body is allowed or not:

A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.

Unfortunately there's no way for the server to tell us whether it acknowledged the body or not (or whether the 400 response is a mistake).

One workaround would be to require a content-length header when sending GET or HEAD.
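A rough sketch of that workaround (names here are made up for illustration, not undici's actual API): when a GET or HEAD carries a body, send an explicit content-length instead of chunked transfer-encoding, so the server always knows exactly where the body ends.

```javascript
// Hypothetical sketch: choose framing headers for an outgoing request.
// With an explicit content-length there is no chunked framing left
// over for a server to misread as a second request.
function framingHeaders (method, body) {
  if (body == null) {
    return {} // no body: no framing header at all
  }
  if (method === 'GET' || method === 'HEAD') {
    // Refuse chunked framing here: require a known length.
    return { 'content-length': Buffer.byteLength(body) }
  }
  // Other methods could still stream with chunked encoding.
  return { 'transfer-encoding': 'chunked' }
}

console.log(framingHeaders('GET', 'MozillaDeveloperNetwork'))
// { 'content-length': 23 }
```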

@ronag
Member Author

ronag commented Jul 10, 2020

One workaround would be to require a content-length header when sending GET or HEAD.

That's a pretty good idea. I think elasticsearch uses a body for GET requests. But I guess chunked encoding would not be required there? @delvedor?

@szmarczak
Member

I think elasticsearch uses a body for GET requests.

They accept POST requests as well.

@ronag
Member Author

ronag commented Jul 10, 2020

They accept POST requests as well.

Yes, what I mean is that we don't want to break them, since they have GET with a body as part of their API.

@delvedor
Member

@ronag correct, Elasticsearch does accept GET requests with bodies. But unless the user enforces the GET method, the client will send all of them via POST.

A good solution might be to set the content-length header only if a body is present.
In my experience, having content-length set to 0 in a GET request without a body can cause rejections as well.

@ronag
Member Author

ronag commented Jul 10, 2020

So content-length should not be included at all when it's 0? For other methods too, or just for GET?

@delvedor
Member

I would say it should not be included for all methods that might accept a body but are not sending one in that case.
So GET, DELETE, OPTIONS (not sure about others!).

@ronag
Member Author

ronag commented Jul 10, 2020

From the spec: "A user agent SHOULD NOT send a Content-Length header field when the request message does not contain a payload body and the method semantics do not anticipate such a body."
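That rule can be sketched as a small predicate (the method list is my assumption, not something the spec enumerates):

```javascript
// Hypothetical sketch of the rule above: only send content-length on a
// bodyless request when the method's semantics anticipate a body.
function shouldSendContentLength (method, body) {
  if (body != null && body.length > 0) {
    return true // there is a payload: always declare its length
  }
  // Bodyless request: omit content-length unless the method
  // semantics anticipate a body (assumed list).
  return ['POST', 'PUT', 'PATCH'].includes(method)
}
```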

@ronag
Member Author

ronag commented Jul 10, 2020

@delvedor 4de85d4 (#249)

@ronag
Member Author

ronag commented Jul 10, 2020

@szmarczak I'm still unsure how to handle this. I've made sure we don't send content-length: 0 for e.g. GET requests. However, it is still possible and valid for users to provide a body with a GET request, and any such request against that server will end up corrupting undici state.

The only "fix" I can think of is to not allow body with GET but that possibly breaks e.g. elasticsearch (@delvedor)... though I guess that's still better than corrupting undici state...

Thoughts?
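For reference, the strict option could look something like this (a sketch, not undici's actual validation; the method list mirrors the ones discussed above):

```javascript
// Hypothetical guard: fail fast when a body is paired with a method
// whose semantics don't define one, instead of risking a server
// misparsing the body as a second pipelined request.
const BODYLESS_METHODS = ['GET', 'HEAD', 'DELETE', 'OPTIONS']

function assertValidRequest (method, body) {
  if (body != null && BODYLESS_METHODS.includes(method)) {
    throw new Error(`a body is not allowed with a ${method} request`)
  }
}
```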

@szmarczak
Member

IMO enforcing content-length should do the trick.

@ronag
Member Author

ronag commented Jul 10, 2020

IMO enforcing content-length should do the trick.

What do you mean? Even with content-length, the above example fails with double responses.

@delvedor
Member

If we can’t find a workaround, it's totally fine by me to disallow bodies in GET requests :)

@szmarczak
Member

What do you mean? Even with content length the above example will fail with double responses.

Ah, I just tested \r\n\r\n and you're right. Unfortunately I have no idea how to fix this. Maybe let's just ignore invalid responses? But then if an attacker takes over a server, they may flood us 🤔 Anyway, I think there's almost a 0% chance that attack would ever happen.
