NGINX "client intended to send too large chunked body" error

Bug #1793826 reported by Jon Watte on 2018-09-21
This bug affects 1 person
Affects: Nginx
Importance: Undecided
Assigned to: Unassigned

Bug Description

When we use NGINX as an ingress for gRPC over HTTP/2, after a certain number of bytes have been received we get the error "client intended to send too large chunked body" and a disconnect.
(We do HTTPS ingress on the same port, and use a path-based route to send the GRPC requests to the right place.)

Looking at the nginx source, in ngx_http_v2.c, function ngx_http_v2_filter_request_body, we see that the error "client intended to send too large chunked body" is raised when a cumulative counter named rb->received exceeds the value of clcf->client_max_body_size (and that limit is not 0).

Further down in the source, the code sets b->flush = r->request_body_no_buffering.

It seems to me that this config variable exists to prevent unbounded buffering in the proxy from eating all available RAM. But for a streaming connection, like gRPC over HTTP/2, that is not a concern.

When buffering is turned off, NGINX should either test the amount currently held in the buffer (rather than the cumulative amount streamed through), OR it should not perform this check at all.

The error can be suppressed by setting client_max_body_size to 0, and that is a workaround, but I think it would be better if NGINX were aware of the nature of the connection.
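For reference, the workaround above can be scoped to just the gRPC route, so the body-size limit remains in force for ordinary HTTPS traffic on the same port (the /grpc path and backend address below are placeholders, not taken from the original report):

```nginx
server {
    listen 443 ssl http2;

    # Ordinary HTTPS ingress keeps a normal limit.
    client_max_body_size 10m;

    # Path-based route for gRPC; path and upstream are placeholders.
    location /grpc {
        # 0 disables the check, suppressing
        # "client intended to send too large chunked body".
        client_max_body_size 0;
        grpc_pass grpc://127.0.0.1:50051;
    }
}
```

Since client_max_body_size is honored at location level, this confines the unlimited-body behavior to the streaming endpoint only.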

Thomas Ward (teward) wrote :

Hello.

This team on Launchpad is not the place to report upstream bugs and issues, unfortunately. Please file those bugs and reports on https://trac.nginx.org/nginx/ - the NGINX Upstream tracker.

Changed in nginx:
status: New → Triaged
status: Triaged → New
Jon Watte (jwatte) wrote :

I wish that link came up higher on the Google results!
Thanks.
