Client may hold socket open after ChunkWriteTimeout

Bug #1572719 reported by Chris Auston on 2016-04-20
Affects: OpenStack Object Storage (swift), OpenStack Security Advisory

Bug Description

You can reproduce this by issuing a GET request for a few-hundred-MB file and never consuming the response, while keeping the client socket open. Swift will log a 499, but the socket does not always close.

ChunkWriteTimeout is meant to protect the proxy from a slow-reading client.
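For context, here is a minimal stand-in sketch of that pattern. The names (`chunk_write_timeout`, `serve_chunks`) are illustrative, not Swift's actual API, and the real ChunkWriteTimeout is an eventlet.Timeout subclass that interrupts the blocked write via eventlet's hub rather than checking elapsed time afterward:

```python
import contextlib
import time

@contextlib.contextmanager
def chunk_write_timeout(seconds):
    # Stand-in for swift's ChunkWriteTimeout (an eventlet.Timeout subclass).
    # The real class interrupts the blocked write; this sketch only checks
    # the elapsed time after the write returns.
    start = time.monotonic()
    yield
    if time.monotonic() - start > seconds:
        raise TimeoutError("client read the chunk too slowly")

def serve_chunks(chunks, write, client_timeout=60):
    # Each chunk written toward the client is guarded by the timeout; when a
    # client stops reading, the write blocks, the timeout fires, and the
    # proxy logs the request as a 499.
    for chunk in chunks:
        with chunk_write_timeout(client_timeout):
            write(chunk)
```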

Sometimes when this exception is thrown there is still data in the process's socket buffer, so when eventlet tries to close the socket it first flushes it.

The problem is that if the client is not consuming the socket buffer then that flush will wait forever; it's trying to write on a socket that just threw a timeout trying to write! The flush write is not protected by any timeout.

As far as I can tell, this WRITE_TIMEOUT does nothing.

wsgi.server() takes a socket_timeout argument that might be what we're after.

Even with socket_timeout, eventlet needs to be patched: the shutdown/close should be in a finally block.
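A sketch of what that fix would look like, with names simplified: whatever flushing the base finish() does may block or raise on a client that has stopped reading, so the shutdown/close must sit in a finally block to guarantee the file descriptor is released. This is not eventlet's actual code, just the shape of the change:

```python
import socket

def finish(connection):
    # Sketch of the proposed fix: the flush can raise (or block) when the
    # client is not reading, but the socket must still be released.
    try:
        connection.flush()   # stand-in for the superclass finish() flushing wfile
    finally:
        try:
            connection.shutdown(socket.SHUT_RDWR)   # best-effort
        except OSError:
            pass
        connection.close()   # always runs, even if flush() raised
```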

All of this is probably mitigated by most operators setting an idle timeout in their load balancers, but I wanted to report it. Going directly to a proxy I was able to hold sockets open for long periods of time.

I did the initial research on version 2.2.2 but I was able to reproduce on 2.7.0. I'm trying to translate links to master branch on github. I apologize in advance if it's not quite right.

Jeremy Stanley (fungi) wrote :

Since this report concerns a possible security risk, an incomplete security advisory task has been added while the core security reviewers for the affected project or projects confirm the bug and discuss the scope of any vulnerability along with potential solutions.

description: updated
Changed in ossa:
status: New → Incomplete

I've subscribed Victor to help us investigate/confirm the eventlet issue. If this can't be entirely fixed within the swift source code, we'll need an eventlet fix and should ideally wait for a new release of the eventlet library.

Note that it is similar to bug 1466549, but instead of closing the connection quickly, you keep it open without reading from it... Though it's not obvious whether this needs to be fixed in the swift source code...

@swift-coresec please triage this bug report.

clayg (clay-gerrard) wrote :

It seems likely the fix is to move the connection.close into a finally block in eventlet.wsgi.HttpProtocol.finish.

We should be able to test this by implementing the fix in a subclass, since HttpProtocol is already overridable via wsgi.server.
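A hypothetical sketch of that workaround: eventlet's wsgi.server() accepts a protocol class, so swift could pass in a patched subclass without forking eventlet. HttpProtocol below is a stand-in so the sketch runs standalone; in real code it would be eventlet.wsgi.HttpProtocol, and the method names are illustrative:

```python
class HttpProtocol:
    # Stand-in for eventlet.wsgi.HttpProtocol: close is skipped if the
    # flush blocks or raises on a stalled client.
    def finish(self):
        self.flush_response()
        self.close_connection()

class TimeoutSafeProtocol(HttpProtocol):
    # The swift-side override: guarantee the client socket is released.
    def finish(self):
        try:
            self.flush_response()
        finally:
            self.close_connection()

# Deployment wiring would then look roughly like:
#   eventlet.wsgi.server(sock, app, protocol=TimeoutSafeProtocol)
```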

If we fix it in our code it's (a) backportable, but (b) it discloses the security issue in eventlet.

Does anyone know how to reach out to the eventlet team regarding some sort of shared disclosure?

Clayg, it seems like eventlet is using GitHub to track issues, and AFAIK it doesn't support private reports. Thus we could either contact the main contributors by mail to coordinate a new release of eventlet, or we could make this bug report public and figure this out in the open.

Victor Stinner (victor-stinner) wrote :

"Thus we could either contact the main contributors by mail to coordinate a new release of eventlet, or we could also make this bug report public and figure this out in the open."

Having an embargo makes contributing harder; I would prefer to use a regular, public eventlet issue to benefit from their Travis CI.

Grant Murphy (gmurphy) wrote :

It would be nice to have a concrete mitigation before fixing in the open. Has anybody had a chance to explore this statement a little more?

"All of this is probably mitigated by most operators setting an idle timeout in their load balancers..."

I subscribed eventlet's top contributors (Sergey Shepelev and Jakub Stasiak). It seems like eventlet is unable to close a socket if it has unread data, and this can be used to cause resource exhaustion for daemons like swift.

Sergey Shepelev (temoto) wrote :

Hello, Eventlet here.
Thanks for contacting.
I'm writing a public issue about this in our Github project, without mentioning Swift or any other projects.
The fix is ready; it's a bit hard to make a test, but then I'll upload a new release with this fix.

That's very appreciated Sergey, thank you!

@swift-coresec, do we want to implement the swift workaround as a swift subclass, or should we wait for a requirements bump to benefit from the eventlet fix?

Anyway, once eventlet is patched, do you mind if we make this bug public?

clayg (clay-gerrard) wrote :

I think this is medium impact at best - it's very much like many other slow resource-exhaustion issues from ill-behaved clients or bad requests that we've squashed in the past. These are generally mitigated in public environments with monitoring and alerting - although I'm sure they can be annoying!

Once the eventlet issue is public - and especially following a release - this should of course become public!

I'd prefer to eat the upstream version rev than to try to implement a hacky workaround in our tree.

... IMHO

Sergey Shepelev (temoto) wrote :

I tried and then I tried again, but I was unable to construct a test case that would actually leak a file descriptor. It's all down to one silly line. And while moving that `shutdown+close` under `finally` actually seems like a good idea, it's blurred by the fact that there is no internal use case to explain that change.

So my guess: somewhere, somehow - as in `class ChunkWriteError(Timeout)` - the actual `socket.timeout` mutates into another exception which the eventlet wsgi server does not expect.

Taking this long proves that these problems are at the limit of my cognitive capacity, so if more clever people see that this issue could actually be cured at the eventlet level, I'm glad to release a fix.

Otherwise, I suggest adding `socket.timeout` as another parent class of your chunk errors, or explicitly catching chunk errors and forcing the socket closed.
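That first suggestion could look roughly like this. `BaseTimeout` is a stand-in for eventlet.Timeout so the sketch runs standalone; the real ChunkWriteTimeout lives in swift.common.exceptions:

```python
import socket

class BaseTimeout(Exception):
    """Stand-in for eventlet.Timeout in this sketch."""

class ChunkWriteTimeout(BaseTimeout, socket.timeout):
    """With socket.timeout as an extra parent, eventlet code paths that
    already catch socket.timeout would also recognize this error and run
    their normal socket-closing handling."""
```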

Chris Auston (cfauston) wrote :

It's been a while since I looked at this, but IIRC the ChunkWriteTimeout triggers a GeneratorExit or two in the proxy_logging middleware and probably in _get_response_parts_iter. I can't quite remember the call sequence, but that leads to wsgi trying to close the client socket and triggering the flush.

When I used wsgi.server(..., socket_timeout=30, ...), a socket.timeout was thrown but still didn't close the socket, which is why I suggested the finally block.

Based on comment #12, it seems like this bug can safely be made public. If nobody objects, let's do this by the end of the week.

Thank you Sergey for the investigation!

Opening this bug based on the above comments.

description: updated
tags: added: security
information type: Private Security → Public

I was able to reproduce this issue on Mitaka (openstack-swift-2.7.0-2.el7.noarch and python2-eventlet-0.17.4-4.el7.noarch) using the Python socket module to open many GET connections without consuming the response.
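A repro client along those lines might look like the following sketch. The proxy address, object path, and the helper names (`build_request`, `open_idle_get`) are placeholders, and a real run would also need a valid auth token:

```python
import socket

def build_request(path, host):
    # Minimal HTTP/1.1 GET; a real run would add an X-Auth-Token header.
    return ("GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "\r\n" % (path, host)).encode("ascii")

def open_idle_get(path, host, port):
    # Send the GET but deliberately never read the response, holding the
    # connection open so the proxy's writes toward us eventually stall.
    sock = socket.create_connection((host, port))
    sock.sendall(build_request(path, host))
    return sock   # caller keeps this open and never calls recv()

# e.g. open a few thousand of these against a proxy:
#   socks = [open_idle_get("/v1/AUTH_test/c/large_file", "127.0.0.1", 8080)
#            for _ in range(8000)]
```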

Extracts from swift logs:
proxy-server[14729]: ERROR with Object server re: Trying to GET /v1/AUTH_e90d6d7d952f4a6d9933047b461015ff/test/large_file: Timeout (10.0s) (txn: tx706ead3ee0d64af2ba382-0057f4691c)
proxy-server[14729]: Object GET returning 503 for [] (txn: tx706ead3ee0d64af2ba382-0057f4691c) (client_ip:

After enough connections have failed with these 503 errors (about 8,000 for my all-in-one instance), all swift proxy-server processes are unresponsive until the offending client is terminated or the service is restarted.

clayg (clay-gerrard) wrote :

this is probably *related* to lp bug #1568650 - but not exactly the same.

The same repro script [1] with a sufficiently slow client (> 60 seconds by default) will trigger the ChunkWriteTimeout. The script as written just simulates a really *slow* client, not necessarily one that is badly behaved and trying to wreck things - but with a high enough sleep you could simulate a Slowloris-style DoS if that's what you're going for.

In my most recent test the proxy seemed to square away its backend connections, but the socket to the client remained established.


Changed in swift:
status: New → Confirmed
importance: Undecided → High

Having another look at this issue, it sounds like slow clients shouldn't be handled by OpenStack services but rather by a load balancer, especially if the service is Internet-facing.

I initially thought the problem would happen between the proxy and the object server even after the client got disconnected, but this is not the case since all the sockets are effectively released when the client is disconnected.

Since there is already a Security Note in progress covering load balancer usage in front of OpenStack services, I suggest we close the ossa task and triage this report as class D according to the VMT taxonomy.

Jeremy Stanley (fungi) wrote :

I agree with this being class D (security hardening opportunity). Since the report is already public, we can just go ahead and switch the security advisory task to won't fix, then change it back later if there are objections. I've done so now.

Changed in ossa:
status: Incomplete → Won't Fix