[SRU] rbd calls block eventlet threads
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Fix Released | High | Ivan Kolodyazhny |
Ubuntu Cloud Archive | Invalid | Medium | Matt Rae |
Kilo | Fix Released | Medium | Unassigned |
Bug Description
[Impact]
When cinder-volume's rbd driver makes a call out to rbd, it does not yield to eventlet, thus blocking all other processing. While this happens, any pending requests sit unacknowledged in the rabbit queue until the current rbd task completes. The result is a cloud that appears unresponsive to the user, with actions such as instance creation failing because nova times out waiting on cinder.
[Test Case]
Steps to reproduce:
1: Create a volume that will take more than an instant to delete.
2: Delete the volume
3: Immediately attempt to create some volumes
Expected results:
Volumes create in a timely manner and become available
Volume delete processes and delete finishes in parallel
[Regression Potential]
This patch moves all rados calls to a separate python thread which doesn't block the eventlet loop.
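The pattern the fix relies on can be sketched as follows. This is an illustrative stand-in, not cinder's actual code: it uses the stdlib `concurrent.futures` where the real patch uses eventlet's tpool, and the function names are hypothetical. The point is that the blocking librados-style call runs on a native OS thread, so the calling loop stays free to service other requests.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# A native worker thread for blocking librados-style calls; in the
# actual fix, eventlet's tpool plays this role.
_rados_pool = ThreadPoolExecutor(max_workers=1)

def slow_rbd_delete(volume):
    """Stand-in for a blocking librados call (e.g. deleting a dirty image)."""
    time.sleep(0.2)  # simulate a long-running delete
    return "deleted %s" % volume

def delete_volume(volume):
    # Submit the blocking call to the worker thread instead of running
    # it inline; the event loop can keep scheduling other work.
    return _rados_pool.submit(slow_rbd_delete, volume)

f = delete_volume("vol-1")
# ...other requests can be processed here while the delete runs...
result = f.result()
```

With eventlet, waiting on the offloaded call yields the green thread rather than blocking the hub, which is what keeps the rest of cinder-volume responsive.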
====
When cinder-volume's rbd driver makes a call out to rbd it does not yield to eventlet, thus blocking all other processing.
When this happens, any pending requests sit unacknowledged in the rabbit queue until the current rbd task completes. The result is a cloud that appears unresponsive to the user, with actions such as instance creation failing because nova times out waiting on cinder.
Requirements to reproduce:
1: Ceph set up with an rbd backend
2: A single cinder-volume worker, to prevent the distributed nature from masking the problem
3: A method of creating a large volume and writing to it
Steps to verify volume will trigger issue on delete:
1: Get the UUID of the volume you have created and dirtied
2: Use the rbd command on your ceph cluster to delete the volume and verify it takes a couple of minutes to delete.
3: Delete the volume in cinder to cleanup cinder's database.
Steps to reproduce:
1: Create a volume that will take more than an instant to delete.
2: Delete the volume
3: Immediately attempt to create some volumes
Expected results:
Volumes create in a timely manner and become available
Volume delete processes and delete finishes in parallel
Actual results:
Volume creations are processed only after the delete has finished
Volume delete blocks the threads and must be processed first
As RBD commands consume a fair amount of CPU time, we should not simply background them, as that would present a DoS risk to the cinder-volume hosts.
One possible way to fix this would be to implement at least two queues that control the spawning of threads, reserving x of y threads for time-sensitive, fast tasks.
tags: | added: ceph rbd |
Changed in cinder: | |
importance: | Undecided → High |
Changed in cinder: | |
assignee: | nobody → Sachi King (nakato) |
Changed in cinder: | |
assignee: | Sachi King (nakato) → nobody |
tags: | removed: rbd |
Changed in cinder: | |
status: | New → Confirmed |
Changed in cinder: | |
assignee: | Eric Harney (eharney) → Ivan Kolodyazhny (e0ne) |
Changed in cinder: | |
status: | In Progress → Fix Committed |
Changed in cinder: | |
milestone: | none → liberty-2 |
status: | Fix Committed → Fix Released |
Changed in cinder: | |
milestone: | liberty-2 → 7.0.0 |
description: | updated |
tags: | added: sts-sponsor |
no longer affects: | cloud-archive |
description: | updated |
tags: | removed: sts-sponsor |
Changed in cloud-archive: | |
status: | New → Triaged |
assignee: | nobody → Matt Rae (mattrae) |
importance: | Undecided → Medium |
description: | updated |
summary: |
- rbd calls block eventlet threads + [SRU] rbd calls block eventlet threads |
tags: | added: sts sts-sru-needed |
Thanks for reporting; this looks similar to (though not necessarily a duplicate of) bug 1397264. It seems the Ceph RBD python wrapper has a lot of room to improve performance.