Ceph volumes attached to local deleted instance could not be correctly handled
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | New | Undecided | Unassigned |
OpenStack Compute (nova) | Invalid | Undecided | Unassigned |
Bug Description
How to reproduce:
1. Launch an instance.
2. Create a volume with the Ceph backend.
3. Attach the volume created in step 2 to the instance.
4. Kill nova-compute.
5. Delete the instance; the deletion goes down the local_delete path.
6. Check volume status using "cinder list"; the volume is in "available" status.
7. Try to delete the volume; it fails:
2017-03-14 11:40:41.050 DEBUG oslo_messaging.
2017-03-14 11:40:41.056 DEBUG cinder.coordination req-774b4680-
2017-03-14 11:40:41.155 DEBUG cinder.
2017-03-14 11:40:42.376 DEBUG cinder.
2017-03-14 11:40:42.377 DEBUG cinder.
2017-03-14 11:40:42.382 DEBUG cinder.
2017-03-14 11:40:42.570 DEBUG cinder.utils req-774b4680-
...
2017-03-14 11:41:12.950 WARNING cinder.
2017-03-14 11:41:12.955 ERROR cinder.
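The reproduction steps above can be sketched with the legacy nova/cinder CLIs. This is a sketch only: it assumes a devstack-style single node, and names such as `vm1`, `vol1`, the flavor, the image, and the systemd unit are placeholders for whatever exists in the deployment.

```shell
# 1. Launch an instance (flavor/image names are deployment-specific).
nova boot --flavor m1.small --image cirros vm1

# 2. Create a 1 GB volume on the Ceph (RBD) backend.
cinder create --name vol1 1

# 3. Attach the volume to the instance (substitute the real volume UUID).
nova volume-attach vm1 <volume-uuid>

# 4. Kill nova-compute on the host so the delete cannot reach it.
sudo systemctl stop devstack@n-cpu   # or: pkill -f nova-compute

# 5. Delete the instance; with the compute service down, the API
#    falls back to the local_delete path.
nova delete vm1

# 6. The volume is reported "available" again.
cinder list

# 7. Deleting the volume now fails on the Ceph backend.
cinder delete vol1
```

Presumably the failure occurs because local_delete never tears down the connection on the (dead) compute host, so the RBD image is not cleanly released before Cinder tries to remove it.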
summary: Volumes attached to local deleted ceph volume could not be correctly handled → Ceph volumes attached to local deleted instance could not be correctly handled
description: updated
affects: nova → cinder
affects: cinder → nova
Adding the cells tag, as the cells work has added many new code paths for local-delete cases. We are seeing errors around inconsistent handling of quotas, notifications, and other things that hit this code path.