After unrescuing an instance, deleting a detached rbd volume goes wrong
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Compute (nova) | Incomplete | Undecided | Unassigned | |
Bug Description
This does not concern the rescue image volume; it concerns a volume (a Ceph rbd device, e.g. vdb) that was attached to the instance before the rescue.
After unrescuing the instance, trying to delete that volume once it has been detached goes wrong.
Specifically:
1. The Ceph rbd image still has a watcher registered; you can see this with the command: rbd status volumes/volume-uuid (see the check sketched after this list). While in this state, the rbd image cannot be removed.
2. The volume's info and status in the database were updated.
3. The instance can still see and use this rbd device; you can format, mount, and read/write the disk. But if you reboot the instance, the disk disappears.
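For reference, a minimal sketch of how the watcher symptom can be checked from the Ceph side. The pool name "volumes" and the volume UUID are placeholders; the exact image name depends on the deployment:

  # list any remaining watchers on the image backing the detached volume
  $ rbd status volumes/volume-<volume-uuid>
  # while a watcher is still registered, removing the image fails with an
  # "image still has watchers" style error, which is what blocks the volume delete
  $ rbd rm volumes/volume-<volume-uuid>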
The tempest test case and the error report:
tempest.
ERROR cinder.
Steps to reproduce:
1. nova volume-attach instance-uuid volume-uuid
2. nova rescue --password admin --image image-uuid instance-uuid
3. nova unrescue instance-uuid
4. nova volume-detach instance-uuid volume-uuid
Then check the Ceph rbd status to see whether the image still has a watcher (see the sketch after this list).
5. If the rbd image still has a watcher, any operation to delete the volume will fail.
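A concrete version of the sequence above with the follow-up checks, as a minimal sketch. All UUIDs are placeholders, and the pool name "volumes" is an assumption about the deployment:

  $ nova volume-attach <instance-uuid> <volume-uuid>
  $ nova rescue --password admin --image <image-uuid> <instance-uuid>
  $ nova unrescue <instance-uuid>
  $ nova volume-detach <instance-uuid> <volume-uuid>
  # step 4 check: does the backing rbd image still have a watcher?
  $ rbd status volumes/volume-<volume-uuid>
  # step 5: while the watcher remains, deleting the volume fails
  $ cinder delete <volume-uuid>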
tags: added: ceph liberty rdb
I found the same bug reported here: https://bugzilla.redhat.com/show_bug.cgi?id=1303549
Same phenomenon and almost the same software versions, but following the steps described there I failed to reproduce it.