Manage existing volume failures do not clean up quota usage
Bug #1847791 reported by Tobias Urdin
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Fix Released | Undecided | Unassigned |
Bug Description
We are using the openstacksdk to manage an existing volume in our Ceph (RBD) backend.
The manage operation fails in cinder-volume because the image does not exist on the backend, which is expected.
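For reference, the failing flow can be reproduced with the cinder CLI. The host/backend/pool string and the image name below are illustrative placeholders, not values from the report:

```shell
# Ask Cinder to adopt an RBD image that does not actually exist on the
# backend. "cinder-host@rbd#ssd" and "nonexistent-rbd-image" are
# placeholders for a real host@backend#pool string and source image.
cinder manage --name adopted-vol --volume-type ssd \
    --id-type source-name \
    cinder-host@rbd#ssd nonexistent-rbd-image

# cinder-volume cannot find the image, so the volume ends up in an
# error state; per this bug, quota_usages has already been incremented
# and deleting the failed volume does not release it.
cinder delete adopted-vol
```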
The issue is that the quota becomes wrong: the failed manage increases the `volumes` and `volumes_ssd` resources in the `quota_usages` database table (`volumes_ssd` presumably because our volume type is named ssd), but when we issue a delete on the failed volume this usage is not released.
Repeating this eventually exhausts the quota. I have not been able to pinpoint the exact location where the usage fails to be removed (or why).
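To illustrate the accounting involved, here is a minimal sketch of a reservation-style quota scheme like the one Cinder uses (this is simplified illustration code, not Cinder's actual implementation): a request reserves quota up front, then must either commit the reservation on success or roll it back on failure. If the failure path commits (or simply forgets to roll back), `quota_usages` stays inflated exactly as described in this bug:

```python
# Simplified reservation-quota model: usage per resource is tracked as
# (in_use, reserved), mirroring the columns in Cinder's quota_usages table.
from dataclasses import dataclass


@dataclass
class QuotaUsage:
    in_use: int = 0
    reserved: int = 0


class Quotas:
    def __init__(self):
        self.usages = {}

    def reserve(self, **deltas):
        """Reserve quota deltas; returns a reservation to commit or roll back."""
        for res, delta in deltas.items():
            self.usages.setdefault(res, QuotaUsage()).reserved += delta
        return deltas

    def commit(self, reservation):
        """Success path: move reserved quota into in_use."""
        for res, delta in reservation.items():
            usage = self.usages[res]
            usage.reserved -= delta
            usage.in_use += delta

    def rollback(self, reservation):
        """Failure path: release the reservation entirely."""
        for res, delta in reservation.items():
            self.usages[res].reserved -= delta


quotas = Quotas()

# A manage-existing request reserves quota before touching the backend.
rsv = quotas.reserve(volumes=1, volumes_ssd=1)

backend_found = False  # the RBD image does not exist, as in this bug
if backend_found:
    quotas.commit(rsv)
else:
    # Skipping this rollback (or committing instead) is the kind of bug
    # that leaves quota_usages inflated after the error volume is deleted.
    quotas.rollback(rsv)

print(quotas.usages["volumes"].in_use, quotas.usages["volumes"].reserved)
# → 0 0
```

With a correct rollback both counters return to zero; the behaviour reported here is consistent with the failure path leaving the usage committed instead.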
Cinder version is Rocky:
python2-cinderclient-4.0.1-1.el7.noarch
python-cinder-13.0.0-1.el7.noarch
openstack-cinder-13.0.0-1.el7.noarch