Hi,
I have tested this issue on the latest master and it is reproducible by an admin user as well, i.e., volumes remain stuck in the 'attaching' and 'detaching' states for both admin and normal users after running the nova volume-update command.
I have found one Launchpad issue related to migrate attach volume which is already fixed: https://bugs.launchpad.net/cinder/+bug/1316079
Two different patches were submitted for this issue, but the cinder patch (https://review.openstack.org/#/c/101932/1) was merged in Juno and the nova patch (https://review.openstack.org/#/c/101933/3) was merged in kilo-1.
So, volume update works properly on stable/juno because the nova patch was not merged there.
In the nova patch, the attach and detach calls were moved from the swap_volume() method of nova.compute.manager to the migrate_volume_completion() method of cinder.volume.manager.
IMO, while this cinder migrate attach volume issue was being fixed, nova volume-update was not tested properly.
When we migrate a volume which is attached to an instance, cinder calls the update_server_volume() method of nova from cinder.volume.manager, which internally calls the swap_volume() method of nova.compute.manager (the same method that is called in the case of nova volume-update): https://github.com/openstack/cinder/blob/master/cinder/volume/manager.py#L1078
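To make the call chain above concrete, here is a minimal Python sketch. The method names mirror the real ones (update_server_volume(), swap_volume()), but the classes and bodies are simplified stand-ins of my own, not actual OpenStack code:

```python
# Hedged sketch of the call chain: cinder's volume manager hands the swap
# to nova, and nova's compute manager performs it via swap_volume().
# All bodies here are simplified stand-ins, not real OpenStack code.

class FakeNovaAPI:
    def update_server_volume(self, instance_id, old_volume_id, new_volume_id):
        # In real nova this ends up invoking swap_volume() in
        # nova.compute.manager on the compute host.
        return self.swap_volume(old_volume_id, new_volume_id)

    def swap_volume(self, old_volume_id, new_volume_id):
        # Stand-in for nova.compute.manager's swap_volume(), which swaps
        # the attachment and then calls cinder's migrate_volume_completion().
        return ("swap", old_volume_id, new_volume_id)

def cinder_manager_migrate_volume(nova, instance_id, old_id, new_id):
    # Stand-in for cinder.volume.manager: for an attached volume, the
    # swap is delegated to nova via update_server_volume().
    return nova.update_server_volume(instance_id, old_id, new_id)
```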
In swap_volume() of nova.compute, the migrate_volume_completion() method of cinder.volume.manager is called, which deletes the old volume after detaching it in the cinder migrate attach volume case. In the nova volume-update case, however, the new_volume_id is returned from the migrate_volume_completion() method of cinder.volume.api itself (it never calls down to the manager, because the migration status of both the new volume and the old volume is None): https://github.com/openstack/cinder/blob/master/cinder/volume/api.py#L1136
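The short-circuit described above can be illustrated with a small self-contained sketch (a simplified stand-in I wrote for illustration, not the real cinder code): when neither volume has a migration status, the API layer returns the new volume's id directly, so the manager-side detach/delete never runs and the volume statuses are left untouched.

```python
# Hedged sketch of the branching in the API-level migrate_volume_completion().
# The class and both functions are simplified stand-ins, not OpenStack code.

class Volume:
    def __init__(self, volume_id, migration_status=None, status="available"):
        self.id = volume_id
        self.migration_status = migration_status
        self.status = status

def migrate_volume_completion(old_volume, new_volume):
    """Simplified stand-in for the API-layer migrate_volume_completion()."""
    if old_volume.migration_status is None and new_volume.migration_status is None:
        # nova volume-update path: both migration statuses are None, so the
        # API short-circuits and returns the new volume id. The manager-side
        # cleanup below never runs, leaving the volumes stuck in their
        # 'attaching' / 'detaching' states.
        return new_volume.id
    # cinder migrate path: an RPC call to the volume manager would happen
    # here, which detaches and deletes the old volume.
    return call_manager(old_volume, new_volume)

def call_manager(old_volume, new_volume):
    # Stand-in for the manager-side completion: old volume goes away,
    # new volume becomes the attached one.
    old_volume.status = "deleted"
    new_volume.status = "in-use"
    return new_volume.id
```

Running the nova volume-update path through this sketch shows the symptom: the return value is the new volume id, but both statuses are exactly as they were before the call.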
Hence, the volumes remain stuck in the 'attaching' and 'detaching' states.