Volume snapshot created from a cirros image cannot be deleted with the CLI.
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
StarlingX | Invalid | Medium | haitao wang |
Bug Description
Brief Description
-----------------
A volume snapshot is created from a cirros image. After creation, the CLI command does not delete the snapshot, neither by snapshot ID nor by snapshot name.
Severity
--------
Major
Steps to Reproduce
------------------
From Horizon, create a cirros image.
Once created, go to the Actions drop-down list and create a volume.
Go to Project-
Once created, go to the Actions drop-down list and create a snapshot.
Go to Project-
Once the snapshot is created from the volume, go to the Actions drop-down list and launch an instance.
Try to delete the volume and snapshot with the CLI command.
[wrsroot@
[wrsroot@
START with options: [u'volume', u'snapshot', u'delete', u'42daf9b6-
command: volume snapshot delete -> openstackclient
Using auth plugin: password
END return value: 0
[wrsroot@
START with options: [u'volume', u'snapshot', u'delete', u'Volumefromcir
command: volume snapshot delete -> openstackclient
Using auth plugin: password
END return value: 0
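The log above shows `volume snapshot delete` returning 0 even though the snapshot survives. A minimal sketch of how this could be checked and worked around from the CLI (the `<snapshot-id>` placeholder is hypothetical; only a truncated ID appears in the log, and resetting snapshot state normally requires admin credentials):

```shell
# Confirm whether the snapshot still exists after the delete attempt.
openstack volume snapshot list

# Inspect its status; a snapshot stuck in "error_deleting" will silently
# survive delete requests.
openstack volume snapshot show <snapshot-id>

# If the status is stuck, reset it and retry the delete (admin-only action).
openstack volume snapshot set --state available <snapshot-id>
openstack volume snapshot delete <snapshot-id>
```

If an instance was launched from the snapshot's volume, Cinder may also refuse the delete because of the dependency; the `show` output's status field helps distinguish that case from a client-side failure.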
Expected Behavior
------------------
Running the CLI command should delete the volume snapshot created from the cirros image.
Actual Behavior
----------------
The volume snapshot is not deleted.
Reproducibility
---------------
100%
System Configuration
--------------------
Virtual multinode with external storage - 2 controllers + 2 computes + 2 storage nodes
Changed in starlingx:
assignee: Cindy Xie (xxie1) → haitao wang (hwang85)
Changed in starlingx:
status: Confirmed → Invalid
tags:
added: stx.2019.05 removed: stx.2019.03
tags:
added: stx.2.0 removed: stx.2019.05
This is likely upstream openstack behavior. It's not likely that this is something unique to StarlingX.
If this is standard openstack behavior, we may choose not to change it. Further investigation is required.
I wouldn't mark this as stx.2018.10 gating -- it's a very specific test scenario. Marking as stx.2019.03 instead.