Added nova since I think this is a bug in nova.
In the test_create_ebs_image_and_check_boot test scenario:
1. create volume1 from an image
2. boot server1 from volume1 (wait for the server to be ACTIVE)
3. create snapshot from server1 (creates image and volume snapshots)
4. delete server1
5. create server2 from the image snapshot (don't wait for it to be ACTIVE)
6. delete server2 (could still be building/attaching volumes in the background)
7. cleanup
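The ordering problem in steps 5-6 can be sketched in a few lines of toy Python (hypothetical names, not nova or tempest code; just the shape of the race): the boot call returns before the snapshot-backed volume exists, so a prompt server delete runs against a BDM that has no volume_id yet.

```python
# Toy model of the race (hypothetical names, not nova/tempest code).
pending = []   # background work nova still has queued for server2
volumes = {}   # what cinder ends up holding

def boot_server_from_snapshot(snapshot_id):
    # nova accepts the boot and creates the volume later, during the
    # block_device_mapping task state
    pending.append(lambda: volumes.setdefault(
        "volume2", {"snapshot_id": snapshot_id, "status": "creating"}))
    return "server2"

def delete_server(server_id, bdms):
    # cleanup only deletes volumes recorded on a bdm via volume_id;
    # the in-flight volume isn't recorded yet (volume_id is None)
    for bdm in bdms:
        volumes.pop(bdm.get("volume_id"), None)

server = boot_server_from_snapshot("366fab34-8494-47ee-925e-1505b7521744")
delete_server(server, bdms=[{"volume_id": None}])   # step 6: delete right away
for work in pending:   # the background build still completes
    work()
print(volumes)   # volume2 is left behind
```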
Because the image snapshot has the volume1 snapshot BDM information, when we create server2 it constructs a snapshot bdm from the image meta here:
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L729
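For illustration, the snapshot BDM that comes out of the image metadata looks roughly like this (a plain-dict sketch of the fields visible in the BlockDeviceMapping dumped at the end of this report, not nova's actual object):

```python
# Sketch of the bdm nova derives from the snapshot image's metadata;
# values copied from the BlockDeviceMapping dump in the n-cpu log.
snapshot_bdm = {
    "source_type": "snapshot",
    "destination_type": "volume",
    "boot_index": 0,
    "device_name": "/dev/vda",
    "snapshot_id": "366fab34-8494-47ee-925e-1505b7521744",
    "volume_id": None,   # unknown until cinder creates the volume
    "delete_on_termination": True,
    "volume_size": 1,
}
print(snapshot_bdm["volume_id"])
```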
And when booting server2 we use that snapshot bdm to create volume2:
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1727
https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L506
https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L391
That's where we see this logged:
http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-n-cpu.txt.gz#_2015-12-07_05_51_06_776
2015-12-07 05:51:06.776 INFO nova.virt.block_device [req-0467c440-1f5b-4a74-9ba8-d266d6b0b182 tempest-TestVolumeBootPatternV2-1635783358 tempest-TestVolumeBootPatternV2-232134174] [instance: 91817c91-6305-4e44-9f53-0eca5a27aa8d] Booting with volume None at /dev/vda
Since the snapshot bdm doesn't have a volume_id it logs None.
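That message just interpolates the BDM's volume_id, so a snapshot-sourced BDM renders as None. A minimal reproduction of the formatting (hypothetical helper; nova's actual log call lives in nova/virt/block_device.py):

```python
def boot_message(bdm, mountpoint):
    # mirrors the "Booting with volume ... at ..." INFO line: the id
    # slot is filled from the bdm, which has no volume_id yet
    return "Booting with volume %s at %s" % (bdm.get("volume_id"), mountpoint)

snapshot_bdm = {"source_type": "snapshot",
                "snapshot_id": "366fab34-8494-47ee-925e-1505b7521744",
                "volume_id": None}
print(boot_message(snapshot_bdm, "/dev/vda"))
# -> Booting with volume None at /dev/vda
```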
This is the volume that's created though:
http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-n-cpu.txt.gz#_2015-12-07_05_51_07_467
2015-12-07 05:51:07.467 DEBUG keystoneclient.session [req-0467c440-1f5b-4a74-9ba8-d266d6b0b182 tempest-TestVolumeBootPatternV2-1635783358 tempest-TestVolumeBootPatternV2-232134174] RESP: [202] Content-Length: 797 X-Compute-Request-Id: req-05b0da1b-3c95-41cd-a3c2-919b1ca9662d Connection: keep-alive Date: Mon, 07 Dec 2015 05:51:07 GMT Content-Type: application/json X-Openstack-Request-Id: req-05b0da1b-3c95-41cd-a3c2-919b1ca9662d
RESP BODY: {"volume": {"status": "creating", "user_id": "f3dc192b88054a3d80283f9696183e4a", "attachments": [], "links": [{"href": "http://127.0.0.1:8776/v2/37d3bb6fbd8b459ebd1459afe434900a/volumes/19efe5d8-fa71-4569-b806-dfe2e0080b7f", "rel": "self"}, {"href": "http://127.0.0.1:8776/37d3bb6fbd8b459ebd1459afe434900a/volumes/19efe5d8-fa71-4569-b806-dfe2e0080b7f", "rel": "bookmark"}], "availability_zone": "nova", "bootable": "false", "encrypted": false, "created_at": "2015-12-07T05:51:07.178269", "description": "", "updated_at": null, "volume_type": "ceph", "name": "", "replication_status": "disabled", "consistencygroup_id": null, "source_volid": null, "snapshot_id": "366fab34-8494-47ee-925e-1505b7521744", "multiattach": false, "metadata": {}, "id": "19efe5d8-fa71-4569-b806-dfe2e0080b7f", "size": 1}}
_http_log_response /usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:216
Since the test deletes server2 immediately after the volume is created (but before the volume is fully available), we see things in the logs like this:
2015-12-07 05:51:15.251 DEBUG nova.compute.claims [req-0467c440-1f5b-4a74-9ba8-d266d6b0b182 tempest-TestVolumeBootPatternV2-1635783358 tempest-TestVolumeBootPatternV2-232134174] [instance: 91817c91-6305-4e44-9f53-0eca5a27aa8d] Aborting claim: [Claim: 64 MB memory, 0 GB disk] abort /opt/stack/new/nova/nova/compute/claims.py:120
2015-12-07 05:51:15.280 DEBUG nova.compute.utils [req-0467c440-1f5b-4a74-9ba8-d266d6b0b182 tempest-TestVolumeBootPatternV2-1635783358 tempest-TestVolumeBootPatternV2-232134174] [instance: 91817c91-6305-4e44-9f53-0eca5a27aa8d] Conflict updating instance 91817c91-6305-4e44-9f53-0eca5a27aa8d. Expected: {'task_state': [u'block_device_mapping']}. Actual: {'task_state': u'deleting'} notify_about_instance_usage /opt/stack/new/nova/nova/compute/utils.py:285
And we fail to detach the volume because we're checking for bdm.volume_id rather than snapshot_id:
http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-n-cpu.txt.gz#_2015-12-07_05_51_16_600
2015-12-07 05:51:16.600 DEBUG nova.compute.manager [req-6d0c06c0-2479-4854-87de-c9e4fc5d5cb4 tempest-TestVolumeBootPatternV2-1635783358 tempest-TestVolumeBootPatternV2-232134174] [instance: 91817c91-6305-4e44-9f53-0eca5a27aa8d] Ignoring VolumeNotFound: Volume None could not be found. _shutdown_instance /opt/stack/new/nova/nova/compute/manager.py:2317
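In other words, the shutdown path passes bdm.volume_id (None) to cinder and swallows the resulting VolumeNotFound. A simplified sketch of that behavior (hypothetical names; the real logic is _shutdown_instance in nova/compute/manager.py):

```python
class VolumeNotFound(Exception):
    pass

def detach_volume(volume_id, cinder_volumes):
    # stand-in for the cinder call; unknown ids raise VolumeNotFound
    if volume_id not in cinder_volumes:
        raise VolumeNotFound("Volume %s could not be found." % volume_id)

def shutdown_instance(bdms, cinder_volumes):
    ignored = []
    for bdm in bdms:
        try:
            # the snapshot-sourced bdm still has volume_id=None, so we
            # effectively ask cinder about volume "None"
            detach_volume(bdm["volume_id"], cinder_volumes)
        except VolumeNotFound as exc:
            ignored.append(str(exc))   # logged as "Ignoring VolumeNotFound: ..."
    return ignored

cinder = {"19efe5d8-fa71-4569-b806-dfe2e0080b7f"}   # volume2 does exist
print(shutdown_instance([{"volume_id": None}], cinder))
# -> ['Volume None could not be found.']
```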
And because our bdm does not have volume_id set, we don't delete volume2 in cinder:
http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-n-cpu.txt.gz#_2015-12-07_05_51_16_651
2015-12-07 05:51:16.651 DEBUG nova.compute.manager [req-6d0c06c0-2479-4854-87de-c9e4fc5d5cb4 tempest-TestVolumeBootPatternV2-1635783358 tempest-TestVolumeBootPatternV2-232134174] [instance: 91817c91-6305-4e44-9f53-0eca5a27aa8d] terminating bdm BlockDeviceMapping(boot_index=0,connection_info='null',created_at=2015-12-07T05:51:06Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='volume',device_name='/dev/vda',device_type='disk',disk_bus='virtio',guest_format=None,id=101,image_id=None,instance=<?>,instance_uuid=91817c91-6305-4e44-9f53-0eca5a27aa8d,no_device=False,snapshot_id='366fab34-8494-47ee-925e-1505b7521744',source_type='snapshot',updated_at=2015-12-07T05:51:06Z,volume_id=None,volume_size=1) _cleanup_volumes /opt/stack/new/nova/nova/compute/manager.py:2341
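_cleanup_volumes has the same blind spot: it only deletes by bdm.volume_id, so volume2 is orphaned in cinder even though delete_on_termination=True. One possible direction for a fix, sketched below, is to fall back to looking the volume up by its snapshot_id; that fallback is my assumption about a remedy, not something nova does today:

```python
def cleanup_volumes(bdms, cinder_volumes):
    """Sketch of delete-on-termination cleanup (not nova's actual code).

    cinder_volumes maps volume_id -> volume attributes (incl. snapshot_id).
    Returns the volume ids deleted.
    """
    deleted = []
    for bdm in bdms:
        if not bdm.get("delete_on_termination"):
            continue
        vol_id = bdm.get("volume_id")
        if vol_id is None and bdm.get("snapshot_id"):
            # hypothetical fix: recover the in-flight volume via the
            # snapshot it was created from
            vol_id = next((vid for vid, vol in cinder_volumes.items()
                           if vol.get("snapshot_id") == bdm["snapshot_id"]),
                          None)
        if vol_id in cinder_volumes:
            del cinder_volumes[vol_id]
            deleted.append(vol_id)
    return deleted

cinder = {"19efe5d8-fa71-4569-b806-dfe2e0080b7f":
          {"snapshot_id": "366fab34-8494-47ee-925e-1505b7521744"}}
bdm = {"source_type": "snapshot", "delete_on_termination": True,
       "snapshot_id": "366fab34-8494-47ee-925e-1505b7521744",
       "volume_id": None}
print(cleanup_volumes([bdm], cinder))
# -> ['19efe5d8-fa71-4569-b806-dfe2e0080b7f']
```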