Comment 13 for bug 1464259

Matt Riedemann (mriedem) wrote:

Added nova since I think this is a bug in nova.

In the test_create_ebs_image_and_check_boot test scenario (sketched in code after the list):

1. create volume1 from an image
2. boot server1 from volume1 (wait for the server to be ACTIVE)
3. create snapshot from server1 (creates image and volume snapshots)
4. delete server1
5. create server2 from the image snapshot (don't wait for it to be ACTIVE)
6. delete server2 (could still be building/attaching volumes in the background)
7. cleanup
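
Roughly, the flow looks like this (a minimal sketch; "client" and its helper names are hypothetical stand-ins for the tempest service clients, not the actual test code):

# Hedged sketch of the scenario flow above; helper names are hypothetical.
def run_scenario(client, image_ref):
    volume1 = client.create_volume(image_ref=image_ref)              # 1. volume from image
    server1 = client.boot_from_volume(volume1, wait_until='ACTIVE')  # 2. boot and wait
    snapshot_image = client.snapshot_server(server1)                 # 3. image + volume snapshots
    client.delete_server(server1)                                    # 4. delete server1
    server2 = client.create_server(image=snapshot_image)             # 5. no wait for ACTIVE
    client.delete_server(server2)                                    # 6. may still be building
    # 7. remaining volumes/snapshots are removed in class-level cleanup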

Because the image snapshot carries the BDM information pointing at the volume1 snapshot, when we create server2 nova constructs a snapshot bdm from the image meta here:

https://github.com/openstack/nova/blob/master/nova/compute/api.py#L729
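
In rough terms that translation amounts to something like this (a simplified illustration of the behaviour, not the code at that line; the exact fields handled there are assumed):

# Hedged illustration: the image snapshot carries the original volume
# snapshot in its block_device_mapping image property, and nova turns each
# entry into a snapshot-sourced BDM for the new instance. There is no
# volume_id, because no volume exists for the new instance yet.
def bdms_from_image_meta(image_properties):
    bdms = []
    for entry in image_properties.get('block_device_mapping', []):
        bdms.append({
            'source_type': 'snapshot',
            'destination_type': 'volume',
            'snapshot_id': entry.get('snapshot_id'),
            'volume_id': None,  # no volume exists for this instance yet
            'boot_index': entry.get('boot_index', 0),
            'delete_on_termination': entry.get('delete_on_termination', True),
        })
    return bdms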

And when booting server2 we use that snapshot bdm to create volume2:

https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1727

https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L506

https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L391

That's where we see this logged:

http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-n-cpu.txt.gz#_2015-12-07_05_51_06_776

2015-12-07 05:51:06.776 INFO nova.virt.block_device [req-0467c440-1f5b-4a74-9ba8-d266d6b0b182 tempest-TestVolumeBootPatternV2-1635783358 tempest-TestVolumeBootPatternV2-232134174] [instance: 91817c91-6305-4e44-9f53-0eca5a27aa8d] Booting with volume None at /dev/vda

Since the snapshot bdm doesn't have a volume_id yet, it logs None.
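
Putting those pieces together, the behaviour is roughly this (a hedged, simplified sketch, not the real nova.virt.block_device code): the "Booting with volume" line is logged from the bdm's volume_id before the snapshot bdm has created its volume, so it prints None.

# Hedged sketch of the attach path for a snapshot-sourced bdm; not the actual
# nova code. volume_api stands in for nova's cinder API wrapper.
def log_and_attach(context, bdm, volume_api, logger):
    logger.info('Booting with volume %s at %s',
                bdm['volume_id'], bdm['device_name'])  # volume_id is None here

    if bdm['source_type'] == 'snapshot' and not bdm['volume_id']:
        snapshot = volume_api.get_snapshot(context, bdm['snapshot_id'])
        volume = volume_api.create(context, bdm['volume_size'],
                                   '', '', snapshot=snapshot)
        bdm['volume_id'] = volume['id']  # only now does the bdm know its volume
    # ... the volume is then attached to the instance as usual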

This is the volume that's created though:

http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-n-cpu.txt.gz#_2015-12-07_05_51_07_467

2015-12-07 05:51:07.467 DEBUG keystoneclient.session [req-0467c440-1f5b-4a74-9ba8-d266d6b0b182 tempest-TestVolumeBootPatternV2-1635783358 tempest-TestVolumeBootPatternV2-232134174] RESP: [202] Content-Length: 797 X-Compute-Request-Id: req-05b0da1b-3c95-41cd-a3c2-919b1ca9662d Connection: keep-alive Date: Mon, 07 Dec 2015 05:51:07 GMT Content-Type: application/json X-Openstack-Request-Id: req-05b0da1b-3c95-41cd-a3c2-919b1ca9662d
RESP BODY: {"volume": {"status": "creating", "user_id": "f3dc192b88054a3d80283f9696183e4a", "attachments": [], "links": [{"href": "http://127.0.0.1:8776/v2/37d3bb6fbd8b459ebd1459afe434900a/volumes/19efe5d8-fa71-4569-b806-dfe2e0080b7f", "rel": "self"}, {"href": "http://127.0.0.1:8776/37d3bb6fbd8b459ebd1459afe434900a/volumes/19efe5d8-fa71-4569-b806-dfe2e0080b7f", "rel": "bookmark"}], "availability_zone": "nova", "bootable": "false", "encrypted": false, "created_at": "2015-12-07T05:51:07.178269", "description": "", "updated_at": null, "volume_type": "ceph", "name": "", "replication_status": "disabled", "consistencygroup_id": null, "source_volid": null, "snapshot_id": "366fab34-8494-47ee-925e-1505b7521744", "multiattach": false, "metadata": {}, "id": "19efe5d8-fa71-4569-b806-dfe2e0080b7f", "size": 1}}
 _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:216

Since the test immediately deletes the server after it's created (but before it's fully active), we see things in the logs like this:

2015-12-07 05:51:15.251 DEBUG nova.compute.claims [req-0467c440-1f5b-4a74-9ba8-d266d6b0b182 tempest-TestVolumeBootPatternV2-1635783358 tempest-TestVolumeBootPatternV2-232134174] [instance: 91817c91-6305-4e44-9f53-0eca5a27aa8d] Aborting claim: [Claim: 64 MB memory, 0 GB disk] abort /opt/stack/new/nova/nova/compute/claims.py:120

2015-12-07 05:51:15.280 DEBUG nova.compute.utils [req-0467c440-1f5b-4a74-9ba8-d266d6b0b182 tempest-TestVolumeBootPatternV2-1635783358 tempest-TestVolumeBootPatternV2-232134174] [instance: 91817c91-6305-4e44-9f53-0eca5a27aa8d] Conflict updating instance 91817c91-6305-4e44-9f53-0eca5a27aa8d. Expected: {'task_state': [u'block_device_mapping']}. Actual: {'task_state': u'deleting'} notify_about_instance_usage /opt/stack/new/nova/nova/compute/utils.py:285

And we fail to detach the volume because we're checking for bdm.volume_id rather than snapshot_id:

http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-n-cpu.txt.gz#_2015-12-07_05_51_16_600

2015-12-07 05:51:16.600 DEBUG nova.compute.manager [req-6d0c06c0-2479-4854-87de-c9e4fc5d5cb4 tempest-TestVolumeBootPatternV2-1635783358 tempest-TestVolumeBootPatternV2-232134174] [instance: 91817c91-6305-4e44-9f53-0eca5a27aa8d] Ignoring VolumeNotFound: Volume None could not be found. _shutdown_instance /opt/stack/new/nova/nova/compute/manager.py:2317
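
Roughly, the shutdown path behaves like this (a hedged sketch, not the exact _shutdown_instance code): detach is driven by bdm.volume_id, which is None for our snapshot bdm, so the lookup fails and the error is swallowed.

# Hedged sketch of the detach-on-shutdown behaviour; not the real manager code.
class VolumeNotFound(Exception):
    """Stand-in for the real VolumeNotFound exception."""

def shutdown_volumes(context, bdms, volume_api, logger):
    for bdm in bdms:
        try:
            # bdm.volume_id is None for the snapshot bdm, so cinder reports
            # "Volume None could not be found"
            volume_api.terminate_connection(context, bdm.volume_id, connector=None)
            volume_api.detach(context, bdm.volume_id)
        except VolumeNotFound as exc:
            logger.debug('Ignoring VolumeNotFound: %s', exc)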

And because our bdm does not have volume_id set, we don't delete volume2 in cinder:

http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-n-cpu.txt.gz#_2015-12-07_05_51_16_651

2015-12-07 05:51:16.651 DEBUG nova.compute.manager [req-6d0c06c0-2479-4854-87de-c9e4fc5d5cb4 tempest-TestVolumeBootPatternV2-1635783358 tempest-TestVolumeBootPatternV2-232134174] [instance: 91817c91-6305-4e44-9f53-0eca5a27aa8d] terminating bdm BlockDeviceMapping(boot_index=0,connection_info='null',created_at=2015-12-07T05:51:06Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='volume',device_name='/dev/vda',device_type='disk',disk_bus='virtio',guest_format=None,id=101,image_id=None,instance=<?>,instance_uuid=91817c91-6305-4e44-9f53-0eca5a27aa8d,no_device=False,snapshot_id='366fab34-8494-47ee-925e-1505b7521744',source_type='snapshot',updated_at=2015-12-07T05:51:06Z,volume_id=None,volume_size=1) _cleanup_volumes /opt/stack/new/nova/nova/compute/manager.py:2341
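
The cleanup side has the same blind spot, roughly (again a hedged sketch, not the exact _cleanup_volumes code): only bdms that already have a volume_id get their volume deleted in cinder, so volume2, referenced only via snapshot_id at this point, is left behind.

# Hedged sketch of the volume cleanup behaviour; not the real manager code.
def cleanup_volumes(context, bdms, volume_api, logger):
    for bdm in bdms:
        logger.debug('terminating bdm %s', bdm)
        if bdm.volume_id and bdm.delete_on_termination:
            volume_api.delete(context, bdm.volume_id)
        # a snapshot-sourced bdm with volume_id=None falls through here,
        # leaving the freshly created volume2 behind in cinder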