0 byte image is created with instance snapshot if instance is booted using volume

Bug #1666450 reported by Aleksei Chekunov
This bug affects 4 people
Affects              Status     Importance  Assigned to      Milestone
Mirantis OpenStack   Won't Fix  High        MOS Maintenance
  10.0.x             Won't Fix  High        MOS Maintenance
  9.x                Won't Fix  High        MOS Maintenance

Bug Description

When I boot an instance from a volume and then take a snapshot of that instance, it creates a volume snapshot and also a 0-byte image in Glance. It is not possible to boot an instance from this image.

Steps to Reproduce:
1. Deploy with ceph backend (fuel 9.2)
2. Boot instance from volume
3. Create a snapshot of this instance

tags: added: customer-found
Revision history for this message
Eugene Nikanorov (enikanorov) wrote :

It is worth noting that the flavor probably has 0 as the disk size for the VMs.

Revision history for this message
Anton Chevychalov (achevychalov) wrote :

Please provide more information about the bug. I see no images with 0 size, even if I choose a flavor with 0 as the disk size for the VMs.

How did you find out about the zero-size image? Did you check this via the glance CLI? Please provide the output of the commands.

Revision history for this message
Anton Chevychalov (achevychalov) wrote :

Confirmed that the bug is present. It does not matter which flavor you choose.

Revision history for this message
Anton Chevychalov (achevychalov) wrote :

After closer investigation, this bug splits into two issues:

1. The 0-size image. That is normal and expected in this configuration. When you create a snapshot of an instance that was spawned from a Ceph volume, two things happen. First, a snapshot volume is created in Ceph (check the Project->Compute->Volumes->Volume Snapshots page). Second, a 0-size image is created in Glance, which is a link to the volume snapshot created before. That Glance image contains no data (just the address of the data), so it is normal for the image to have size 0.

2. According to the bug description, we have an issue with launching an instance from such an image. That could be a real problem, but I was not able to reproduce it. So I am setting this bug to Incomplete until we have more information from the customer side.
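As a side note, the glance output later in this thread shows checksum d41d8cd98f00b204e9800998ecf8427e, which is exactly the MD5 of an empty payload, so the checksum alone already tells you an image carries no data. A minimal Python check (the helper name is ours, purely illustrative):

```python
import hashlib

# MD5 of zero bytes; a Glance image with this checksum contains no data.
EMPTY_MD5 = hashlib.md5(b"").hexdigest()  # d41d8cd98f00b204e9800998ecf8427e

def looks_like_nodata_image(checksum, size):
    """Heuristic: does this Glance record carry no actual image data?"""
    return size == 0 or checksum == EMPTY_MD5
```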

Revision history for this message
Denis Meltsaykin (dmeltsaykin) wrote :

Moving to Invalid after a month in Incomplete without any feedback.

Revision history for this message
sean redmond (sean-redmond1) wrote :

Anton, I have just run into this and can confirm your findings here. However, if you launch an instance from the volume snapshot, and not from the 0-byte Glance image, the instance boots as expected.

I think the issue here is that we are creating both a 0-byte Glance image and a volume snapshot. The workflow should match the case where the root disk is not a volume, and the Glance image should contain the contents of the root disk, IMO.

Revision history for this message
Brin Zhang (zhangbailin) wrote :

Hi, Denis. I hit the same problem as Aleksei, following the same steps. Here is the information gathered via the glance CLI.
[root@node16 ~]# glance image-show 14443701-5003-485b-b4fb-a0d752cf9161
+----------------------+----------------------------------------------------+
| Property             | Value                                              |
+----------------------+----------------------------------------------------+
| base_image_ref       |                                                    |
| bdm_v2               | True                                               |
| block_device_mapping | [{"guest_format": null, "boot_index": 0, "delete_on_termination": false, "no_device": null, "snapshot_id": "ae6efbb0-15e5-4c57-80ec-c3ebbe516dc8", "device_name": "/dev/vda", "disk_bus": "virtio", "image_id": null, "source_type": "snapshot", "tag": null, "device_type": "disk", "volume_id": null, "destination_type": "volume", "volume_size": 1}, {"guest_format": null, "boot_index": null, "delete_on_termination": false, "no_device": null, "snapshot_id": "17fb03b9-ca1f-4940-a100-9f3d5f14bd22", "device_name": "/dev/vdb", "disk_bus": null, "image_id": null, "source_type": "snapshot", "tag": null, "device_type": null, "volume_id": null, "destination_type": "volume", "volume_size": 1}] |
| boot_roles           | admin,heat_stack_owner                             |
| checksum             | d41d8cd98f00b204e9800998ecf8427e                   |
| container_format     | bare                                               |
| created_at           | 2017-09-26T06:25:38Z                               |
| direct_url           | rbd://4d09a2e1-b911-43af-bd8e-e7eba02feb9f/images/14443701-5003-485b-b4fb-a0d752cf9161/snap |
| disk_format          | qcow2                                              |
| id                   | 14443701-5003-485b-b4fb-a0d752cf9161               |
| locations            | [{"url": "rbd://4d09a2e1-b911-43af-bd8e-e7eba02feb9f/images/14443701-5003-485b-b4fb-a0d752cf9161/snap", "metadata": {}}] |
| min_disk             | 2                                                  |
| min_ram              | 0                                                  |
| name                 | 1-kuaizhao ...


Revision history for this message
Brin Zhang (zhangbailin) wrote :

Another issue: using this 0-size Glance image to create a volume throws an error in cinder.volume.driver._try_execute(). I think this problem is caused by the 0-byte Glance image above; the log is attached.
Importing a volume with "rbd import --pool volumes --order 22 /var/lib/cinder/conversion/tmp_vlewa volume-67ae2b55-2279-4aa7-9b5e-653ad1488a17 --new-format --id admin --cluster ceph" works fine inside the cinder_volume container, so the command itself is OK.

Revision history for this message
Brin Zhang (zhangbailin) wrote :

I uploaded the attachment, but I did not see cinder-volume.log there; the log of the failed volume creation from the 0-byte image is below.

2017-09-29 16:30:10.399 31 INFO cinder.volume.manager [req-a28e3d76-c9ed-433e-a053-28bd32259122 ebed88398a18485783438e9d4c2220aa c85dd8ac9aed4579862874dbc9dc83b8 - default default] Copy volume to image completed successfully.
2017-09-29 17:45:40.347 31 INFO cinder.volume.flows.manager.create_volume [req-34a06c9d-8a44-451a-97bd-326a4473f35d ebed88398a18485783438e9d4c2220aa c85dd8ac9aed4579862874dbc9dc83b8 - default default] Volume 67ae2b55-2279-4aa7-9b5e-653ad1488a17: being created as image with specification: {'status': u'creating', 'image_location': (u'rbd://4d09a2e1-b911-43af-bd8e-e7eba02feb9f/images/14443701-5003-485b-b4fb-a0d752cf9161/snap', [{u'url': u'rbd://4d09a2e1-b911-43af-bd8e-e7eba02feb9f/images/14443701-5003-485b-b4fb-a0d752cf9161/snap', u'metadata': {}}]), 'volume_size': 10, 'volume_name': 'volume-67ae2b55-2279-4aa7-9b5e-653ad1488a17', 'image_id': '14443701-5003-485b-b4fb-a0d752cf9161', 'image_service': <cinder.image.glance.GlanceImageService object at 0x6635910>, 'image_meta': {u'status': u'active', u'file': u'/v2/images/14443701-5003-485b-b4fb-a0d752cf9161/file', u'virtual_size': None, u'name': u'1-kuaizhao', u'tags': [], u'container_format': u'bare', u'created_at': datetime.datetime(2017, 9, 26, 6, 25, 38, tzinfo=<iso8601.Utc>), u'disk_format': u'qcow2', u'locations': [{u'url': u'rbd://4d09a2e1-b911-43af-bd8e-e7eba02feb9f/images/14443701-5003-485b-b4fb-a0d752cf9161/snap', u'metadata': {}}], u'visibility': u'private', u'updated_at': datetime.datetime(2017, 9, 26, 6, 25, 39, tzinfo=<iso8601.Utc>), u'owner': u'c85dd8ac9aed4579862874dbc9dc83b8', u'protected': False, u'id': u'14443701-5003-485b-b4fb-a0d752cf9161', u'min_ram': 0, u'checksum': u'd41d8cd98f00b204e9800998ecf8427e', u'min_disk': 2, u'direct_url': u'rbd://4d09a2e1-b911-43af-bd8e-e7eba02feb9f/images/14443701-5003-485b-b4fb-a0d752cf9161/snap', 'properties': {u'boot_roles': u'admin,heat_stack_owner', u'bdm_v2': u'True', u'block_device_mapping': [{u'guest_format': None, u'boot_index': 0, u'no_device': None, u'volume_id': None, u'volume_size': 1, u'device_name': u'/dev/vda', u'disk_bus': u'virtio', u'image_id': None, u'source_type': u'snapshot', u'tag': None, u'device_type': u'disk', u'snapshot_id': 
u'ae6efbb0-15e5-4c57-80ec-c3ebbe516dc8', u'destination_type': u'volume', u'delete_on_termination': False}, {u'guest_format': None, u'boot_index': None, u'no_device': None, u'volume_id': None, u'volume_size': 1, u'device_name': u'/dev/vdb', u'disk_bus': None, u'image_id': None, u'source_type': u'snapshot', u'tag': None, u'device_type': None, u'snapshot_id': u'17fb03b9-ca1f-4940-a100-9f3d5f14bd22', u'destination_type': u'volume', u'delete_on_termination': False}], u'root_device_name': u'/dev/vda', u'base_image_ref': u''}, u'size': 0}}
2017-09-29 17:45:40.658 31 INFO cinder.image.image_utils [req-34a06c9d-8a44-451a-97bd-326a4473f35d ebed88398a18485783438e9d4c2220aa c85dd8ac9aed4579862874dbc9dc83b8 - default default] Image download 0.00 MB at 0.00 MB/s
2017-09-29 17:45:48.240 31 INFO cinder.image.image_utils [req-34a06c9d-8a44-451a-97bd-326a4473f35d e...

Revision history for this message
Brin Zhang (zhangbailin) wrote :

Has there been any progress on this bug?

Revision history for this message
Denis Meltsaykin (dmeltsaykin) wrote :

We cannot reproduce the issue. If you still experience problems, please create a diagnostic snapshot and share it.

Revision history for this message
Brin Zhang (zhangbailin) wrote :

Thank you for your reply. The OpenStack version I use is Ocata. I can reproduce this issue in any environment, but my environment is on an internal network and cannot be shared with you.
So I prepared a document, and I hope it will help you.

Revision history for this message
Oleksiy Molchanov (omolchanov) wrote :

I have tried to reproduce this on Mitaka 9.2. I did:

1. Create volume from image
2. Boot instance from volume
3. Create snapshot from this instance
4. Boot instance from snapshot

I actually got a 0-byte image after snapshotting, but step #4 worked.

On the other hand, I faced an issue when I tried to create a volume from this 0-byte image.

Revision history for this message
Brin Zhang (zhangbailin) wrote :

Yes, I am on Ocata (Cinder 10.0.3, Nova 15.0.6, Glance 14.0).
I have faced the volume-creation error from the 0-byte image too; screenshots are in the document above. And step #4 did not work for me.

Revision history for this message
Dmitry Sutyagin (dsutyagin) wrote :

I confirm the issue. Tested on MOS 7.0 and MOS 9.0 deployments: the issue is not present in MOS 7.0, but is present in MOS 9.0. It looks like a regression between Kilo and Mitaka, related to the Ceph drivers for Glance and/or Cinder.

tags: added: support
Revision history for this message
Dmitry Sutyagin (dsutyagin) wrote :

Specifically, "Create Volume" from such a 0-byte snapshot works in MOS 7.0 but not in MOS 9.0 (and 9.2).

Revision history for this message
Denis Meltsaykin (dmeltsaykin) wrote :

This is because of the commit https://github.com/openstack/python-glanceclient/commit/44d0b02c67ce7926f40377d9367a0f61124ed26d, which fixes https://bugs.launchpad.net/glance/+bug/1472449. I'm not sure how it was working in Kilo; maybe there is some other reason as well. So far it is unclear to me how this should be fixed, since no-data images (they are like links to the real storage outside of Glance) are considered normal for Glance, but Glance doesn't return their data in the "data" call.
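Until that is resolved, a client-side guard is about the only option: detect such no-data images before trying to download them. A hypothetical sketch based only on the metadata fields visible in this thread (the function is ours, not part of any OpenStack library):

```python
def is_volume_backed_snapshot(image_meta):
    """Guess whether a Glance record is just a pointer to a volume snapshot.

    Heuristic from the metadata seen in this bug: such images report size 0
    and carry a block_device_mapping property instead of actual data.
    """
    props = image_meta.get("properties") or {}
    return image_meta.get("size") == 0 and "block_device_mapping" in props

# Trimmed metadata of the image from this bug report
meta = {
    "size": 0,
    "checksum": "d41d8cd98f00b204e9800998ecf8427e",
    "properties": {"block_device_mapping": "[...]"},
}
```

For images that match, the sensible fallback is to create the volume from the Cinder snapshot referenced in block_device_mapping instead of downloading the empty Glance payload.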

Revision history for this message
Denis Meltsaykin (dmeltsaykin) wrote :

To fix this, we would either have to reinvent how Cinder downloads images from Glance when creating a volume, or refactor Nova's snapshot-from-volume code to provide valid metadata to Glance when creating a snapshot of a volume. Both would take a solid amount of effort and testing, so I'm not sure we should fix this at all; it is a cosmetic issue and perhaps a misunderstanding. Creating a volume from a volume snapshot (they live in Cinder) has always worked like a charm.
I would recommend simply avoiding creating volumes from a zero-byte Glance image that represents a snapshot. Just create a volume from the volume snapshot in Cinder and ignore the zero-byte Glance image. I believe those zero-sized Glance images date from very old times and were intended for another use case.

I'm closing this as Won't Fix. If you have objections, feel free to re-open the bug report with a good explanation of why you cannot use volume snapshots and why these 0-sized Glance images are essential for your case.

Changed in mos:
status: Confirmed → Won't Fix
Revision history for this message
Brin Zhang (zhangbailin) wrote :

I do use this use case, but your explanation is very good too. If time permits, I think the code behind this probably needs to be reworked.
Thank you very much.

Revision history for this message
Dmitry Sutyagin (dsutyagin) wrote :

Our customer managed to find the root cause and a solution to this problem in another bug: https://bugs.launchpad.net/cinder/+bug/1523230. enable_force_upload=True should be set for Cinder when Ceph is the backend, which restores the original functionality available in Kilo and earlier. We should conditionally add this parameter to our deployment manifest.

Changed in mos:
status: Won't Fix → Confirmed
Revision history for this message
Denis Meltsaykin (dmeltsaykin) wrote :

Dmitry, I don't understand how this should help, since I don't see any call to this function. Did you try it in a lab?

Revision history for this message
Dmitry Sutyagin (dsutyagin) wrote :

Denis,

I assumed the customer's statement was true; my bad. I have tested it, and indeed it does not help.

What I also found out is that if "Create a new Volume" is used to spawn an instance from a fake 0-byte image created via "Instance Snapshot", it works. This makes sense, since in that case Cinder does the disk setup, not Glance, and Cinder is able to get the snapshot id from the image's metadata and create a new volume from the actual volume snapshot, which is also created when "Instance Snapshot" is used.

So "Instance Snapshot" can be used; you just have to always use "Create a new Volume" with such images, and use the volume snapshot (which is also created) to create a volume, not the 0-byte image.
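That workaround can be scripted: the boot volume's snapshot id is recoverable from the image's block_device_mapping property, so tooling can always go to the Cinder volume snapshot instead of the 0-byte image. A sketch (the helper name is ours; the sample mapping is trimmed from the glance image-show output earlier in this thread):

```python
import json

def boot_snapshot_id(block_device_mapping_json):
    """Return the Cinder snapshot id of the boot disk (boot_index 0), if any."""
    for bdm in json.loads(block_device_mapping_json):
        if bdm.get("boot_index") == 0 and bdm.get("source_type") == "snapshot":
            return bdm["snapshot_id"]
    return None

# Trimmed block_device_mapping from the glance image-show output above
bdm_json = json.dumps([
    {"boot_index": 0, "source_type": "snapshot",
     "snapshot_id": "ae6efbb0-15e5-4c57-80ec-c3ebbe516dc8",
     "destination_type": "volume", "volume_size": 1},
    {"boot_index": None, "source_type": "snapshot",
     "snapshot_id": "17fb03b9-ca1f-4940-a100-9f3d5f14bd22",
     "destination_type": "volume", "volume_size": 1},
])
```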

Setting back to Won't Fix.

Changed in mos:
status: Confirmed → Won't Fix
Revision history for this message
Dmitry Sutyagin (dsutyagin) wrote :

I've created a separate bug for the failure to create a volume from a 0-byte image - https://bugs.launchpad.net/mos/+bug/1733450
