Snapshot failure with VMwareVCDriver

Bug #1184807 reported by dan wendlandt
This bug affects 4 people
Affects / Status / Importance / Assigned to:
OpenStack Compute (nova) - Fix Released / High / Vui Lam
Grizzly - Fix Released / High / Gary Kotton
VMwareAPI-Team - Fix Committed / High / Vui Lam

Bug Description

I am unable to get snapshots working in my dev setup with the VCDriver.

The snapshot API call claims to succeed, but we get an internal exception (below) and the snapshot stays in 'queued' status in Horizon.

The relevant code is here: https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vmops.py#L515

It seems like the underlying snapshot succeeds, but the attempt to copy the disk afterward fails. Browsing the datastore, I see that a vmware-tmp directory was created, but I do not see any files in it.

space/stack/nova/nova/openstack/common/rpc/amqp.py:337
2013-05-27 17:15:04.615 DEBUG nova.virt.vmwareapi.driver [-] Task [CreateSnapshot_Task] (returnval){
   value = "task-123"
   _type = "Task"
 } status: success from (pid=4595) _poll_task /extraspace/stack/nova/nova/virt/vmwareapi/driver.py:576
2013-05-27 17:15:04.615 DEBUG nova.virt.vmwareapi.vmops [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] [instance: 0d044e1b-074b-47de-9002-de5d87230aa5] Created Snapshot of the VM instance from (pid=4595) _create_vm_snapshot /extraspace/stack/nova/nova/virt/vmwareapi/vmops.py:477
2013-05-27 17:15:04.616 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] Making synchronous call on conductor ... from (pid=4595) multicall /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:586
2013-05-27 17:15:04.616 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] MSG_ID is 43832e8692c64fdeabba8b34c531b682 from (pid=4595) multicall /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:589
2013-05-27 17:15:04.617 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] UNIQUE_ID is ff2a695783ba459e8aef5a0097eefb95. from (pid=4595) _add_unique_id /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:337
2013-05-27 17:15:04.981 DEBUG nova.openstack.common.lockutils [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] Got semaphore "compute_resources" for method "update_usage"... from (pid=4595) inner /extraspace/stack/nova/nova/openstack/common/lockutils.py:186
2013-05-27 17:15:04.982 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] Making synchronous call on conductor ... from (pid=4595) multicall /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:586
2013-05-27 17:15:04.982 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] MSG_ID is 8de1b35ad1624e4cbbd25dca3b7ff41e from (pid=4595) multicall /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:589
2013-05-27 17:15:04.983 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] UNIQUE_ID is 96b0cd1f45e84bc486863e9bab4b1b8f. from (pid=4595) _add_unique_id /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:337
2013-05-27 17:15:05.174 DEBUG nova.virt.vmwareapi.vmops [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] [instance: 0d044e1b-074b-47de-9002-de5d87230aa5] Copying disk data before snapshot of the VM from (pid=4595) _copy_vmdk_content /extraspace/stack/nova/nova/virt/vmwareapi/vmops.py:522
2013-05-27 17:15:05.222 WARNING nova.virt.vmwareapi.driver [-] Task [CopyVirtualDisk_Task] (returnval){
   value = "task-124"
   _type = "Task"
 } status: error The requested operation is not implemented by the server.
2013-05-27 17:15:05.224 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] Making synchronous call on conductor ... from (pid=4595) multicall /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:586
2013-05-27 17:15:05.224 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] MSG_ID is 838e0c86a1b04e46856ed43797442f6f from (pid=4595) multicall /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:589
2013-05-27 17:15:05.225 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] UNIQUE_ID is bcb86dcabbb040bf81bfa0a0676e4b14. from (pid=4595) _add_unique_id /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:337
2013-05-27 17:15:05.244 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] Making synchronous call on conductor ... from (pid=4595) multicall /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:586
2013-05-27 17:15:05.244 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] MSG_ID is 41c6cb58361f4686b717f3e3f3074178 from (pid=4595) multicall /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:589
2013-05-27 17:15:05.244 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] UNIQUE_ID is c732adbadd424eb1ac56d2e259041600. from (pid=4595) _add_unique_id /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:337
2013-05-27 17:15:05.624 DEBUG nova.openstack.common.lockutils [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] Got semaphore "compute_resources" for method "update_usage"... from (pid=4595) inner /extraspace/stack/nova/nova/openstack/common/lockutils.py:186
2013-05-27 17:15:05.625 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] Making synchronous call on conductor ... from (pid=4595) multicall /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:586
2013-05-27 17:15:05.626 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] MSG_ID is 206d42da6b5947caaaa3473d80744544 from (pid=4595) multicall /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:589
2013-05-27 17:15:05.626 DEBUG nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] UNIQUE_ID is 2ab81258b1c8413e83eb278d4d3539a2. from (pid=4595) _add_unique_id /extraspace/stack/nova/nova/openstack/common/rpc/amqp.py:337
2013-05-27 17:15:05.672 ERROR nova.openstack.common.rpc.amqp [req-5a19f94a-6e87-4196-8bbd-4ea396e2f04f demo demo] Exception during message handling
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/extraspace/stack/nova/nova/openstack/common/rpc/amqp.py", line 433, in _process_data
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp **args)
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/extraspace/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 148, in dispatch
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp return getattr(proxyobj, method)(ctxt, **kwargs)
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/extraspace/stack/nova/nova/exception.py", line 98, in wrapped
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp temp_level, payload)
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp self.gen.next()
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/extraspace/stack/nova/nova/exception.py", line 75, in wrapped
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp return f(self, context, *args, **kw)
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/extraspace/stack/nova/nova/compute/manager.py", line 214, in decorated_function
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp pass
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp self.gen.next()
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/extraspace/stack/nova/nova/compute/manager.py", line 200, in decorated_function
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs)
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/extraspace/stack/nova/nova/compute/manager.py", line 242, in decorated_function
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp e, sys.exc_info())
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp self.gen.next()
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/extraspace/stack/nova/nova/compute/manager.py", line 229, in decorated_function
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs)
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/extraspace/stack/nova/nova/compute/manager.py", line 1887, in snapshot_instance
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp self.driver.snapshot(context, instance, image_id, update_task_state)
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/extraspace/stack/nova/nova/virt/vmwareapi/driver.py", line 180, in snapshot
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp self._vmops.snapshot(context, instance, name, update_task_state)
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/extraspace/stack/nova/nova/virt/vmwareapi/vmops.py", line 537, in snapshot
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp _copy_vmdk_content()
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/extraspace/stack/nova/nova/virt/vmwareapi/vmops.py", line 533, in _copy_vmdk_content
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp self._session._wait_for_task(instance['uuid'], copy_disk_task)
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/extraspace/stack/nova/nova/virt/vmwareapi/driver.py", line 559, in _wait_for_task
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp ret_val = done.wait()
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp return hubs.get_hub().switch()
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp return self.greenlet.switch()
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp NovaException: The requested operation is not implemented by the server.
2013-05-27 17:15:05.672 TRACE nova.openstack.common.rpc.amqp

Tags: vmware
Revision history for this message
dan wendlandt (danwent) wrote :

It seems like CopyVirtualDisk_Task is still valid for vCenter 5.0: http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.wssdk.apiref.doc_50%2Fvim.VirtualDiskManager.html

Also, not sure if it matters, but I have a single node cluster with a VMFS datastore.

Revision history for this message
dan wendlandt (danwent) wrote :

Also, on the vCenter, I do not see any info in the Tasks pane about this task, though I do see the preceding snapshot calls listed as having been successful.

Revision history for this message
Shawn Hartsock (hartsock) wrote :

After looking at:
https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vmops.py#L511
and
https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vmops.py#L529

I believe the problem may be related to:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=900

Specifically: "When using snapshots, a virtual machine's virtual disks can be comprised of multiple VMDK files which are part of an interdependent chain."

The code at L511 appears to assume a single VMDK file, which is not the case when working with snapshots (in VMware vSphere).
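
To make this concrete, a small illustration (hypothetical file names, not from this environment): after taking a snapshot, the instance's disk is a chain rather than a single file, so copying only the base VMDK misses the delta.

    # Illustration only, with hypothetical names: a snapshotted disk is a chain
    # of interdependent VMDKs, not a single file.
    disk_chain = [
        "[datastore1] instance-uuid/instance-uuid.vmdk",         # base disk
        "[datastore1] instance-uuid/instance-uuid-000001.vmdk",  # delta from the snapshot
    ]
    # Copying only disk_chain[0] loses the changes captured in the delta; the
    # snapshot upload needs a consolidated copy of the whole chain.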

Changed in nova:
importance: Undecided → Low
Revision history for this message
Shawn Hartsock (hartsock) wrote :

No workaround. The snapshot system fails. Importance: critical; affects all users.

Changed in nova:
status: New → Confirmed
importance: Low → Critical
Tracy Jones (tjones-i)
Changed in nova:
assignee: nobody → Tracy Jones (tjones-i)
Revision history for this message
Tracy Jones (tjones-i) wrote :

It looks to me like this behavior has changed since this bug was written. When I snapshot, both from Horizon and from the command line via nova image-create, the call doesn't go into the nova snapshot code at all - in fact, it appears it's just making a copy from glance - which is really not the desired outcome. I'll try to see what changed in this area.

Dan - you were selecting the instance from Horizon and pushing the create snapshot button, right? Another difference is that there is no state - it just appears in the images list with a 0-length file...

deepika (ddeepikadurai)
Changed in nova:
status: Confirmed → New
Changed in nova:
status: New → Confirmed
importance: Critical → High
Revision history for this message
dan wendlandt (danwent) wrote :

Tracy, I believe I did the following:

- access the /project/instances/ page of horizon showing all instances.
- click "create snapshot".

My guess is that, at least on the command line, you're doing something different than me by invoking nova image-create, which seeks to create a glance image by snapshotting a VM.

Revision history for this message
dan wendlandt (danwent) wrote :

I believe tempest also throws a snapshot error when running with the vSphere driver. Maybe check what tempest invokes from the CLI for comparison.

Vui Lam (vui)
Changed in nova:
assignee: Tracy Jones (tjones-i) → Vui Lam (vui)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/40298

Changed in nova:
status: Confirmed → In Progress
Changed in nova:
milestone: none → next
Changed in nova:
assignee: Vui Lam (vui) → Tracy Jones (tjones-i)
Vui Lam (vui)
summary: - Snapshot failure with VMware driver
+ Snapshot failure with VMwareVCDriver
Changed in nova:
assignee: Tracy Jones (tjones-i) → Vui Lam (vui)
Changed in nova:
assignee: Vui Lam (vui) → Gary Kotton (garyk)
Changed in nova:
assignee: Gary Kotton (garyk) → Tracy Jones (tjones-i)
Changed in nova:
assignee: Tracy Jones (tjones-i) → Vui Lam (vui)
Changed in nova:
assignee: Vui Lam (vui) → Gary Kotton (garyk)
Changed in nova:
milestone: next → havana-rc1
milestone: havana-rc1 → next
Changed in nova:
assignee: Gary Kotton (garyk) → Vui Lam (vui)
Changed in openstack-vmwareapi-team:
status: New → In Progress
importance: Undecided → High
Changed in nova:
milestone: next → havana-rc1
Changed in openstack-vmwareapi-team:
assignee: nobody → Vui Lam (vui)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/40298
Committed: http://github.com/openstack/nova/commit/61bfac8881dd6a71a572a54b2ea1680248fc4bc4
Submitter: Jenkins
Branch: master

commit 61bfac8881dd6a71a572a54b2ea1680248fc4bc4
Author: Vui Lam <email address hidden>
Date: Mon Aug 5 00:37:47 2013 -0700

    Fix snapshot failure with VMwareVCDriver

    The snapshot operation was failing because it calls
    VirtualDiskManager.CopyVirtualDisk with a destination disk spec, which
    is not supported when called through VC. The fix is to not supply a spec
    when calling through VC, in which case the disk is consolidated and
    copied without type transformation.

    While the fix is to use spec-less CopyVirtualDisk in VC, doing so in ESX
    too will result in unintended transformation (because ESX will
    default to busLogic/preallocated), so the use of the spec was retained
    in ESX-mode instead.

    Another issue found and fixed is that the name of the snapshot image is
    incorrectly set when uploading to glance.

    The following tempest tests are now unbroken against the VC and ESX
    nodes:
      tempest.api.compute.images.
         test_images_oneserver.ImagesOneServerTest{JSON,XML}.
            test_create_delete_image
            test_create_second_image_when_first_image_is_being_saved
            test_delete_image_of_another_tenant
         test_list_image_filters
      tempest.api.compute.test_authorization.AuthorizationTestJSON
      tempest.api.compute.test_authorization.AuthorizationTestXML

    Fixes bug 1184807

    Change-Id: I7ec49859fb2842cc02cdc87a6434aa58612a2964
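
As a minimal sketch of the approach the commit message describes (illustrative only, not the exact patch; names such as session, dc_ref, virtual_disk_manager and is_vcenter are assumptions based on the driver's usual call pattern):

    def copy_vmdk(session, instance_uuid, virtual_disk_manager, dc_ref,
                  src_path, dest_path, is_vcenter, copy_spec=None):
        # Arguments for CopyVirtualDisk_Task, initially without a destination spec.
        kwargs = dict(sourceName=src_path, sourceDatacenter=dc_ref,
                      destName=dest_path, destDatacenter=dc_ref, force=False)
        # Against a standalone ESX host, keep the spec so the adapter/disk type
        # is preserved; through vCenter, omit it so the disk chain is
        # consolidated and copied without type transformation.
        if not is_vcenter and copy_spec is not None:
            kwargs['destSpec'] = copy_spec
        task = session._call_method(session._get_vim(), "CopyVirtualDisk_Task",
                                    virtual_disk_manager, **kwargs)
        session._wait_for_task(instance_uuid, task)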

Changed in nova:
status: In Progress → Fix Committed
pocketlion (lhx1031)
Changed in openstack-vmwareapi-team:
status: In Progress → Fix Released
Revision history for this message
pocketlion (lhx1031) wrote :

After cloning the newest repository on Sep 23, 2013 and reinstalling devstack, I did a snapshot, but it still failed.

This time, from the vCenter task history I could see that both the "Create virtual machine snapshot" and the "Copy virtual disk" operations completed successfully. There was a "Copy virtual disk" operation failure before, and I think it is now fixed in the new code in the devstack repository. https://bugs.launchpad.net/nova/+bug/1184807, https://review.openstack.org/#/c/40298/18

In Horizon, the newly created snapshot image's status was first "queued" and then "deleted". The snapshot image is not in the list when I do a nova image-list or glance image-list, or look at the Horizon images & snapshots list.

From the "g-reg" screen log, it seems the snapshot image was created and then deleted before glance tried to upload the image. The output log is shown as follows:

2013-09-23 12:49:46.634 13386 INFO glance.registry.api.v1.images [61f1aefc-e56b-481e-ac5d-23e5c8a1ab6c 1e1e314becc94d2ebe8246f0a36ca99a 09ee20f776914ad7983bb2ace867623a] Successfully retrieved image 66e47135-8576-49af-a474-47a11de0c46d
2013-09-23 12:50:09.126 13386 DEBUG keystoneclient.middleware.auth_token [-] Authenticating user token __call__ /opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:532
2013-09-23 12:50:09.126 13386 DEBUG keystoneclient.middleware.auth_token [-] Removing headers from request environment: X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role _remove_auth_headers /opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:591
2013-09-23 12:50:09.126 13386 DEBUG keystoneclient.middleware.auth_token [-] Returning cached token a06afb4e1371592a52ee6cb53b0e2bae _cache_get /opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:982
2013-09-23 12:50:09.127 13386 DEBUG glance.api.policy [-] Loaded policy rules: {u'context_is_admin': 'role:admin', u'default': '@', u'manage_image_cache': 'role:admin'} load_rules /opt/stack/glance/glance/api/policy.py:75
2013-09-23 12:50:09.127 13386 DEBUG routes.middleware [-] Matched GET /images/66e47135-8576-49af-a474-47a11de0c46d __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:100
2013-09-23 12:50:09.127 13386 DEBUG routes.middleware [-] Route path: '/images/{id}', defaults: {'action': u'show', 'controller': <glance.common.wsgi.Resource object at 0x279ba50>} __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:102
2013-09-23 12:50:09.127 13386 DEBUG routes.middleware [-] Match dict: {'action': u'show', 'controller': <glance.common.wsgi.Resource object at 0x279ba50>, 'id': u'66e47135-8576-49af-a474-47a11de0c46d'} __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:103
2013-09-23 12:50:09.138 13386 INFO glance.registry.api.v1.images [fa8c8425-0637-45cc-a80f-c188a787ad41 1e1e314becc94d2ebe8246f0a36ca99a 09ee20f776914ad7983bb2ace867623a] Successfully retrieved image 66e47135-8576-49af-a474-47a11de0c46d
2013-09-23 12:50:...

Revision history for this message
Vui Lam (vui) wrote :

Hi pocketlion,

I am looking into the reported issue. Can you upload the nova compute log as well? Thanks.

Revision history for this message
Vui Lam (vui) wrote :

Just to add:

https://review.openstack.org/40298 addressed the error encountered while making a consolidated copy of the instance's disk. The copied disk is then uploaded to glance and subsequently deleted. Should an unhandled exception be thrown during the driver's snapshot operation, the just-created snapshot entry, along with any uploaded data, will be removed from the glance server during cleanup. Given what you reported, it appears that the disk copy succeeded. Is it possible that your devstack environment, which likely houses the glance image repository, actually ran out of storage space while receiving a large snapshot image upload?
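
In code terms, the cleanup behavior described above is roughly the following (a hedged sketch with illustrative names; the actual logic lives in the compute manager and image service, not in this exact form):

    def snapshot_with_cleanup(image_service, context, image_id, do_snapshot):
        # do_snapshot() stands in for driver.snapshot(...), which uploads the
        # consolidated disk copy to glance.
        try:
            do_snapshot()
        except Exception:
            # On an unhandled error the just-created image entry (and any data
            # already uploaded) is deleted from glance - which is why an image
            # can briefly show as 'queued' and then disappear.
            image_service.delete(context, image_id)
            raise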

Tracy Jones (tjones-i)
Changed in openstack-vmwareapi-team:
status: Fix Released → Fix Committed
Gary Kotton (garyk)
tags: added: grizzly-backport-potential
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/grizzly)

Fix proposed to branch: stable/grizzly
Review: https://review.openstack.org/49371

Thierry Carrez (ttx)
Changed in nova:
status: Fix Committed → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/grizzly)

Reviewed: https://review.openstack.org/49371
Committed: http://github.com/openstack/nova/commit/bc9495d2930936618e49e569af03915b3add5bf1
Submitter: Jenkins
Branch: stable/grizzly

commit bc9495d2930936618e49e569af03915b3add5bf1
Author: Gary Kotton <email address hidden>
Date: Wed Oct 2 06:44:27 2013 -0700

    Fix snapshot failure with VMwareVCDriver

    The snapshot operation was failing because it calls
    VirtualDiskManager.CopyVirtualDisk with a destination disk spec, which
    is not supported when called through VC. The fix is to not supply a spec
    when calling through VC, in which case the disk is consolidated and
    copied without type transformation.

    While the fix is to use spec-less CopyVirtualDisk in VC, doing so in ESX
    too will result in unintended transformation (because ESX will
    default to busLogic/preallocated), so the use of the spec was retained
    in ESX-mode instead.

    Another issue found and fixed is that the name of the snapshot image is
    incorrectly set when uploading to glance.

    The following tempest tests are now unbroken against the VC and ESX
    nodes:
      tempest.api.compute.images.
         test_images_oneserver.ImagesOneServerTest{JSON,XML}.
            test_create_delete_image
            test_create_second_image_when_first_image_is_being_saved
            test_delete_image_of_another_tenant
         test_list_image_filters
      tempest.api.compute.test_authorization.AuthorizationTestJSON
      tempest.api.compute.test_authorization.AuthorizationTestXML

    Fixes bug 1184807

    (cherry picked from commit 61bfac8881dd6a71a572a54b2ea1680248fc4bc4)

    Conflicts:

            nova/tests/virt/vmwareapi/test_vmwareapi.py
            nova/virt/vmwareapi/driver.py
            nova/virt/vmwareapi/vmops.py

    Change-Id: I08717a06f23dd1abfd5ccae7606a7ecb1453dc8c

tags: added: in-stable-grizzly
Thierry Carrez (ttx)
Changed in nova:
milestone: havana-rc1 → 2013.2
Alan Pevec (apevec)
tags: removed: grizzly-backport-potential in-stable-grizzly