VMware: Instance creation fails using block device mapping in different cluster

Bug #1469655 reported by Chinmaya Bharadwaj
This bug affects 2 people
Affects: OpenStack Compute (nova)
Status: Triaged
Importance: Medium
Assigned to: Unassigned
Milestone: (none)

Bug Description

2015-06-29 14:24:49.211 DEBUG oslo_vmware.exceptions [-] Fault InvalidDatastorePath not matched. from (pid=21558) get_fault_class /usr/local/lib/python2.7/dist-packages/oslo_vmware/exceptions.py:250
2015-06-29 14:24:49.212 ERROR oslo_vmware.common.loopingcall [-] in fixed duration looping call
2015-06-29 14:24:49.212 TRACE oslo_vmware.common.loopingcall Traceback (most recent call last):
2015-06-29 14:24:49.212 TRACE oslo_vmware.common.loopingcall File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/common/loopingcall.py", line 76, in _inner
2015-06-29 14:24:49.212 TRACE oslo_vmware.common.loopingcall self.f(*self.args, **self.kw)
2015-06-29 14:24:49.212 TRACE oslo_vmware.common.loopingcall File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 417, in _poll_task
2015-06-29 14:24:49.212 TRACE oslo_vmware.common.loopingcall raise task_ex
2015-06-29 14:24:49.212 TRACE oslo_vmware.common.loopingcall VMwareDriverException: Invalid datastore path '[localdatastore] volume-c279ad39-f1f9-4861-9d00-2de8f6df7756/volume-c279ad39-f1f9-4861-9d00-2de8f6df7756.vmdk'.
2015-06-29 14:24:49.212 TRACE oslo_vmware.common.loopingcall
2015-06-29 14:24:49.212 ERROR nova.compute.manager [-] [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] Instance failed to spawn
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] Traceback (most recent call last):
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] File "/opt/stack/nova/nova/compute/manager.py", line 2442, in _build_resources
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] yield resources
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] File "/opt/stack/nova/nova/compute/manager.py", line 2314, in _build_and_run_instance
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] block_device_info=block_device_info)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 480, in spawn
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] admin_password, network_info, block_device_info)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 628, in spawn
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] instance, adapter_type)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 371, in attach_volume
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] self._attach_volume_vmdk(connection_info, instance, adapter_type)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 330, in _attach_volume_vmdk
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] vmdk_path=vmdk.path)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 71, in attach_disk_to_vm
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] vm_util.reconfigure_vm(self._session, vm_ref, vmdk_attach_config_spec)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] File "/opt/stack/nova/nova/virt/vmwareapi/vm_util.py", line 1377, in reconfigure_vm
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] session._wait_for_task(reconfig_task)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 680, in _wait_for_task
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] return self.wait_for_task(task_ref)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 380, in wait_for_task
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] return evt.wait()
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] return hubs.get_hub().switch()
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, in switch
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] return self.greenlet.switch()
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/common/loopingcall.py", line 76, in _inner
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] self.f(*self.args, **self.kw)
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 417, in _poll_task
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] raise task_ex
2015-06-29 14:24:49.212 TRACE nova.compute.manager [instance: 0a55fe16-3a21-40f0-85f3-777d6254f215] VMwareDriverException: Invalid datastore path '[localdatastore] volume-c279ad39-f1f9-4861-9d00-2de8f6df7756/volume-c279ad39-f1f9-4861-9d00-2de8f6df7756.vmdk'.

Tags: vmware
Revision history for this message
Chinmaya Bharadwaj (acbharadwaj) wrote :

During instance creation with a volume, e.g.:

nova boot bdm1 --flavor 2 --image cirros-0.3.2-i386-disk --block-device-mapping sdb=c279ad39-f1f9-4861-9d00-2de8f6df7756::5

If the volume is created in a different cluster or datacenter from the one where the instance is created, this bug is observed.
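
The InvalidDatastorePath fault in the trace follows directly from this: the Cinder connection info names a VMDK on '[localdatastore]', which the ESX hosts in the cluster chosen for the instance cannot see, so the reconfigure task used for the attach rejects the path. Below is a minimal standalone sketch of that visibility check; datastore_from_vmdk_path and can_attach are hypothetical helpers for illustration only, not Nova code.

# Hypothetical sketch, not Nova code: illustrates the condition that fails.

def datastore_from_vmdk_path(vmdk_path):
    """Extract the datastore name from a path like
    '[localdatastore] volume-xxx/volume-xxx.vmdk'."""
    return vmdk_path.split(']', 1)[0].lstrip('[')

def can_attach(vmdk_path, host_visible_datastores):
    """The attach only works if the volume's datastore is mounted on the
    ESX host running the instance; otherwise the reconfigure task fails
    with InvalidDatastorePath, as in the trace above."""
    return datastore_from_vmdk_path(vmdk_path) in host_visible_datastores

path = ('[localdatastore] volume-c279ad39-f1f9-4861-9d00-2de8f6df7756/'
        'volume-c279ad39-f1f9-4861-9d00-2de8f6df7756.vmdk')
# Datastores visible to the cluster picked for the instance (example values).
print(can_attach(path, {'shared-ds-1', 'shared-ds-2'}))  # False -> attach fails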

affects: horizon → nova
Changed in nova:
assignee: nobody → Chinmaya Bharadwaj (acbharadwaj)
tags: added: vmware
Revision history for this message
Markus Zoeller (markus_z) (mzoeller) wrote :

@Chinmaya Bharadwaj (acbharadwaj):

Since you are set as assignee, I am switching the status to "In Progress".

Changed in nova:
status: New → In Progress
Revision history for this message
Vipin Balachandran (vbala) wrote :

Please provide steps to reproduce this.

If the volume is created in a datastore which is not accessible to the instance's ESX host, it's a Cinder bug.

Revision history for this message
Chinmaya Bharadwaj (acbharadwaj) wrote :

@vipin:
Agreed that Cinder relocates the volume during attach. For a normal volume attach, where the instance already exists, Cinder correctly relocates the volume to the proper datastore.

As I mentioned in the bug, when doing nova boot with block device mapping the instance is not created yet, so Cinder cannot know where the instance will reside; Nova should therefore do the relocation when booting with a volume.
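
A rough sketch of the idea in this comment, under stated assumptions: get_accessible_datastores and relocate_volume_backing are hypothetical callables, not existing Nova functions, and the target-datastore choice here is arbitrary.

# Hypothetical sketch of "Nova relocates the volume before attach".

def ensure_volume_reachable(volume, host,
                            get_accessible_datastores,
                            relocate_volume_backing):
    """Relocate the volume backing if its datastore is not visible to the
    host chosen for the new instance. A real implementation would also
    apply capacity and policy checks when picking the target datastore."""
    reachable = get_accessible_datastores(host)
    if volume['datastore'] not in reachable:
        target = sorted(reachable)[0]
        relocate_volume_backing(volume, target)
        volume['datastore'] = target
    return volume

# Example wiring with stub callables (assumed values, for illustration only):
vol = {'id': 'c279ad39-f1f9-4861-9d00-2de8f6df7756', 'datastore': 'localdatastore'}
ensure_volume_reachable(vol, 'esx-host-1',
                        lambda h: {'shared-ds-1', 'shared-ds-2'},
                        lambda v, ds: print('relocating %s to %s' % (v['id'], ds)))
print(vol['datastore'])  # shared-ds-1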

Revision history for this message
Chinmaya Bharadwaj (acbharadwaj) wrote :

steps to reproduce:

1) Have multiple clusters/datacenters.
2) Boot an instance with a volume, e.g.:
nova boot bdm1 --flavor 2 --image cirros-0.3.2-i386-disk --block-device-mapping sdb=c279ad39-f1f9-4861-9d00-2de8f6df7756::5

The issue won't be seen if the volume is created in the compute cluster itself. If so, delete the VM, move the volume to a datastore that the cluster's hosts can't access, and try nova boot again.

Changed in nova:
assignee: Chinmaya Bharadwaj (acbharadwaj) → John Garbutt (johngarbutt)
Matt Riedemann (mriedem)
Changed in nova:
assignee: John Garbutt (johngarbutt) → Chinmaya Bharadwaj (acbharadwaj)
summary: - VMware: Instance creation fails using block device mapping
+ VMware: Instance creation fails using block device mapping in different
+ cluster
Changed in nova:
importance: Undecided → Medium
Changed in nova:
assignee: Chinmaya Bharadwaj (acbharadwaj) → Gary Kotton (garyk)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (master)

Change abandoned by Chinmaya Bharadwaj (<email address hidden>) on branch: master
Review: https://review.openstack.org/197192
Reason: Abandoning since https://review.openstack.org/#/c/218639/ addresses this.

Changed in nova:
assignee: Gary Kotton (garyk) → nobody
status: In Progress → Confirmed
Changed in nova:
assignee: nobody → Gary Kotton (garyk)
status: Confirmed → In Progress
Revision history for this message
Sean Dague (sdague) wrote :

There are no currently open reviews on this bug, changing
the status back to the previous state and unassigning. If
there are active reviews related to this bug, please include
links in comments.

Changed in nova:
status: In Progress → Confirmed
assignee: Gary Kotton (garyk) → nobody
Sean Dague (sdague)
Changed in nova:
assignee: nobody → Gary Kotton (garyk)
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by Matt Riedemann (<email address hidden>) on branch: master
Review: https://review.opendev.org/218639
Reason: This is super old and in merge conflict so I'm going to abandon. Can be restored and rebased if someone wants to continue working on this.

Matt Riedemann (mriedem)
Changed in nova:
assignee: Gary Kotton (garyk) → nobody
status: In Progress → Triaged