VMware: unable to launch an instance with the NSX|V3 neutron plugin

Bug #1549288 reported by Gary Kotton
This bug affects 2 people
Affects: OpenStack Compute (nova)
    Status: Fix Released, Importance: High, Assigned to: Gary Kotton
Affects: OpenStack Compute (nova) / Mitaka series
    Status: In Progress, Importance: Medium, Assigned to: Roman Podoliaka

Bug Description

n-cpu log:
2016-02-22 21:52:35.450 ERROR oslo_vmware.common.loopingcall [-] in fixed duration looping call
2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall Traceback (most recent call last):
2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall   File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/common/loopingcall.py", line 76, in _inner
2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall     self.f(*self.args, **self.kw)
2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall   File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 428, in _poll_task
2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall     raise task_ex
2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall VimFaultException: Invalid configuration for device '0'.
2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall Faults: ['InvalidDeviceSpec']
2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall
2016-02-22 21:52:35.451 ERROR nova.compute.manager [req-ce1da942-7b51-4a54-830b-cfdb43a12ac8 admin demo] [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] Instance failed to spawn
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] Traceback (most recent call last):
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   File "/opt/stack/nova/nova/compute/manager.py", line 2155, in _build_resources
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]     yield resources
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   File "/opt/stack/nova/nova/compute/manager.py", line 2009, in _build_and_run_instance
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]     block_device_info=block_device_info)
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 406, in spawn
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]     admin_password, network_info, block_device_info)
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 689, in spawn
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]     metadata)
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 303, in build_virtual_machine
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]     config_spec, self._root_resource_pool)
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   File "/opt/stack/nova/nova/virt/vmwareapi/vm_util.py", line 1302, in create_vm
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]     {'ostype': config_spec.guestId})
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 204, in __exit__
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]     six.reraise(self.type_, self.value, self.tb)
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   File "/opt/stack/nova/nova/virt/vmwareapi/vm_util.py", line 1287, in create_vm
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]     task_info = session._wait_for_task(vm_create_task)
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 635, in _wait_for_task
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]     return self.wait_for_task(task_ref)
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 386, in wait_for_task
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]     return evt.wait()
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]     return hubs.get_hub().switch()
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, in switch
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]     return self.greenlet.switch()
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/common/loopingcall.py", line 76, in _inner
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]     self.f(*self.args, **self.kw)
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 428, in _poll_task
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]     raise task_ex
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] VimFaultException: Invalid configuration for device '0'.
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] Faults: ['InvalidDeviceSpec']
2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]

2016-02-22 21:52:35.470 WARNING suds.client [-] Web service reported a SOAP processing fault using an unexpected HTTP status code 200. Reporting as an internal server error.
2016-02-22 21:52:35.470 ERROR suds.client [-] <?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:ns0="urn:vim25" xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
   <ns1:Body>
      <ns0:RetrievePropertiesEx>
         <ns0:_this type="PropertyCollector">propertyCollector</ns0:_this>
         <ns0:specSet>
            <ns0:propSet>
               <ns0:type>VirtualMachine</ns0:type>
               <ns0:all>false</ns0:all>
               <ns0:pathSet>config.files.vmPathName</ns0:pathSet>
               <ns0:pathSet>runtime.powerState</ns0:pathSet>
               <ns0:pathSet>datastore</ns0:pathSet>
            </ns0:propSet>
            <ns0:objectSet>
               <ns0:obj type="VirtualMachine">vm-35</ns0:obj>
               <ns0:skip>false</ns0:skip>
            </ns0:objectSet>
         </ns0:specSet>
         <ns0:options>
            <ns0:maxObjects>1</ns0:maxObjects>
         </ns0:options>
      </ns0:RetrievePropertiesEx>
   </ns1:Body>
</SOAP-ENV:Envelope>
2016-02-22 21:52:35.471 DEBUG oslo_vmware.api [-] Fault list: [ManagedObjectNotFound] from (pid=23024) _invoke_api /usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py:326
2016-02-22 21:52:35.471 ERROR nova.virt.vmwareapi.vmops [req-ce1da942-7b51-4a54-830b-cfdb43a12ac8 admin demo] [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] Destroy instance failed
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] Traceback (most recent call last):
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 1008, in _destroy_instance
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] lst_properties)
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 627, in _call_method
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] return self.invoke_api(module, method, self.vim, *args, **kwargs)
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 347, in invoke_api
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] return _invoke_api(module, method, *args, **kwargs)
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 122, in func
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] return evt.wait()
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] return hubs.get_hub().switch()
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, in switch
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] return self.greenlet.switch()
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/common/loopingcall.py", line 123, in _inner
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] idle = self.f(*self.args, **self.kw)
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 95, in _func
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] result = f(*args, **kwargs)
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] File "/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 331, in _invoke_api
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] details=excep.details)
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] ManagedObjectNotFoundException: The object has already been deleted or has not been completely created
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] Cause: Server raised fault: 'The object has already been deleted or has not been completely created'
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] Faults: [ManagedObjectNotFound]
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0] Details: {'obj': 'vm-35'}
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]
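
For context, device '0' in the fault is the instance NIC: with an NSX|V3 opaque network, the vmwareapi driver attaches the virtual ethernet card via an opaque-network backing, and vCenter rejects the whole device spec when the opaque network ID in that backing does not resolve to a logical switch it knows about. Spawn then fails, and the cleanup path hits ManagedObjectNotFound because the half-created VM is already gone. A rough, hypothetical sketch of the failure mode, with plain dicts standing in for the vSphere SOAP objects (field names follow VirtualEthernetCardOpaqueNetworkBackingInfo, but none of this is actual nova code):

```python
# Illustrative only: plain dicts stand in for the SOAP objects nova builds
# via oslo.vmware; the validator mimics vCenter's rejection, not real API.

class InvalidDeviceSpec(Exception):
    """Stand-in for the vCenter 'InvalidDeviceSpec' fault."""


def build_nic_backing(opaque_network_id, opaque_network_type='nsx.LogicalSwitch'):
    # Shape mirrors VirtualEthernetCardOpaqueNetworkBackingInfo.
    return {
        'opaqueNetworkId': opaque_network_id,
        'opaqueNetworkType': opaque_network_type,
    }


def fake_vcenter_validate(backing, known_switches):
    # vCenter rejects a device spec whose opaque network ID it cannot resolve.
    if backing['opaqueNetworkId'] not in known_switches:
        raise InvalidDeviceSpec("Invalid configuration for device '0'.")
    return True


known = {'ls-1234'}  # logical switches the backend actually has
assert fake_vcenter_validate(build_nic_backing('ls-1234'), known)

try:
    # Passing an ID vCenter does not know (e.g. the Neutron network UUID
    # instead of the logical-switch ID) reproduces the fault in the log above.
    fake_vcenter_validate(build_nic_backing('neutron-net-uuid'), known)
except InvalidDeviceSpec as exc:
    print(exc)  # Invalid configuration for device '0'.
```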

Tags: vmware nsx
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/284121

Changed in nova:
assignee: nobody → Gary Kotton (garyk)
status: New → In Progress
Gary Kotton (garyk)
Changed in nova:
importance: Undecided → High
Matt Riedemann (mriedem)
tags: added: vmware
tags: added: nsx
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/284121
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=9ca37ab19c6490c4c5929982d3b882a3506d6006
Submitter: Jenkins
Branch: master

commit 9ca37ab19c6490c4c5929982d3b882a3506d6006
Author: Gary Kotton <email address hidden>
Date: Wed Feb 24 05:10:25 2016 -0800

    VMware: make the opaque network attachment more robust

    Ensure that the correct network ID is used when attaching the
    instance to the opaque network.

    Change-Id: I6bd615370075d44232d28f5ecebc34c08075daec
    Closes-bug: #1549288
    Depends-On: Iea09105912f2a8d8766f02e71b45163e233a0eac
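
The commit message above says the fix keys the attachment off the correct network ID, i.e. an ID the backend actually recognizes rather than whatever Neutron happens to report as the network UUID. A minimal sketch of that selection logic, under assumptions: the helper name is invented, the dict shapes are illustrative, and treating `nsx-logical-switch-id` as the relevant key in the port's binding details is an assumption about the NSX plugin, not a quote from the actual patch:

```python
# Hypothetical sketch of choosing the opaque network ID when building the
# VMware NIC backing for an NSX|V3 port. Names and dict shapes are
# illustrative; the real change is in nova's vmwareapi VIF handling.

def choose_opaque_network_id(vif):
    """Prefer the logical-switch ID from the port's binding details.

    Falling back blindly to the Neutron network UUID can yield an
    'InvalidDeviceSpec' fault, since vSphere expects the ID of the opaque
    network object it actually knows about.
    """
    details = vif.get('details') or {}
    return details.get('nsx-logical-switch-id') or vif['network']['id']


# Example port dicts (shapes assumed for illustration):
with_details = {
    'network': {'id': 'neutron-net-uuid'},
    'details': {'nsx-logical-switch-id': 'ls-1234'},
}
without_details = {'network': {'id': 'neutron-net-uuid'}, 'details': {}}

print(choose_opaque_network_id(with_details))     # ls-1234
print(choose_opaque_network_id(without_details))  # neutron-net-uuid
```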

Changed in nova:
status: In Progress → Fix Released
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/mitaka)

Fix proposed to branch: stable/mitaka
Review: https://review.openstack.org/299842

OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (stable/mitaka)

Change abandoned by Matt Riedemann (<email address hidden>) on branch: stable/mitaka
Review: https://review.openstack.org/299842
Reason: Not appropriate given the dependent changes on the vmware nsx driver aren't in mitaka.

Davanum Srinivas (DIMS) (dims-v) wrote : Fix included in openstack/nova 14.0.0.0b1

This issue was fixed in the openstack/nova 14.0.0.0b1 development milestone.
