Failed to create server due to "Unexpected vif_type=unbound"

Bug #1980382 reported by Stephen Finucane
Affects: OpenStack Compute (nova)
Status: Triaged
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

We just spotted a failure in the python-openstackclient CI [1][2]. A functional test for the 'openstack server create' command failed because it returned no output when we expected some. Looking at the logs, this is because the instance failed to build in nova. From n-cpu we see:

    DEBUG nova.network.os_vif_util [None req-f9a5c6a8-ab26-4f1f-ab63-dd518edf32f3 admin admin] No conversion for VIF type unbound yet {{(pid=97953) nova_to_osvif_vif /opt/stack/nova/nova/network/os_vif_util.py:530}}
    ERROR nova.compute.manager [None req-f9a5c6a8-ab26-4f1f-ab63-dd518edf32f3 admin admin] [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] Instance failed to spawn: nova.exception.InternalError: Unexpected vif_type=unbound
    ERROR nova.compute.manager [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] Traceback (most recent call last):
    ERROR nova.compute.manager [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] File "/opt/stack/nova/nova/compute/manager.py", line 2728, in _build_resources
    ERROR nova.compute.manager [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] yield resources
    ERROR nova.compute.manager [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] File "/opt/stack/nova/nova/compute/manager.py", line 2487, in _build_and_run_instance
    ERROR nova.compute.manager [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] self.driver.spawn(context, instance, image_meta,
    ERROR nova.compute.manager [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4340, in spawn
    ERROR nova.compute.manager [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] xml = self._get_guest_xml(context, instance, network_info,
    ERROR nova.compute.manager [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 7311, in _get_guest_xml
    ERROR nova.compute.manager [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] conf = self._get_guest_config(instance, network_info, image_meta,
    ERROR nova.compute.manager [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6925, in _get_guest_config
    ERROR nova.compute.manager [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] config = self.vif_driver.get_config(
    ERROR nova.compute.manager [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] File "/opt/stack/nova/nova/virt/libvirt/vif.py", line 600, in get_config
    ERROR nova.compute.manager [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] raise exception.InternalError(_('Unexpected vif_type=%s') % vif_type)
    ERROR nova.compute.manager [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] nova.exception.InternalError: Unexpected vif_type=unbound
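
For context, here is a minimal, self-contained sketch (not the actual nova source) of the dispatch that produces this error: the libvirt VIF driver maps each vif_type to a config generator, and 'unbound' (Neutron's placeholder for a port whose binding never completed) has no mapping, so get_config raises InternalError, matching the raise at nova/virt/libvirt/vif.py:600 in the traceback above. The table contents below are illustrative only.

    # Simplified sketch of nova's vif_type dispatch; not the real code.
    class InternalError(Exception):
        pass

    # Hypothetical subset of the vif_type -> config-generator table.
    _VIF_CONFIG_GENERATORS = {
        'ovs': lambda vif: "<interface type='ethernet'/>",
        'bridge': lambda vif: "<interface type='bridge'/>",
    }

    def get_config(vif):
        vif_type = vif.get('type')
        generator = _VIF_CONFIG_GENERATORS.get(vif_type)
        if generator is None:
            # Mirrors nova/virt/libvirt/vif.py:600 in the traceback above.
            raise InternalError('Unexpected vif_type=%s' % vif_type)
        return generator(vif)

    # A port whose binding failed still carries the placeholder type:
    try:
        get_config({'id': 'a2fb8af2-d4df-4b29-bd3f-5591aa8819d2', 'type': 'unbound'})
    except InternalError as exc:
        print(exc)  # -> Unexpected vif_type=unbound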

Looking at the q-svc logs, we see:

  WARNING neutron.plugins.ml2.plugin [req-f9a5c6a8-ab26-4f1f-ab63-dd518edf32f3 req-c372ca6e-78a4-4f09-976b-c74d5f169c66 service neutron] Concurrent port binding operations failed on port a2fb8af2-d4df-4b29-bd3f-5591aa8819d2

I'm not yet sure whether this is an issue with nova, neutron, or neutronclient (which is attempting retries), but I've reported it against nova as the place where the issue first surfaces. A diagnostic sketch for inspecting the port's binding state follows the references below.

[1] https://review.opendev.org/c/openstack/python-openstackclient/+/844268
[2] https://zuul.opendev.org/t/openstack/build/b5c09ce1dbdd42228f5f2928d9df6178
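
If anyone wants to inspect the port when this reproduces, here is a hedged diagnostic sketch using openstacksdk. The cloud name 'devstack-admin' is an assumption (any configured clouds.yaml entry works), and the port ID is the one from the q-svc warning above.

    # Hedged diagnostic sketch; assumes a configured clouds.yaml entry.
    import openstack

    conn = openstack.connect(cloud='devstack-admin')
    port = conn.network.get_port('a2fb8af2-d4df-4b29-bd3f-5591aa8819d2')

    # A port whose binding failed (or never ran) reports binding:vif_type
    # as 'unbound'; nova cannot generate libvirt config for such a port.
    print(port.binding_vif_type, port.binding_host_id)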

Tags: neutron
Stephen Finucane (stephenfinucane) wrote :

DEBUG oslo_concurrency.lockutils [None req-f9a5c6a8-ab26-4f1f-ab63-dd518edf32f3 admin admin] Lock "db9ae2d6-b717-4bbc-932a-24f5f94dfd8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s {{(pid=97953) inner /usr/local/lib/python3.8/dist-packages/oslo_concurrency/lockutils.py:386}}
DEBUG nova.compute.manager [None req-f9a5c6a8-ab26-4f1f-ab63-dd518edf32f3 admin admin] [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] Starting instance... {{(pid=97953) _do_build_and_run_instance /opt/stack/nova/nova/compute/manager.py:2286}}
DEBUG oslo_concurrency.lockutils [None req-f9a5c6a8-ab26-4f1f-ab63-dd518edf32f3 admin admin] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s {{(pid=97953) inner /usr/local/lib/python3.8/dist-packages/oslo_concurrency/lockutils.py:386}}
DEBUG nova.virt.hardware [None req-f9a5c6a8-ab26-4f1f-ab63-dd518edf32f3 admin admin] Require both a host and instance NUMA topology to fit instance on host. {{(pid=97953) numa_fit_instance_to_host /opt/stack/nova/nova/virt/hardware.py:2277}}
INFO nova.compute.claims [None req-f9a5c6a8-ab26-4f1f-ab63-dd518edf32f3 admin admin] [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] Claim successful on node ubuntu-focal-rax-dfw-0030231262
DEBUG nova.compute.provider_tree [None req-f9a5c6a8-ab26-4f1f-ab63-dd518edf32f3 admin admin] Inventory has not changed in ProviderTree for provider: 1cf9d80e-4d80-4ccd-848c-26911fc4af74 {{(pid=97953) update_inventory /opt/stack/nova/nova/compute/provider_tree.py:180}}
DEBUG nova.scheduler.client.report [None req-f9a5c6a8-ab26-4f1f-ab63-dd518edf32f3 admin admin] Inventory has not changed for provider 1cf9d80e-4d80-4ccd-848c-26911fc4af74 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 7950, 'reserved': 512, 'min_unit': 1, 'max_unit': 7950, 'step_size': 1, 'allocation_ratio': 1.5}, 'DISK_GB': {'total': 76, 'reserved': 0, 'min_unit': 1, 'max_unit': 76, 'step_size': 1, 'allocation_ratio': 1.0}} {{(pid=97953) set_inventory_for_provider /opt/stack/nova/nova/scheduler/client/report.py:894}}
DEBUG oslo_concurrency.lockutils [None req-f9a5c6a8-ab26-4f1f-ab63-dd518edf32f3 admin admin] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.110s {{(pid=97953) inner /usr/local/lib/python3.8/dist-packages/oslo_concurrency/lockutils.py:400}}
DEBUG nova.compute.manager [None req-f9a5c6a8-ab26-4f1f-ab63-dd518edf32f3 admin admin] [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] Start building networks asynchronously for instance. {{(pid=97953) _build_resources /opt/stack/nova/nova/compute/manager.py:2664}}
DEBUG nova.compute.manager [None req-f9a5c6a8-ab26-4f1f-ab63-dd518edf32f3 admin admin] [instance: db9ae2d6-b717-4bbc-932a-24f5f94dfd8f] Allocating IP information in the background. {{(pid=97953) _allocate_network_async /opt/stack/nova/nova/compute/manager.py:1844}}
DEBUG nova.network.neutron [None req-f9a5c6a8-ab26-4f1f-ab63-dd518edf32f3 admin admin] [instance: db9ae2d6-b71...

Sylvain Bauza (sylvain-bauza) wrote :

Hmm, looking at the job results, this seems to be a very transient issue:
https://zuul.opendev.org/t/openstack/builds?job_name=osc-functional-devstack&project=openstack%2Fpython-openstackclient&skip=0

That being said, we may have a race or some other situation here that we could fix.
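
One hedged idea along those lines (not a committed fix, and the helper below is hypothetical, not existing nova/neutron API) would be to re-read the port and retry briefly when the binding comes back as 'unbound', since the q-svc warning suggests the concurrent binding failure is transient:

    # Hypothetical retry-on-unbound guard; wait_for_binding and its
    # parameters are illustrative only.
    import time

    def wait_for_binding(get_port, port_id, attempts=5, delay=1.0):
        """Poll a port until its binding leaves the 'unbound' placeholder."""
        for _ in range(attempts):
            port = get_port(port_id)
            if port['binding:vif_type'] != 'unbound':
                return port
            time.sleep(delay)
        raise RuntimeError('port %s still unbound after %d attempts'
                           % (port_id, attempts))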

Uggla (rene-ribaud) wrote :

Appears to be valid from my point of view, even though it does not occur very often.

Changed in nova:
status: New → Triaged