VM fails with error vif_type=binding_failed using GRE tunnels

Bug #1303998 reported by Phil Hopkins
This bug affects 12 people
Affects: neutron
Status: Invalid
Importance: Medium
Assigned to: Edgar Magana
Milestone: (none)

Bug Description

I am running Icehouse r-1 on Ubuntu 12.04. Whenever I try to launch a VM, it immediately goes into an error state. The log file for nova-compute shows the following:

 http_log_req /usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:173
2014-04-07 19:15:32.888 2866 DEBUG neutronclient.client [-] RESP:{'date': 'Mon, 07 Apr 2014 19:15:32 GMT', 'status': '204', 'content-length': '0', 'x-openstack-request-id': 'req-92a58024-6cd6-4ef3-bd81-f579bd057445'} http_log_resp /usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:179
2014-04-07 19:15:32.888 2866 DEBUG nova.network.api [req-48a2dbdd-7067-4c48-8c09-62d604160d59 b90c0e0ca1aa4cd79703c50e1ff8684a f1c5b087cd9e412daf2360c0cf83a5c6] Updating cache with info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/dist-packages/nova/network/api.py:74
2014-04-07 19:15:32.909 2866 ERROR nova.compute.manager [req-48a2dbdd-7067-4c48-8c09-62d604160d59 b90c0e0ca1aa4cd79703c50e1ff8684a f1c5b087cd9e412daf2360c0cf83a5c6] [instance: a85f771d-13d2-4cba-88f6-6c26a5cc7f37] Error: Unexpected vif_type=binding_failed

<--snip-->

2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 858, in unplug_vifs
2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher self.vif_driver.unplug(instance, vif)
2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 798, in unplug
2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher _("Unexpected vif_type=%s") % vif_type)
2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher NovaException: Unexpected vif_type=binding_failed
2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher
2014-04-07 19:15:33.221 2866 ERROR oslo.messaging._drivers.common [-] Returning exception Unexpected vif_type=binding_failed to caller

Full log file for nova-compute: http://paste.openstack.org/show/75244/
Log file for /var/log/neutron/openvswitch-agent.log: http://paste.openstack.org/show/75245/

Tags: ml2
Revision history for this message
Phil Hopkins (phil-hopkins-a) wrote :

Also, I have set:
agent_down_time = 75

and

report_interval = 5

So the previously reported error of this kind cannot be the problem.
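
For reference, a sketch of where those two options live, assuming the stock Icehouse file layout (paths and sections may differ per distro):

    # /etc/neutron/neutron.conf on the controller
    [DEFAULT]
    agent_down_time = 75

    # /etc/neutron/neutron.conf on each agent node
    [agent]
    report_interval = 5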

Changed in neutron:
milestone: none → icehouse-rc2
importance: Undecided → High
Revision history for this message
Phil Hopkins (phil-hopkins-a) wrote :
Revision history for this message
Robert Kukura (rkukura) wrote :

Looks to me like "tunnel_type = gre" should be "tunnel_types = gre" in the [ovs] section of http://paste.openstack.org/show/75253/, assuming that's where openvswitch-agent is getting its config.
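
For anyone comparing against their own files: the corrected option, as it appears in a working configuration later in this thread (Kashyap's comment), is

    [agent]
    tunnel_types = gre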

Revision history for this message
Phil Hopkins (phil-hopkins-a) wrote :

Changed it to tunnel_types and the problem remains.

Revision history for this message
Phil Hopkins (phil-hopkins-a) wrote :

This was due to the entry:

mechanism driver = openvswitch

which should have been:

mechanism_drivers = openvswitch

We should turn this into a request for a validate-the-config-file function. It would make life easier for those of us who can't type, and save a lot of time debugging.
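
The kind of check being requested here is easy to sketch. Below is a minimal, illustrative validator (not neutron code; the known-options table is a made-up subset for the example) that would flag unrecognized keys such as "mechanism driver":

    # validate_conf.py -- illustrative sketch, not part of neutron.
    # Flags option names that are not in a known-good list, which would
    # catch typos like "mechanism driver" (space instead of underscore).
    import sys
    from ConfigParser import RawConfigParser  # Python 2, as on Icehouse

    KNOWN_OPTIONS = {
        'ml2': {'type_drivers', 'tenant_network_types', 'mechanism_drivers'},
        'ml2_type_gre': {'tunnel_id_ranges'},
        'agent': {'tunnel_types', 'root_helper'},
        'ovs': {'local_ip', 'enable_tunneling'},
    }

    def validate(path):
        parser = RawConfigParser()
        parser.read(path)
        problems = []
        for section in parser.sections():
            known = KNOWN_OPTIONS.get(section.lower())
            if known is None:
                continue  # sections we don't know about are skipped
            for option, _value in parser.items(section):
                if option not in known:
                    problems.append('[%s] %s: unrecognized option'
                                    % (section, option))
        return problems

    if __name__ == '__main__':
        for problem in validate(sys.argv[1]):
            print(problem)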

Thierry Carrez (ttx)
Changed in neutron:
milestone: icehouse-rc2 → none
tags: added: icehouse-rc-potential
Changed in neutron:
importance: High → Wishlist
status: New → Triaged
importance: Wishlist → Low
tags: added: ml2
removed: icehouse-rc-potential
Changed in neutron:
importance: Low → Medium
Revision history for this message
Kashyap Chamarthy (kashyapc) wrote :

I just hit a similar issue (note: I didn't make the typo Phil mentioned in comment #5) while testing Neutron + ML2 + OVS + GRE on Fedora 20 with the Icehouse release. My setup is described more completely here [1].

---------------------------------
<--snip-->

2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3431, in to_xml
2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] disk_info, rescue, block_device_info)
2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3247, in get_guest_config
2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] flavor)
2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 384, in get_config
2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] _("Unexpected vif_type=%s") % vif_type)
2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] NovaException: Unexpected vif_type=binding_failed
2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38]
2014-05-13 07:06:32.846 29455 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: Unexpected vif_type=binding_failed
---------------------------------

  [1] https://www.redhat.com/archives/rdo-list/2014-May/msg00060.html

ml2_conf.ini and nova.conf
------------------------------------

ml2 plugin:

    $ cat /etc/neutron/plugin.ini | grep -v ^$ | grep -v ^#
    [ml2]
    type_drivers = gre
    tenant_network_types = gre
    mechanism_drivers = openvswitch
    [ml2_type_flat]
    [ml2_type_vlan]
    [ml2_type_gre]
    tunnel_id_ranges = 1:1000
    [ml2_type_vxlan]
    [securitygroup]
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    enable_security_group = True

nova.conf:

    $ cat /etc/nova/nova.conf | grep -v ^$ | grep -v ^#
    [DEFAULT]
    logdir = /var/log/nova
    state_path = /var/lib/nova
    lock_path = /var/lib/nova/tmp
    volumes_dir = /etc/nova/volumes
    dhcpbridge = /usr/bin/nova-dhcpbridge
    dhcpbridge_flagfile = /etc/nova/nova.conf
    force_dhcp_release = True
    injected_network_template = /usr/share/nova/interfaces.template
    libvirt_nonblocking = True
    libvirt_use_virtio_for_bridges=True
    libvirt_inject_partition = -1
    #libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
    #libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
    #iscsi_helper = tgtadm
    sql_connection = mysql://nova:nova@192.169.142.97/nova
    compute_driver = libvirt.LibvirtDriver
    libvirt_type=qemu
    rootwrap_config = /etc/nova/rootwrap.conf
    auth_strategy = key...


Revision history for this message
Kashyap Chamarthy (kashyapc) wrote :

Update: I resolved the above with these Neutron/Nova configurations [1]. Specifically, altering ml2_conf.ini as below let the guest boot successfully from a user tenant:

    $ cat plugins/ml2/ml2_conf.ini | grep -v ^$ | grep -v ^#
    [ml2]
    type_drivers = gre
    tenant_network_types = gre
    mechanism_drivers = openvswitch
    [ml2_type_flat]
    [ml2_type_vlan]
    [ml2_type_gre]
    tunnel_id_ranges = 1:1000
    [ml2_type_vxlan]
    [securitygroup]
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    enable_security_group = True
    [ovs]
    local_ip = 192.169.142.97
    [agent]
    tunnel_types = gre
    root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[1] http://kashyapc.fedorapeople.org/virt/openstack/rdo/IceHouse-Nova-Neutron-ML2-GRE-OVS.txt
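
A couple of sanity checks that are useful after a change like this (standard Icehouse-era commands, assuming admin credentials are sourced; not something specific to this bug):

    # agents should report alive (":-)" in the output)
    $ neutron agent-list

    # on the compute node, br-tun should now carry the gre tunnel ports
    $ ovs-vsctl show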

Revision history for this message
David Miller (magicglowingbox) wrote :

I just hit this issue and wasted a lot of time on it.

The 'bug' is that the default installed Ubuntu ml2_conf.ini file includes the line:

# Example: mechanism drivers = openvswitch,mlnx

I duplicated that line in my config and changed the value to what I required, without noticing that the example key itself is missing the underscore in mechanism_drivers.

Upstream corrected the example on April 9, 2014:
https://github.com/openstack/neutron/commit/a7da625571a5acb161246e62713da81526a8d86b
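
The trap, sketched against the shipped file (annotations mine):

    # Example: mechanism drivers = openvswitch,mlnx   <-- shipped example, underscore missing
    mechanism_drivers = openvswitch                   <-- actual option name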

Revision history for this message
Edgar Magana (emagana) wrote :
Changed in neutron:
assignee: nobody → Edgar Magana (emagana)
assignee: Edgar Magana (emagana) → nobody
status: Triaged → Invalid
assignee: nobody → Edgar Magana (emagana)
Revision history for this message
Nitish (bestofnithish) wrote :

I just hit a similar issue. I have done the configuration exactly as given in the manuals, but I get the error "NovaException: Unexpected vif_type=binding_failed". I changed the "# Example: mechanism drivers = openvswitch,mlnx" line to the value I required, but I still get that error. I am attaching the logs and the configuration parameters.

Revision history for this message
Harm Weites (harmw) wrote :

Just experienced the same issue. You might want to check the target of plugin.ini (/etc/neutron/plugins/ml2/ml2_conf.ini) and make sure the neutron-openvswitch-agent initscript loads that file (on L18). This is broken on CentOS 6 with the RDO Icehouse packages, where the initscript tries to load the openvswitch plugin's config when it should be using ML2.
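
A quick way to check both points (paths are the RDO CentOS 6 ones from this comment; adjust for your distro):

    # where does plugin.ini actually point?
    ls -l /etc/neutron/plugin.ini

    # which config file does the initscript pass to the agent?
    grep -n config /etc/init.d/neutron-openvswitch-agent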

Revision history for this message
Nodir Kodirov (knodir) wrote :

I am hitting the same problem. I have Icehouse on Ubuntu 14.04 LTS. I could successfully deploy VM instances via nova-network, and now I am trying to create a VM with neutron networking. I tried all the configurations listed above without any luck; I still get the "NovaException: Unexpected vif_type=binding_failed" error during VM instantiation.

I have a 2-node deployment. The controller node runs nova-conductor, nova-consoleauth, nova-cert, nova-scheduler, and the following neutron agents: metadata agent, L3 agent, Open vSwitch agent, and DHCP agent.

Compute node is running nova-compute and neutron-plugin-openvswitch-agent.

Just to clarify, can I completely abandon nova-network if I use neutron? Or do I still need to run it on Controller or Compute node even with neutron?

Here are my configurations http://paste.openstack.org/show/108551/

Any help is appreciated. Thanks!

Nodir.

Revision history for this message
Aunkumar ELR (elrarun) wrote :

Why is this bug invalid? Please provide the solution.

Revision history for this message
Tom Fifield (fifieldt) wrote :

Hi,

Future readers of this bug: sometimes this can happen if you've done something like backing out a package install halfway through, which leaves the bridge in an interesting state.

As recommended in https://ask.openstack.org/en/question/47533/instance-fails-to-spawn-with-unexpected-vif_typebinding_failed/

try (on the compute node):

ovs-vsctl del-br br-int
ovs-vsctl add-br br-int

and then reboot the compute node
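
If a full reboot is not an option, restarting the agents after recreating the bridge may be enough. Service names vary by distro, so the Ubuntu-style names below are an assumption:

    # restart the OVS agent so it re-creates its flows on the new bridge
    service neutron-plugin-openvswitch-agent restart
    service nova-compute restart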

Revision history for this message
Dongwon Cho (dongwoncho) wrote :

It's happening on my deployment as well, which is Mitaka, and deleting and re-adding br-int does not work for me.
