vxlan tunnel does not get created

Bug #1618433 reported by Serguei Bezverkhi
This bug affects 2 people
Affects: neutron
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

In a multinode scenario, the vxlan tunnel between the compute and network nodes does not get created. The latest master is used for the neutron components.

vlan is configured as the tenant network type and vxlan as the tunnel type; br-tun does not show a vxlan interface.

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan
mechanism_drivers = openvswitch,l2population

[ml2_type_vlan]
network_vlan_ranges = physnet2:1:3999

[ml2_type_flat]
flat_networks = physnet1

[ml2_type_vxlan]
vni_ranges = 1:1000
vxlan_group = 239.1.1.1

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[agent]
tunnel_types = vxlan
l2_population = true
arp_responder = true

[ovs]
bridge_mappings = physnet1:br-ex, physnet2:br-tnts
ovsdb_connection=tcp:10.57.120.13:6640
local_ip=10.57.120.13

   Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
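
For comparison, once a vxlan tunnel port gets created by the agent, I would expect br-tun to show something along these lines (illustrative only; the port name and remote_ip are examples rather than values from this setup, while local_ip matches the config above):

        Port vxlan-0a397815
            Interface vxlan-0a397815
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.57.120.13", out_key=flow, remote_ip="10.57.120.21"}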

Revision history for this message
Serguei Bezverkhi (sbezverk) wrote :

Grepping the neutron-openvswitch-agent logs does not return any obvious errors or issues.
/var/log/kolla/neutron/neutron-openvswitch-agent.log:2016-08-29 22:40:27.662 1 DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.main [-] OVS.tunnel_bridge = br-tun log_opt_values /var/lib/kolla/venv/lib/python2.7/site-packages/oslo_config/cfg.py:2626
/var/log/kolla/neutron/neutron-openvswitch-agent.log:2016-08-29 22:40:31.227 1 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn command(idx=0): AddBridgeCommand(datapath_type=system, may_exist=True, name=br-tun) do_commit /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/ovsdb/impl_idl.py:98
/var/log/kolla/neutron/neutron-openvswitch-agent.log:2016-08-29 22:40:31.228 1 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn command(idx=1): SetFailModeCommand(bridge=br-tun, mode=secure) do_commit /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/ovsdb/impl_idl.py:98
/var/log/kolla/neutron/neutron-openvswitch-agent.log:2016-08-29 22:40:31.229 1 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn command(idx=0): DbSetCommand(table=Bridge, col_values=(('protocols', 'OpenFlow13'),), record=br-tun) do_commit /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/ovsdb/impl_idl.py:98
/var/log/kolla/neutron/neutron-openvswitch-agent.log:2016-08-29 22:40:31.231 1 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn command(idx=0): SetControllerCommand(bridge=br-tun, targets=['tcp:127.0.0.1:6633']) do_commit /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/ovsdb/impl_idl.py:98
/var/log/kolla/neutron/neutron-openvswitch-agent.log:2016-08-29 22:40:31.351 1 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn command(idx=0): DbGetCommand(column=controller, table=Bridge, record=br-tun) do_commit /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/ovsdb/impl_idl.py:98
/var/log/kolla/neutron/neutron-openvswitch-agent.log:2016-08-29 22:40:31.472 1 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn command(idx=0): AddPortCommand(bridge=br-tun, may_exist=True, port=patch-int) do_commit /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/ovsdb/impl_idl.py:98
/var/log/kolla/neutron/neutron-openvswitch-agent.log:2016-08-29 22:40:31.477 1 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn command(idx=0): DbGetCommand(column=datapath_id, table=Bridge, record=br-tun) do_commit /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/ovsdb/impl_idl.py:98
/var/log/kolla/neutron/neutron-openvswitch-agent.log:2016-08-29 22:40:31.478 1 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [req-0282f03c-de4a-4f37-9bf7-c4ce1c5d82ab - - - - -] Bridge br-tun has datapath-ID 0000468bc705e74d
/var/log/kolla/neutron/neutron-openvswitch-agent.log:2016-08-29 22:40:32.489 1 DEBUG neutron.agent.linux.async_process [-] Output received from [ovsdb-client monitor Interface name,ofport,external_ids --format=json]: {"data":[["0d6173c9-eb2d-475e-9d3e-55a1b20985a9","initial","patch-int",1,["map",[]]],["5bec1a89-0aba-44b7-b9c8-66861ee2310a","initial","tapd06039c5-83",11,["map",[["attached-mac","fa:16:3e:63:92:01"],["iface-id","d06039c5-83b4-416b-bd55-b7aa8a0f64...


Revision history for this message
Assaf Muller (amuller) wrote :

You have l2population enabled, which is supposed to optimize tunnel endpoints. A tunnel will be created between two nodes only if both nodes instantiate ports that belong to the same tunnel. This includes DHCP ports, so if you spawn a VM on the compute node on a network served by a DHCP agent on the network node, you should see a tunnel formed on demand.
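
For example (resource names here are just placeholders), booting an instance on a vxlan-backed network whose DHCP agent runs on the network node should trigger tunnel creation, which you can then check on the compute node:

    openstack server create --image cirros --flavor m1.tiny --network demo-net test-vm
    # on the compute node, a vxlan port should now exist on br-tun
    ovs-vsctl list-ports br-tun | grep vxlan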

Changed in neutron:
status: New → Incomplete
Revision history for this message
Assaf Muller (amuller) wrote :

"A tunnel will be created between two nodes only if both nodes instantiate ports that belong to the same tunnel." Of course I mean 'network'.

Revision history for this message
Serguei Bezverkhi (sbezverk) wrote :

In neutron-openvswitch-agent.py I found this code:

        if not self.l2_pop:
            self._setup_tunnel_port(self.tun_br, tun_name, tunnel_ip,
                                    tunnel_type)

It seems the tunnel interface gets created only when L2 population is not configured. I was under the impression that l2_population is required for the vxlan tunnel type.

Revision history for this message
Serguei Bezverkhi (sbezverk) wrote :

Thank you for your reply. For some reason I never see the tunnel created. I have instances running on the second compute node and they do not get DHCP addresses from the DHCP agent running on the network node. This traffic should go through the vxlan tunnel, right?

Revision history for this message
Ihar Hrachyshka (ihar-hrachyshka) wrote :

Your tenant_network_types = vlan suggests that the network is VLAN. So it's no surprise you don't see any tunnels; they are not needed. The fact that you enabled the type driver in ml2 and set tunnel_types = vxlan on the agent does not mean that VXLAN will be used for all networks. It only means that IF you have a VXLAN network, it will be handled by the cluster.
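
If the intention is for tenant networks to be vxlan-backed, the ml2 section would need to look roughly like this instead (a sketch only, reusing the driver lists from the config above):

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population

Alternatively, an admin can create an explicit vxlan network with "openstack network create --provider-network-type vxlan <name>".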

I don't see an issue here, and the bug report does not even talk in terms of user visible issues. I suggest we mark it as invalid.

Changed in neutron:
status: Incomplete → Invalid
Revision history for this message
Assaf Muller (amuller) wrote :

That's right. For the record, tunnels should be formed both with and without l2pop; l2pop is simply an optimization.

Do you have this part:
[agent]
l2_population = true

On *all* of your nodes, including controller and/or network nodes?
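
A quick way to check is something like the following (the path is a typical default; in a kolla deployment the file lives inside the agent container, so adjust accordingly):

    grep -r l2_population /etc/neutron/plugins/ml2/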

Revision history for this message
Assaf Muller (amuller) wrote :

> I have instances running on the second compute node and they do not get dhcp addresses from dhcp agent which is running on the network node.

Ihar pointed out that you're using VLAN tenant networking and not VXLAN, so l2pop and tunnels should not be relevant or needed here, and that something else is at fault.

Revision history for this message
Serguei Bezverkhi (sbezverk) wrote :

If there is no vlan trunk between the compute nodes and the network node, is there any way to stitch them together with a vxlan tunnel?

Revision history for this message
Assaf Muller (amuller) wrote :

From Neutron's perspective, VXLAN/GRE/Geneve and VLAN tenant networking are equivalent, in the sense that each is a mechanism to provide connectivity for instances. How you hook up networking between your physical nodes is an exercise left for the reader. In the case of VLANs you need to make sure the entire VLAN range is trunked between all of your nodes. In the case of tunneling protocols you have to make sure there's a ping between your nodes (using the tunneling IP), and that the relevant protocols and ports aren't blocked by any firewalls you may have lying around. To reiterate, Neutron uses tunneling to provide connectivity between instances, not as a way to set up networking between your *nodes* at the *infrastructure* level.
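
For vxlan that boils down to something like the following on each node (the peer address below is just an example; the iptables line only matters if you run an iptables-based host firewall, and OVS vxlan uses UDP port 4789 by default):

    # verify the remote tunnel endpoint is reachable via the tunneling IPs
    ping -c 3 10.57.120.21
    # allow vxlan traffic through a host firewall, if one is present
    iptables -I INPUT -p udp --dport 4789 -j ACCEPT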
