neutron does not form mesh tunnel overlay between different ml2 drivers.
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
neutron | Won't Fix | Low | Unassigned |
Bug Description
* Summary: neutron does not form mesh tunnel overlay between different ml2 drivers.
* High level description: When using multiple neutron ml2 drivers, it is expected that vms on hosts with different ml2 backends should be able to communicate, as segmentation types/ids are centralised in neutron and are not backend specific. When using provider networks this works; however, when using vxlan or other tunneled network types that require a unicast mesh to be created, it fails.
* Step-by-step reproduction steps:
deploy a multinode devstack with both Linux Bridge and OVS nodes.
on the Linux Bridge nodes set the vxlan destination UDP port to the IANA-assigned value (4789) so that it is the same port used by OVS:
[vxlan]
udp_dstport=4789
and set the vxlan multicast group to none to force unicast mode:
[ml2_type_vxlan]
vxlan_group=""
boot a vm on the same neutron network on both a Linux Bridge node and an OVS node.
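The two settings above can be combined into one agent config fragment; a sketch for the Linux Bridge nodes, assuming devstack's default file layout (exact file paths and the tunnel endpoint ip are deployment specific):

```
# linuxbridge agent configuration (sketch)
[vxlan]
enable_vxlan = true
# match the IANA-assigned VXLAN port that OVS uses by default
udp_dstport = 4789
local_ip = <tunnel endpoint ip of this node>

[ml2_type_vxlan]
# an empty multicast group forces unicast mode
vxlan_group = ""
```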
* Expected output:
in this case we would expect the ovs l2 agent to create a unicast vxlan tunnel port on br-tun between the ovs node and the Linux Bridge node.
similarly, we would expect the Linux Bridge agent to configure the reciprocal connection and update its forwarding table with the ovs endpoints.
we would also expect the l2 agent on the ovs compute node to create a vxlan tunnel port to the network node where the dhcp server is running.
when the vms are booted we would expect both vms to receive ips with security groups correctly configured, and we would expect both vms to be able to ping each other.
* Actual output:
the ovs l2 agent only creates unicast tunnels to other ovs nodes.
i did not check if the Linux Bridge agent set up its side of the connection for ovs nodes, but it did configure connectivity to other Linux Bridge nodes.
as a result the network was partitioned, with no cross-backend connectivity possible.
this is different from the vlan and flat behaviour where network connectivity works as expected.
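One way to observe the partition described above (a sketch; bridge and port names follow the default devstack layout):

```
# on an ovs compute node: list tunnel ports on br-tun.
# only vxlan ports toward other ovs nodes are present; no port
# toward the Linux Bridge node's tunnel endpoint is created.
sudo ovs-vsctl list-ports br-tun

# on a Linux Bridge node: dump the vxlan forwarding database.
# entries exist only for other Linux Bridge endpoints.
bridge fdb show | grep vxlan
```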
* Version:
** rocky RC1 nova sha: afe4512bf66c89a
** Centos 7.5
** DevStack
* Environment: libvirt/kvm with default devstack config/service
* Perceived severity: low (this prevents using heterogeneous backends with tunneled networks)
Changed in neutron:
assignee: nobody → wangzengsen (zswang)
Changed in neutron:
importance: Undecided → Wishlist
importance: Wishlist → Low
Changed in neutron:
assignee: wangzengsen (zswang) → nobody
Hi @sean mooney,
Does it work if you add the following to your devstack config?
* On the controller node, add l2population to mechanism_drivers.
[ml2]
mechanism_drivers = ...,l2population
* On the network node, OVS nodes and LB nodes, set l2_population to true.
[vxlan]
l2_population = True
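Taken together, the suggested change touches both the server and each agent; a sketch of the two fragments, assuming devstack's default config files (note that the section holding l2_population differs per agent: [vxlan] for the Linux Bridge agent, [agent] for the OVS agent):

```
# neutron server (ml2 plugin config): enable the l2population
# mechanism driver alongside the existing backend drivers.
[ml2]
mechanism_drivers = openvswitch,linuxbridge,l2population

# Linux Bridge agent config:
[vxlan]
l2_population = True

# OVS agent config:
[agent]
l2_population = True
```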