VXLAN not enabled on StarlingX with containers
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
StarlingX | Invalid | Low | ChenjieXu |
Bug Description
Title
-----
VXLAN not enabled on StarlingX with containers
Brief Description
-----------------
After setting up a VXLAN data network, assigning an IP address to the data interface, creating a VXLAN tenant network, and booting VMs on that network, a VM cannot ping another VM on a different host. The tunneling_ip of every OVS agent is the same value, "172.17.0.1", which is the IP of the docker0 interface; it should instead be the IP assigned to the data interface. As a result, the tunnel port on br-tun is never created and VXLAN traffic cannot reach the other host.
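One quick way to confirm the symptom is to pull tunneling_ip out of the agent's configurations blob. A minimal sketch, run against a canned sample rather than a live agent (the sample JSON below is illustrative; on a real system the blob comes from `openstack network agent show <agent-id> -f json`):

```shell
# Hypothetical sample of the "configurations" blob from an OVS agent;
# on a live system: openstack network agent show <agent-id> -f json
cat <<'EOF' > /tmp/agent-config.json
{"tunnel_types": ["vxlan"], "tunneling_ip": "172.17.0.1"}
EOF

# Extract tunneling_ip. A value of 172.17.0.1 (docker0's address)
# instead of the data-interface address (e.g. 192.168.100.30)
# reproduces the bug described above.
tunneling_ip=$(grep -o '"tunneling_ip": "[0-9.]*"' /tmp/agent-config.json | cut -d'"' -f4)
echo "$tunneling_ip"   # prints "172.17.0.1"
```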
Severity
--------
Critical
Steps to Reproduce
------------------
1. On the active controller:
source /etc/platform/
system host-lock compute-0
system host-lock compute-1
system datanetwork-add tenant_vxlan vxlan --multicast_group 224.0.0.1 --ttl 255 --port_num 4789
system host-if-list -a compute-0
system host-if-list -a compute-1
system host-if-modify -m 1500 -n data0 -d tenant_vxlan -c data compute-0 ${DATA0IFUUID}
system host-if-modify -m 1500 -n data0 -d tenant_vxlan -c data compute-1 ${DATA0IFUUID}
system host-if-modify --ipv4-mode static compute-0 ${DATA0IFUUID}
system host-if-modify --ipv4-mode static compute-1 ${DATA0IFUUID}
system host-addr-add compute-0 ${DATA0IFUUID} 192.168.100.30 24
system host-addr-add compute-1 ${DATA0IFUUID} 192.168.100.40 24
system host-unlock compute-0
system host-unlock compute-1
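The ${DATA0IFUUID} values used above are read per host from the host-if-list output. A minimal sketch of capturing one, assuming a table layout like the one the system CLI prints (the sample table and the enp24s0f0 port name are illustrative, not from this bug report):

```shell
# Hypothetical sample of `system host-if-list -a compute-0` output.
cat <<'EOF' > /tmp/if-list.txt
+--------------------------------------+-----------+-------+
| uuid                                 | name      | class |
+--------------------------------------+-----------+-------+
| 6a1b2c3d-4e5f-6789-abcd-ef0123456789 | enp24s0f0 | none  |
+--------------------------------------+-----------+-------+
EOF

# Grab the uuid column of the interface that will become data0.
DATA0IFUUID=$(awk -F'|' '/enp24s0f0/ { gsub(/ /, "", $2); print $2 }' /tmp/if-list.txt)
echo "$DATA0IFUUID"   # prints "6a1b2c3d-4e5f-6789-abcd-ef0123456789"
```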
2. After compute-0 and compute-1 have rebooted, on the active controller:
export OS_CLOUD=
ADMINID=
openstack network segment range create tenant-vxlan-range --network-type vxlan --minimum 400 --maximum 499 --private --project ${ADMINID}
neutron net-create --tenant-id ${ADMINID} --provider:
neutron subnet-create --tenant-id ${ADMINID} --name subnet1 net1 192.168.101.0/24
openstack server create --image cirros --flavor m1.tiny --network net1 vm1
openstack server create --image cirros --flavor m1.tiny --network net1 vm2
Ensure that vm1 and vm2 are scheduled on different hosts.
From vm1, ping vm2.
Expected Behavior
------------------
vm1 can ping vm2 successfully.
Actual Behavior
----------------
vm1 cannot ping vm2.
System Configuration
-------
System mode: Standard 2+2 (two controllers, two computes) on bare metal
Reproducibility
---------------
100%
Branch/Pull Time/Commit
-------
0306 ISO Image built for OVS DPDK Upgrade
Timestamp/Logs
--------------
+------
| Field | Value |
+------
| admin_state_up | True |
| agent_type | Open vSwitch agent |
| alive | True |
| availability_zone | |
| binary | neutron-
| configurations | { |
| | "integration_
| | "ovs_hybrid_plug": false, |
| | "in_distributed
| | "datapath_type": "netdev", |
| | "arp_responder_
| | "resource_
| | "min_unit": 1, |
| | "allocation_ratio": 1.0, |
| | "step_size": 1, |
| | "reserved": 0 |
| | }, |
| | "vhostuser_
| | "resource_
| | "devices": 5, |
| | "ovs_capabilities": { |
| | "datapath_types": [ |
| | "netdev", |
| | "system" |
| | ], |
| | "iface_types": [ |
| | "dpdk", |
| | "dpdkr", |
| | "dpdkvhostuser", |
| | "dpdkvhostuserc
| | "erspan", |
| | "geneve", |
| | "gre", |
| | "internal", |
| | "ip6erspan", |
| | "ip6gre", |
| | "lisp", |
| | "patch", |
| | "stt", |
| | "system", |
| | "tap", |
| | "vxlan" |
| | ] |
| | }, |
| | "extensions": [], |
| | "l2_population": true, |
| | "tunnel_types": [ |
| | "vxlan" |
| | ], |
| | "log_agent_
| | "enable_
| | "bridge_mappings": { |
| | "physnet0": "br-phy0" |
| | }, |
| | "tunneling_ip": "172.17.0.1" |
| | } |
| created_at | 2019-03-15 21:13:41 |
| description | |
| heartbeat_timestamp | 2019-03-20 21:25:16 |
| host | compute-0 |
| id | ec3192c9-
| started_at | 2019-03-20 16:15:27 |
| topic | N/A |
+------
Bridge br-tun
Controller "tcp:127.
fail_mode: secure
Port patch-int
Port br-tun
Last time install passed
-------
n/a
The tunnel interface is hard-coded as docker0 in the openstack-helm neutron scripts. You can find the code with the following commands.
On active controller:
export OS_CLOUD=
kubectl -n openstack edit cm neutron-bin
The code is listed below:

tunnel_interface="docker0"
if [ -z "${tunnel_interface}" ] ; then
    # search for interface with default routing
    # If there is not default gateway, exit
    tunnel_interface=$(ip -4 route list 0/0 | awk -F 'dev' '{ print $2; exit }' | awk '{ print $1 }') || exit 1
fi
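The fallback branch above picks whichever interface carries the host's default route. A minimal sketch of that extraction pipeline on a canned route line (the sample route output is illustrative):

```shell
# Simulate `ip -4 route list 0/0` output. On the failing nodes the
# interface that ends up selected is docker0 rather than the VXLAN
# data interface.
route_line='default via 172.17.0.254 dev docker0 proto static'

# Same pipeline as the neutron-bin script: split on "dev" and take
# the first word that follows it.
tunnel_interface=$(echo "$route_line" | awk -F 'dev' '{ print $2; exit }' | awk '{ print $1 }')
echo "$tunnel_interface"   # prints "docker0"
```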