Routers do not work across network segments

Bug #1883845 reported by Reason li
This bug affects 1 person
Affects: networking-vpp
Status: New
Importance: Undecided
Assigned to: Onong Tayeng
Milestone: (none)

Bug Description

VPP version: vpp v20.01-release built by root on 8e5d994b3d26 at 2020-01-29T22:24:10
vpp-agent: master

Two network segments, 100.100.100.0/24 (GW 100.100.100.1, interface eth1.1234) and 200.200.200.0/24 (GW 200.200.200.1, interface eth1.2000), were created, and virtual machines VM1 (IP 100.100.100.15) and VM2 (IP 200.200.200.3) were created on them. Routers were created using the VPP router functionality. VM1 can reach its own gateway 100.100.100.1, and VM2 can reach its own gateway 200.200.200.1, but VM1 cannot communicate with VM2. What bug is causing this problem?
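For reference, a setup like this would normally be created through the standard OpenStack workflow. A rough, illustrative reproduction (the network/subnet/router names are made up; only the VLAN segments, CIDRs and physnet come from this report) would be:

openstack network create --provider-network-type vlan --provider-physical-network physnet1 --provider-segment 1234 net1
openstack subnet create --network net1 --subnet-range 100.100.100.0/24 --gateway 100.100.100.1 subnet1
openstack network create --provider-network-type vlan --provider-physical-network physnet1 --provider-segment 2000 net2
openstack subnet create --network net2 --subnet-range 200.200.200.0/24 --gateway 200.200.200.1 subnet2
openstack router create router1
openstack router add subnet router1 subnet1
openstack router add subnet router1 subnet2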

[root@control01 /]# vppctl
    _______ _ _ _____ ___
 __/ __/ _ \ (_)__ | | / / _ \/ _ \
 _/ _// // / / / _ \ | |/ / ___/ ___/
 /_/ /____(_)_/\___/ |___/_/ /_/

vpp# show interface
              Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count
eth1 1 up 9000/0/0/0 rx packets 6165
                                                                    rx bytes 627126
                                                                    tx packets 173
                                                                    tx bytes 10422
eth1.1234 2 up 0/0/0/0 rx packets 4074
                                                                    rx bytes 410434
                                                                    tx packets 115
                                                                    tx bytes 6900
eth1.2000 4 up 0/0/0/0 rx packets 2091
                                                                    rx bytes 216692
                                                                    tx packets 58
                                                                    tx bytes 3522
local0 0 down 0/0/0/0
loop0 3 up 1500/0/0/0 rx packets 4074
                                                                    rx bytes 337102
                                                                    tx packets 230
                                                                    tx bytes 12880
                                                                    drops 3959
                                                                    ip4 3959
loop1 5 up 1500/0/0/0 rx packets 2091
                                                                    rx bytes 179054
                                                                    tx packets 116
                                                                    tx bytes 6580
                                                                    drops 2033
                                                                    ip4 2026
                                                                    ip6 8
vpp# show ip fib
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] epoch:0 flags:none locks:[default-route:1, nat-hi:2, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
    [0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
    [0] [@0]: dpo-drop ip4
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:4 buckets:1 uRPF:3 to:[0:0]]
    [0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:3 buckets:1 uRPF:2 to:[0:0]]
    [0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:5 buckets:1 uRPF:4 to:[0:0]]
    [0] [@0]: dpo-drop ip4
ipv4-VRF:1, fib_index:1, flow hash:[src dst sport dport proto ] epoch:0 flags:none locks:[API:3, adjacency:1, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:7 to:[0:0]]
    [0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:10 buckets:1 uRPF:8 to:[0:0]]
    [0] [@0]: dpo-drop ip4
100.100.100.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:15 buckets:1 uRPF:14 to:[0:0]]
    [0] [@0]: dpo-drop ip4
100.100.100.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:14 buckets:1 uRPF:13 to:[0:0]]
    [0] [@4]: ipv4-glean: loop0: mtu:1500 fffffffffffffa163ee7b8630806
100.100.100.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:17 buckets:1 uRPF:18 to:[0:0]]
    [0] [@2]: dpo-receive: 100.100.100.1 on loop0
100.100.100.11/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:18 buckets:1 uRPF:17 to:[0:0]]
    [0] [@5]: ipv4 via 100.100.100.11 loop0: mtu:1500 fa163ec378d1fa163ee7b8630800
100.100.100.15/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:24 buckets:1 uRPF:26 to:[0:0]]
    [0] [@5]: ipv4 via 100.100.100.15 loop0: mtu:1500 fa163ef84ad7fa163ee7b8630800
100.100.100.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:16 buckets:1 uRPF:16 to:[0:0]]
    [0] [@0]: dpo-drop ip4
200.200.200.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:20 buckets:1 uRPF:21 to:[0:0]]
    [0] [@0]: dpo-drop ip4
200.200.200.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:19 buckets:1 uRPF:20 to:[0:0]]
    [0] [@4]: ipv4-glean: loop1: mtu:1500 fffffffffffffa163ef197700806
200.200.200.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:22 buckets:1 uRPF:25 to:[1:84]]
    [0] [@2]: dpo-receive: 200.200.200.1 on loop1
200.200.200.3/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:23 buckets:1 uRPF:24 to:[0:0] via:[1:84]]
    [0] [@5]: ipv4 via 200.200.200.3 loop1: mtu:1500 fa163e5421b3fa163ef197700800
200.200.200.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:21 buckets:1 uRPF:23 to:[0:0]]
    [0] [@0]: dpo-drop ip4
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:12 buckets:1 uRPF:10 to:[0:0]]
    [0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:11 buckets:1 uRPF:9 to:[0:0]]
    [0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:13 buckets:1 uRPF:11 to:[0:0]]
    [0] [@0]: dpo-drop ip4
vpp#

Tags: dpdk l3 router vpp
Onong Tayeng (onong) wrote :

VM1(100.100.100.0/24, eth1.1234) and VM2(200.200.200.0/24, eth1.2000) belong to two different tenant networks. Why would you expect them to be able to ping each other on the fly?

Reason li (lireason) wrote :

VPP provides routing and I wanted to use it, so I created a router, but I found that even with the router the VMs still cannot communicate with each other. I wonder what is wrong with the router provided by VPP?

Onong Tayeng (onong) wrote :

Could you please describe what you are trying to do and your environment in some detail?

Reason li (lireason) wrote :

I installed VPP in Docker and hoped that VPP could replace the routing function provided by the L3 agent. The neutron-server plug-in can now create a router in VPP, but I found that the routing function does not work. It could also be that there is something wrong with my configuration.

Onong Tayeng (onong) wrote :

Are you using networking-vpp?

Onong Tayeng (onong) wrote :

How did you create the two routers in VPP?

Reason li (lireason) wrote :

neutron-server sends the data to etcd, then the vpp-agent (networking-vpp) listens on etcd and calls the VPP interface to create the router in VPP. The problem now is that I have successfully created the router and the virtual machine can ping its gateway, but routing does not work and traffic cannot cross network segments.
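For what it's worth, the router data that neutron-server writes can be checked directly in etcd, for example with the v3 etcdctl client (the node IP in the prefix is the one that appears in a later comment of this report; adjust it to the node in question):

ETCDCTL_API=3 etcdctl get --prefix /networking-vpp/nodes/10.180.210.219/routers/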

Onong Tayeng (onong) wrote :

Ok, thanks for the info. I have some more clarity now.

The two VMs (VM1 and VM2) are in different tenant networks and hence they won't be able to ping each other on the fly. You need to add a route for that to happen. Could you please try the following commands in VPP:

vpp# ip route 200.200.200.0/24 table 1 via 200.200.200.1 next-hop-table 2
vpp# ip route 100.100.100.0/24 table 2 via 100.100.100.1 next-hop-table 1
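Once those are in place, the cross-VRF routes should be visible in each table; assuming the table numbering used in the commands above, something like the following should confirm they were installed:

vpp# show ip fib table 1 200.200.200.0/24
vpp# show ip fib table 2 100.100.100.0/24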

Reason li (lireason) wrote :

Thank you. I have another question: does networking-vpp support VXLAN? If so, how do I configure it? When I add a router interface on a VXLAN network, networking-vpp reports TypeError: 'NoneType'. From the material I have read, networking-vpp supports VXLAN-GPE. Can you tell me the difference between VXLAN, VXLAN-GPE, and GPE?
The following is the router data that the vpp-agent receives from etcd. If the 'net_type' field is vxlan, it reports an error. How should I deal with this?
09:18:48 [read-all] /networking-vpp/nodes/10.180.210.219/routers/4ae9fe63-d1c5-43b9-8162-6b97c92852d1/6a2a253a-add3-4f93-85c3-dfd4992f35e5 (modified 3982)
 > {
 > "prefixlen": 24,
 > "subnet_id": "546ff6db-2dd8-4aac-8f6f-8052cf95c3ed",
 > "segmentation_id": 200,
 > "fixed_ips": [
 > {
 > "subnet_id": "546ff6db-2dd8-4aac-8f6f-8052cf95c3ed",
 > "ip_address": "20.20.20.1"
 > }
 > ],
 > "mtu": 1500,
 > "network_id": "686f3dbf-e240-4c77-90f3-52fabbefe8e0",
 > "gateway_ip": "20.20.20.1",
 > "vrf_id": 1,
 > "net_type": "vlan",
 > "loopback_mac": "fa:16:3e:16:87:e2",
 > "port_id": "6a2a253a-add3-4f93-85c3-dfd4992f35e5",
 > "physnet": "physnet1",
 > "is_ipv6": false
 > }

Onong Tayeng (onong) wrote :

We support VXLAN-GPE although it currently has some issues that we are working on. The net_type is "gpe" and not "vxlan":

network create --provider-network-type gpe --provider-segment 99 net0
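For completeness, a fuller CLI sequence would look something like this (the network name, segment ID and subnet range here are only examples):

openstack network create --provider-network-type gpe --provider-segment 99 net0
openstack subnet create --network net0 --subnet-range 10.0.99.0/24 subnet0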

About the difference between stock VXLAN and VXLAN-GPE, I am afraid I am not too well versed in that, but I will get back to you with more details or will get someone from the team to provide them.

In the meantime, could you please share what it is that you are trying to do with VPP/networking-vpp? A high-level picture will do.

Reason li (lireason) wrote :

I see, so VPP does not currently support the VXLAN network type.

I want to use VPP for L3 forwarding acceleration.

Ian Wells (ijw-ubuntu) wrote :

To update this:

- we're looking into what was going on with your router, because it looks a bit suspicious.

- VXLAN and GPE (strictly 'VXLAN-GPE') are two different ways of using VXLAN packets. The control plane behaves differently, but if you're using it for a VM-to-VM network within your cloud you shouldn't see any difference. GPE networks are not suitable for provider networks.

I don't think you were clear in your answer - you are using networking-vpp both for its networks and its routers, correct?

Onong Tayeng (onong)
Changed in networking-vpp:
assignee: nobody → Onong Tayeng (onong)