Hi Eduardo and Dan,

Here is a more detailed look at my setup. I am running the entire "lab" on a remote virtual machine; that host VM runs Ubuntu 22.04. On it I use Vagrant to stand up three nested VMs: two Ubuntu 22.04 hosts named rack-1-host-1 and rack-1-host-2, and a Cumulus VX switch running Cumulus Linux 5.6.0 named rack-1-leaf-1. Note that I am leaning heavily on the Vagrantfile that Luis refers to in his blog post (https://luis5tb.github.io/bgp/2021/02/04/ovn-bgp-agent-testing-setup.html), but with some modifications.

- rack-1-host-1 uses eth1 (100.65.1.2/30) to connect to port swp1 (100.65.1.1/30) on rack-1-leaf-1. It also has a loopback address of 99.99.1.1, which was set as HOST_IP in the devstack local.conf; at a minimum that means it uses this IP as its Geneve tunnel endpoint. It hosts vm2-provider (172.24.4.93; details below).

- rack-1-host-2 uses eth1 (100.65.1.6/30) to connect to port swp2 (100.65.1.5/30) on rack-1-leaf-1. It also has a loopback address of 99.99.1.2, which was likewise set as HOST_IP in its devstack local.conf and serves as its Geneve tunnel endpoint. It hosts vm1-provider (172.24.4.56; details below).

Note that in order to get the Geneve tunnel to come up (i.e., pass BFD probes), I had to add an iptables NAT rule on each compute node that sources packets from the local loopback IP when the destination is the remote loopback IP; a sketch follows below.

The default route on all three Vagrant VMs points out of the vagrant interface. This needs to stay in place in order to provide connectivity to endpoints outside the lab (e.g. package repos). Although Luis's blog indicates (and his default FRR config on the hosts supports) that the leaf switch should provide a default route to the hosts, in my lab I have instead adjusted the routing policy applied to BGP on the hosts to allow the installation of /32 routes learned via BGP (also sketched below). You might not choose to do this in production, since it could lead to the installation of thousands of /32 routes on each host, but in this mini lab it is perfectly acceptable, as I'll never have more than ten or so instances yielding /32 routes.

As I've mentioned, devstack spun up a provider network named public with IPv4 subnet 172.24.4.0/24.
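For concreteness, the NAT rule on rack-1-host-1 is essentially the following (rack-1-host-2 has the mirror image, with 99.99.1.1 and 99.99.1.2 swapped); treat this as a sketch of the idea rather than my literal rule:

    # When the destination is host-2's loopback, source the packet from our own loopback
    sudo iptables -t nat -A POSTROUTING -d 99.99.1.2/32 -j SNAT --to-source 99.99.1.1

And the /32 policy change on the hosts amounts to an FRR prefix-list/route-map of roughly this shape (the names, ASN, and neighbor stanza here are illustrative placeholders, not my exact frr.conf):

    ! permit any host route (/32) inbound from the leaf
    ip prefix-list ALLOW-HOST-ROUTES seq 5 permit 0.0.0.0/0 ge 32
    !
    route-map ALLOW-IN permit 10
     match ip address prefix-list ALLOW-HOST-ROUTES
    !
    router bgp 64999
     address-family ipv4 unicast
      neighbor 100.65.1.1 route-map ALLOW-IN in
     exit-address-family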
Here are the route tables from the hosts and the leaf switch:

rack-1-leaf-1# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, A - Babel, D - SHARP, F - PBR, f - OpenFabric,
       Z - FRR, > - selected route, * - FIB route, q - queued,
       r - rejected, b - backup
       t - trapped, o - offload failure

K>* 0.0.0.0/0 [0/0] via 10.255.1.1, vagrant, 02w4d20h
C>* 10.255.1.0/24 is directly connected, vagrant, 02w4d20h
C>* 99.98.1.1/32 is directly connected, lo, 02w4d20h
B>* 99.99.1.1/32 [200/0] via 100.65.1.2, swp1, weight 1, 02w2d00h
B>* 99.99.1.2/32 [200/0] via 100.65.1.6, swp2, weight 1, 01w3d22h
C>* 100.65.1.0/30 is directly connected, swp1, 02w4d20h
C>* 100.65.1.4/30 is directly connected, swp2, 02w4d20h
B>* 172.24.4.18/32 [200/0] via 100.65.1.2, swp1, weight 1, 23:28:11
B>* 172.24.4.31/32 [200/0] via 100.65.1.2, swp1, weight 1, 5d05h53m
B>* 172.24.4.56/32 [200/0] via 100.65.1.6, swp2, weight 1, 00:26:55
B>* 172.24.4.93/32 [200/0] via 100.65.1.2, swp1, weight 1, 02w1d00h
B>* 172.24.4.105/32 [200/0] via 100.65.1.6, swp2, weight 1, 00:26:55
B>* 172.24.6.148/32 [200/0] via 100.65.1.2, swp1, weight 1, 4d22h55m

vagrant@rack-1-host-1:/opt/stack/devstack$ ip r
default via 10.255.1.1 dev vagrant
10.255.1.0/24 dev vagrant proto kernel scope link src 10.255.1.130
99.98.1.1 nhid 316 via 100.65.1.1 dev eth1 proto bgp src 99.99.1.1 metric 20
99.99.1.2 nhid 316 via 100.65.1.1 dev eth1 proto bgp src 99.99.1.1 metric 20
100.64.1.0/30 dev eth2 proto kernel scope link src 100.64.1.2
100.65.1.0/30 dev eth1 proto kernel scope link src 100.65.1.2
100.65.1.6 via 100.65.1.1 dev eth1
172.24.4.0/24 dev br-ex proto kernel scope link src 172.24.4.1
172.24.4.18 nhid 327 dev bgp-nic proto bgp metric 20
172.24.4.31 nhid 327 dev bgp-nic proto bgp metric 20
172.24.4.56 nhid 316 via 100.65.1.1 dev eth1 proto bgp src 99.99.1.1 metric 20
172.24.4.93 nhid 327 dev bgp-nic proto bgp metric 20
172.24.4.105 nhid 316 via 100.65.1.1 dev eth1 proto bgp src 99.99.1.1 metric 20
172.24.6.148 nhid 327 dev bgp-nic proto bgp metric 20
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

vagrant@rack-1-host-2:~$ ip r
default via 10.255.1.1 dev vagrant
10.255.1.0/24 dev vagrant proto kernel scope link src 10.255.1.235
99.98.1.1 nhid 103 via 100.65.1.5 dev eth1 proto bgp src 99.99.1.2 metric 20
99.99.1.1 via 100.65.1.5 dev eth1 proto static
100.64.1.4/30 dev eth2 proto kernel scope link src 100.64.1.6
100.65.1.2 via 100.65.1.5 dev eth1 proto static
100.65.1.4/30 dev eth1 proto kernel scope link src 100.65.1.6
172.24.4.18 nhid 103 via 100.65.1.5 dev eth1 proto bgp src 99.99.1.2 metric 20
172.24.4.31 nhid 103 via 100.65.1.5 dev eth1 proto bgp src 99.99.1.2 metric 20
172.24.4.56 nhid 101 dev bgp-nic proto bgp metric 20
172.24.4.93 nhid 103 via 100.65.1.5 dev eth1 proto bgp src 99.99.1.2 metric 20
172.24.4.105 nhid 101 dev bgp-nic proto bgp metric 20
172.24.6.148 nhid 103 via 100.65.1.5 dev eth1 proto bgp src 99.99.1.2 metric 20
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
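As a sanity check, the BGP sessions themselves are clearly up, given the /32s being exchanged in both directions above; if it would help, I can also attach the FRR session summaries from the leaf and both hosts, e.g.:

    sudo vtysh -c 'show bgp ipv4 unicast summary'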
The test instances are CirrOS VMs:

- vm2-provider: eth0 172.24.4.93/24, MAC fa:16:3e:26:77:6a, lives on rack-1-host-1. Its OpenStack port ID is 61318c8a-c449-4180-89da-791628576a0d, with name vm2-port.
- vm1-provider: eth0 172.24.4.56/24, MAC fa:16:3e:dd:6b:48, lives on rack-1-host-2. Its OpenStack port ID is 80da0b47-f403-4144-b172-a2ae160252ed, with name vm1-port.

Security groups are wide open:

vagrant@rack-1-host-1:/opt/stack/devstack$ openstack security group rule list 1196b968-b137-4aa5-9789-424a7e905128
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+----------------------+
| ID                                   | IP Protocol | Ethertype | IP Range  | Port Range | Direction | Remote Security Group | Remote Address Group |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+----------------------+
| 0b7a5e97-cbd0-4ed1-a5e1-7b7d860e2fe6 | icmp        | IPv4      | 0.0.0.0/0 |            | ingress   | None                  | None                 |
| 37fc0e41-c984-4cf0-9579-116ebd89572d | None        | IPv4      | 0.0.0.0/0 |            | egress    | None                  | None                 |
| 58252566-7651-4746-85e4-3694dd117c22 | tcp         | IPv4      | 0.0.0.0/0 |            | ingress   | None                  | None                 |
| 5a731189-9c7a-4554-8eab-c8144966a591 | None        | IPv6      | ::/0      |            | egress    | None                  | None                 |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+----------------------+

ovn-trace seems to think the two VMs are L2 adjacent, if I'm reading it correctly:

vagrant@rack-1-host-1:/opt/stack/devstack$ sudo ovn-trace --summary public 'ip4.src == 172.24.4.93 && ip4.dst == 172.24.4.56 && ip.ttl == 64 && icmp4 && inport == "vm2-port" && eth.src == '$VM2PROVIDER_MAC' && eth.dst =='$VM1PROVIDER_MAC
# icmp,reg14=0x5,vlan_tci=0x0000,dl_src=fa:16:3e:26:77:6a,dl_dst=fa:16:3e:dd:6b:48,nw_src=172.24.4.93,nw_dst=172.24.4.56,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=0,icmp_code=0
ingress(dp="public", inport="vm2-port") {
    reg0[15] = check_in_port_sec();
    next;
    reg0[0] = 1;
    next;
    ct_next;
    ct_next(ct_state=est|trk /* default (use --ct to customize) */) {
        reg0[8] = 1;
        reg0[10] = 1;
        next;
        reg8[16] = 1;
        next;
        reg8[16] = 0;
        reg8[17] = 0;
        reg8[18] = 0;
        next;
        reg8[16] = 0;
        reg8[17] = 0;
        reg8[18] = 0;
        next;
        outport = "vm1-port";
        output;
        egress(dp="public", inport="vm2-port", outport="vm1-port") {
            reg0[0] = 1;
            next;
            ct_next;
            ct_next(ct_state=est|trk /* default (use --ct to customize) */) {
                reg0[8] = 1;
                reg0[10] = 1;
                next;
                reg8[16] = 1;
                next;
                reg8[16] = 0;
                reg8[17] = 0;
                reg8[18] = 0;
                next;
                reg0[15] = check_out_port_sec();
                next;
                output;
                /* output to "vm1-port", type "" */;
            };
        };
    };
};

Same for the opposite direction:

sudo ovn-trace --summary public 'ip4.src == 172.24.4.56 && ip4.dst == 172.24.4.93 && ip.ttl == 64 && icmp4 && inport == "vm1-port" && eth.src == '$VM1PROVIDER_MAC' && eth.dst =='$VM2PROVIDER_MAC
# icmp,reg14=0x4,vlan_tci=0x0000,dl_src=fa:16:3e:dd:6b:48,dl_dst=fa:16:3e:26:77:6a,nw_src=172.24.4.56,nw_dst=172.24.4.93,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=0,icmp_code=0
ingress(dp="public", inport="vm1-port") {
    reg0[15] = check_in_port_sec();
    next;
    reg0[0] = 1;
    next;
    ct_next;
    ct_next(ct_state=est|trk /* default (use --ct to customize) */) {
        reg0[8] = 1;
        reg0[10] = 1;
        next;
        reg8[16] = 1;
        next;
        reg8[16] = 0;
        reg8[17] = 0;
        reg8[18] = 0;
        next;
        reg8[16] = 0;
        reg8[17] = 0;
        reg8[18] = 0;
        next;
        outport = "vm2-port";
        output;
        egress(dp="public", inport="vm1-port", outport="vm2-port") {
            reg0[0] = 1;
            next;
            ct_next;
            ct_next(ct_state=est|trk /* default (use --ct to customize) */) {
                reg0[8] = 1;
                reg0[10] = 1;
                next;
                reg8[16] = 1;
                next;
                reg8[16] = 0;
                reg8[17] = 0;
                reg8[18] = 0;
                next;
                reg0[15] = check_out_port_sec();
                next;
                output;
                /* output to "vm2-port", type "" */;
            };
        };
    };
};
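(For clarity, the $VM1PROVIDER_MAC / $VM2PROVIDER_MAC variables in the ovn-trace invocations above are just the two port MACs listed earlier, i.e.:

    export VM2PROVIDER_MAC="fa:16:3e:26:77:6a"   # vm2-port on rack-1-host-1
    export VM1PROVIDER_MAC="fa:16:3e:dd:6b:48"   # vm1-port on rack-1-host-2

You can see them echoed back in the dl_src/dl_dst fields of the trace output.)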
But connectivity between the VMs does not work:

$ hostname
vm2-provider
$ ping 172.24.4.56
PING 172.24.4.56 (172.24.4.56) 56(84) bytes of data.
^C
--- 172.24.4.56 ping statistics ---
11 packets transmitted, 0 received, 100% packet loss, time 10248ms

$ hostname
vm1-provider
$ ping 172.24.4.93
PING 172.24.4.93 (172.24.4.93) 56(84) bytes of data.
--- 172.24.4.93 ping statistics ---
8 packets transmitted, 0 received, 100% packet loss, time 7169ms

However, connectivity from the compute nodes to the VMs does work (i.e. routing works):

vagrant@rack-1-host-1:/opt/stack/devstack$ ping 172.24.4.93 -c 1
PING 172.24.4.93 (172.24.4.93) 56(84) bytes of data.
64 bytes from 172.24.4.93: icmp_seq=1 ttl=64 time=0.352 ms

--- 172.24.4.93 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms

vagrant@rack-1-host-1:/opt/stack/devstack$ ping 172.24.4.56 -c 1
PING 172.24.4.56 (172.24.4.56) 56(84) bytes of data.
64 bytes from 172.24.4.56: icmp_seq=1 ttl=62 time=2.21 ms

--- 172.24.4.56 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.206/2.206/2.206/0.000 ms

vagrant@rack-1-host-2:~$ ping 172.24.4.93 -c 1
PING 172.24.4.93 (172.24.4.93) 56(84) bytes of data.
64 bytes from 172.24.4.93: icmp_seq=1 ttl=62 time=11.9 ms

--- 172.24.4.93 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 11.910/11.910/11.910/0.000 ms

vagrant@rack-1-host-2:~$ ping 172.24.4.56 -c 1
PING 172.24.4.56 (172.24.4.56) 56(84) bytes of data.
64 bytes from 172.24.4.56: icmp_seq=1 ttl=64 time=0.739 ms

--- 172.24.4.56 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.739/0.739/0.739/0.000 ms

FWIW, I have also tested this without adjusting the local node BGP policies to accept /32 routes, and connectivity between the instances still fails.

What else would you like to know/see?
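PS: one thing I can grab in the meantime is packet captures while the pings run, to see how far the ICMP actually gets. Something like the following on each host (genev_sys_6081 is the usual OVS Geneve interface name and 6081 is Geneve's UDP port; I'd adjust to whatever ip link actually shows):

    sudo tcpdump -eni eth1 'icmp or udp port 6081'   # fabric-facing NIC: routed and/or encapsulated traffic
    sudo tcpdump -eni br-ex icmp                     # provider bridge
    sudo tcpdump -eni genev_sys_6081 icmp            # decapsulated tunnel traffic

Happy to attach those if useful.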