ICMPv6 Neighbor Advertisement packets from VM's link-local address dropped by OVS

Bug #2031087 reported by Tore Anderson
This bug affects 1 person
Affects: neutron
Status: Fix Released
Importance: Medium
Assigned to: Brian Haley

Bug Description

When a VM transmits an ICMPv6 Neighbour Advertisement packet from its link-local (fe80::/64) address, the NA packet ends up being dropped by OVS and is not forwarded to the external provider network. This causes connectivity issues, as the external router is unable to resolve the link-layer MAC address for the VM's link-local IPv6 address. NA packets from the VM's global IPv6 address are forwarded correctly.

Adding a security group rule such as "Egress,IPv6,Any,Any,::/0" does *not* help; the drop rule appears to be built-in and impossible to override. However, disabling port security altogether does make the problem go away.
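For reference, the rule and the port-security workaround roughly correspond to the following CLI commands (a sketch; the security group name and port ID are placeholders):

$ openstack security group rule create --egress --ethertype IPv6 --remote-ip ::/0 default
$ openstack port set --no-security-group --disable-port-security <port-id>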

We are running OpenStack Antelope, neutron 22.0.2 and OVN 23.03. Platform is AlmaLinux 9.2, RDO packages.

We believe, but are not 100% sure, that this problem may have started after upgrading from OVN 22.12. Reverting the upgrade to confirm is unfortunately a complicated task, so we would like to avoid that if possible.

Tcpdump can be used to confirm that the packets vanish inside OVS. First, on the tap interface connected to the VM, we can see the external router (fe80::669d:99ff:fe3a:3d58) transmit NS packets to the VM's solicited-node multicast address, and the VM (fe80::18:59ff:fe37:204a) respond with a unicast NA packet:

$ sudo tcpdump -i tapb7c872a4-a5 host fe80::669d:99ff:fe3a:3d58 and icmp6
08:41:24.201970 IP6 fe80::669d:99ff:fe3a:3d58 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has fe80::18:59ff:fe37:204a, length 32
08:41:24.202004 IP6 fe80::18:59ff:fe37:204a > fe80::669d:99ff:fe3a:3d58: ICMP6, neighbor advertisement, tgt is fe80::18:59ff:fe37:204a, length 32
08:41:25.366752 IP6 fe80::669d:99ff:fe3a:3d58 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has fe80::18:59ff:fe37:204a, length 32
08:41:25.366775 IP6 fe80::18:59ff:fe37:204a > fe80::669d:99ff:fe3a:3d58: ICMP6, neighbor advertisement, tgt is fe80::18:59ff:fe37:204a, length 32
08:41:26.374637 IP6 fe80::669d:99ff:fe3a:3d58 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has fe80::18:59ff:fe37:204a, length 32
08:41:26.374693 IP6 fe80::18:59ff:fe37:204a > fe80::669d:99ff:fe3a:3d58: ICMP6, neighbor advertisement, tgt is fe80::18:59ff:fe37:204a, length 32

However, when tcpdumping the same traffic on the external interface (bond0), filtered on the VLAN tag used by the provider network, the NA packets are no longer there:

$ sudo tcpdump -i bond0 vlan 882 and host fe80::669d:99ff:fe3a:3d58 and icmp6
08:41:24.201964 IP6 fe80::669d:99ff:fe3a:3d58 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has fe80::18:59ff:fe37:204a, length 32
08:41:25.366747 IP6 fe80::669d:99ff:fe3a:3d58 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has fe80::18:59ff:fe37:204a, length 32
08:41:26.374625 IP6 fe80::669d:99ff:fe3a:3d58 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has fe80::18:59ff:fe37:204a, length 32

This explains why there are so many NS packets - the router keeps retrying forever.

Compare this with NA packets from the VM's global address, which work as expected:

$ sudo tcpdump -ni tapb7c872a4-a5 ether host 64:9d:99:3a:3d:58 and icmp6 and net not fe80::/10
08:56:03.015378 IP6 2a02:c0:200:f012:ffff:0:1:1 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has 2a02:c0:200:f012:18:59ff:fe37:204a, length 32
08:56:03.015408 IP6 2a02:c0:200:f012:18:59ff:fe37:204a > 2a02:c0:200:f012:ffff:0:1:1: ICMP6, neighbor advertisement, tgt is 2a02:c0:200:f012:18:59ff:fe37:204a, length 32

$ sudo tcpdump -ni bond0 vlan 882 and ether host 64:9d:99:3a:3d:58 and icmp6 and net not fe80::/10
08:56:03.015292 IP6 2a02:c0:200:f012:ffff:0:1:1 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has 2a02:c0:200:f012:18:59ff:fe37:204a, length 32
08:56:03.015539 IP6 2a02:c0:200:f012:18:59ff:fe37:204a > 2a02:c0:200:f012:ffff:0:1:1: ICMP6, neighbor advertisement, tgt is 2a02:c0:200:f012:18:59ff:fe37:204a, length 32

We can further confirm it by finding an explicit drop rule within OVS:

$ sudo ovs-appctl dpif/dump-flows br-int | grep drop
recirc_id(0),in_port(8),eth(src=02:18:59:37:20:4a),eth_type(0x86dd),ipv6(src=fe80::18:59ff:fe37:204a,proto=58,hlimit=255,frag=no),icmpv6(type=136,code=0),nd(target=fe80::18:59ff:fe37:204a,tll=02:18:59:37:20:4a), packets:104766, bytes:9009876, used:0.202s, actions:drop

We see that there are a ton of built-in default rules pertaining to NA packets:

$ sudo ovs-ofctl dump-flows br-int | grep -c icmp_type=136
178

This is not unexpected, as ICMPv6 ND messages (NS/NA/RS/RA/etc.) are an essential part of the IPv6 protocol (like ARP in IPv4) and should not be dropped even if the VM is using a "block everything" security group. Our assumption is that the logic in these rules is flawed somehow, so they inadvertently end up blocking the NA packets from the VM's link-local address.

We have been unable to reproduce the problem using ofproto/trace, probably because it does not allow setting the icmp_type attribute for some reason. If we add ",icmp_type=136" to the command line below, it fails with "prerequisites not met for setting icmp_type". We have no idea what that missing prerequisite could possibly be - any suggestions would be greatly appreciated.

$ sudo ovs-appctl ofproto/trace br-int in_port=161,dl_src=02:18:59:37:20:4a,dl_dst=64:9d:99:3a:3d:58,icmp6,ipv6_src=fe80::18:59ff:fe37:204a,ipv6_dst=fe80::669d:99ff:fe3a:3d58
Flow: icmp6,in_port=161,vlan_tci=0x0000,dl_src=02:18:59:37:20:4a,dl_dst=64:9d:99:3a:3d:58,ipv6_src=fe80::18:59ff:fe37:204a,ipv6_dst=fe80::669d:99ff:fe3a:3d58,ipv6_label=0x00000,nw_tos=0,nw_ecn=0,nw_ttl=0,nw_frag=no,icmp_type=0,icmp_code=0

bridge("br-int")
----------------
 0. in_port=161, priority 100, cookie 0x2f9439aa
    set_field:0x3e->reg13
    set_field:0x3f->reg11
    set_field:0x3d->reg12
    set_field:0x9->metadata
    set_field:0x2->reg14
    resubmit(,8)
 8. metadata=0x9, priority 50, cookie 0x59f248ee
    set_field:0/0x1000->reg10
    resubmit(,73)
    73. ipv6,reg14=0x2,metadata=0x9,dl_src=02:18:59:37:20:4a,ipv6_src=fe80::18:59ff:fe37:204a, priority 90, cookie 0x2f9439aa
            resubmit(,74)
        74. No match.
            drop
    move:NXM_NX_REG10[12]->NXM_NX_XXREG0[111]
     -> NXM_NX_XXREG0[111] is now 0
    resubmit(,9)
 9. metadata=0x9, priority 0, cookie 0xcc8526d3
    resubmit(,10)
10. metadata=0x9, priority 0, cookie 0xc47fdc5d
    resubmit(,11)
11. metadata=0x9, priority 0, cookie 0xddf6f6b9
    resubmit(,12)
12. ipv6,metadata=0x9, priority 100, cookie 0x26ff06cc
    set_field:0x1000000000000000000000000/0x1000000000000000000000000->xxreg0
    resubmit(,13)
13. metadata=0x9, priority 0, cookie 0xda44fc0c
    resubmit(,14)
14. ipv6,reg0=0x1/0x1,metadata=0x9, priority 100, cookie 0xe977b8b8
    ct(table=15,zone=NXM_NX_REG13[0..15])
    drop
     -> A clone of the packet is forked to recirculate. The forked pipeline will be resumed at table 15.
     -> Sets the packet to an untracked state, and clears all the conntrack fields.

Final flow: icmp6,reg0=0x1,reg11=0x3f,reg12=0x3d,reg13=0x3e,reg14=0x2,metadata=0x9,in_port=161,vlan_tci=0x0000,dl_src=02:18:59:37:20:4a,dl_dst=64:9d:99:3a:3d:58,ipv6_src=fe80::18:59ff:fe37:204a,ipv6_dst=fe80::669d:99ff:fe3a:3d58,ipv6_label=0x00000,nw_tos=0,nw_ecn=0,nw_ttl=0,nw_frag=no,icmp_type=0,icmp_code=0
Megaflow: recirc_id=0,eth,icmp6,in_port=161,dl_src=02:18:59:37:20:4a,dl_dst=64:9d:99:3a:3d:58,ipv6_src=fe80::18:59ff:fe37:204a,ipv6_dst=fe80::669d:99ff:fe3a:3d58,nw_ttl=0,nw_frag=no,icmp_type=0x0/0x80,nd_target=::,nd_tll=00:00:00:00:00:00
Datapath actions: ct(zone=62),recirc(0x23b8)

===============================================================================
recirc(0x23b8) - resume conntrack with default ct_state=trk|new (use --ct-next to customize)
===============================================================================

Flow: recirc_id=0x23b8,ct_state=new|trk,ct_zone=62,eth,icmp6,reg0=0x1,reg11=0x3f,reg12=0x3d,reg13=0x3e,reg14=0x2,metadata=0x9,in_port=161,vlan_tci=0x0000,dl_src=02:18:59:37:20:4a,dl_dst=64:9d:99:3a:3d:58,ipv6_src=fe80::18:59ff:fe37:204a,ipv6_dst=fe80::669d:99ff:fe3a:3d58,ipv6_label=0x00000,nw_tos=0,nw_ecn=0,nw_ttl=0,nw_frag=no,icmp_type=0,icmp_code=0

bridge("br-int")
----------------
    thaw
        Resuming from table 15
15. ct_state=+new-est+trk,metadata=0x9, priority 7, cookie 0x94acb803
    set_field:0x80000000000000000000000000/0x80000000000000000000000000->xxreg0
    set_field:0x200000000000000000000000000/0x200000000000000000000000000->xxreg0
    resubmit(,16)
16. ipv6,reg0=0x80/0x80,reg14=0x2,metadata=0x9, priority 2002, cookie 0x4cdb3154
    set_field:0x2000000000000000000000000/0x2000000000000000000000000->xxreg0
    resubmit(,17)
17. metadata=0x9, priority 0, cookie 0x77e302aa
    resubmit(,18)
18. metadata=0x9, priority 0, cookie 0x97ee4db3
    resubmit(,19)
19. metadata=0x9, priority 0, cookie 0x6b46ef3d
    resubmit(,20)
20. metadata=0x9, priority 0, cookie 0x238074d5
    resubmit(,21)
21. metadata=0x9, priority 0, cookie 0x4b2f00cb
    resubmit(,22)
22. metadata=0x9, priority 0, cookie 0x1de1893e
    resubmit(,23)
23. metadata=0x9, priority 0, cookie 0x1b7c54a9
    resubmit(,24)
24. metadata=0x9, priority 0, cookie 0x91b808bf
    resubmit(,25)
25. metadata=0x9, priority 0, cookie 0x827a7c62
    resubmit(,26)
26. ipv6,reg0=0x2/0x2002,metadata=0x9, priority 100, cookie 0xf51cd562
    ct(commit,zone=NXM_NX_REG13[0..15],nat(src),exec(set_field:0/0x1->ct_mark))
    nat(src)
    set_field:0/0x1->ct_mark
     -> Sets the packet to an untracked state, and clears all the conntrack fields.
    resubmit(,27)
27. metadata=0x9, priority 0, cookie 0xe9561f7f
    resubmit(,28)
28. metadata=0x9, priority 0, cookie 0x426dc5bb
    resubmit(,29)
29. metadata=0x9, priority 0, cookie 0xeab289c
    resubmit(,30)
30. metadata=0x9, priority 0, cookie 0x620602c5
    resubmit(,31)
31. metadata=0x9, priority 0, cookie 0x5504e379
    resubmit(,32)
32. metadata=0x9, priority 0, cookie 0x5e1c22f5
    resubmit(,33)
33. metadata=0x9, priority 0, cookie 0x8233a381
    set_field:0->reg15
    resubmit(,71)
    71. No match.
            drop
    resubmit(,34)
34. reg15=0,metadata=0x9, priority 50, cookie 0x2dc6c0b8
    set_field:0x8001->reg15
    resubmit(,37)
37. priority 0
    resubmit(,39)
39. priority 0
    resubmit(,40)
40. reg15=0x8001,metadata=0x9, priority 100, cookie 0xa23e45f
    set_field:0x3->reg13
    set_field:0x1->reg15
    resubmit(,41)
    41. priority 0
            set_field:0->reg0
            set_field:0->reg1
            set_field:0->reg2
            set_field:0->reg3
            set_field:0->reg4
            set_field:0->reg5
            set_field:0->reg6
            set_field:0->reg7
            set_field:0->reg8
            set_field:0->reg9
            resubmit(,42)
        42. ipv6,reg15=0x1,metadata=0x9, priority 110, cookie 0x6ae0a674
            resubmit(,43)
        43. ipv6,reg15=0x1,metadata=0x9, priority 110, cookie 0x9147caee
            resubmit(,44)
        44. metadata=0x9, priority 0, cookie 0xcbd84a69
            resubmit(,45)
        45. ct_state=-trk,metadata=0x9, priority 5, cookie 0xec86b1c8
            set_field:0x100000000000000000000000000/0x100000000000000000000000000->xxreg0
            set_field:0x200000000000000000000000000/0x200000000000000000000000000->xxreg0
            resubmit(,46)
        46. metadata=0x9, priority 0, cookie 0x9ae00a32
            resubmit(,47)
        47. metadata=0x9, priority 0, cookie 0x98ca16da
            resubmit(,48)
        48. metadata=0x9, priority 0, cookie 0x7eb5b6c5
            resubmit(,49)
        49. metadata=0x9, priority 0, cookie 0x149995b7
            resubmit(,50)
        50. metadata=0x9, priority 0, cookie 0x9158534f
            set_field:0/0x1000->reg10
            resubmit(,75)
            75. No match.
                    drop
            move:NXM_NX_REG10[12]->NXM_NX_XXREG0[111]
             -> NXM_NX_XXREG0[111] is now 0
            resubmit(,51)
        51. metadata=0x9, priority 0, cookie 0xb046f48c
            resubmit(,64)
        64. priority 0
            resubmit(,65)
        65. reg15=0x1,metadata=0x9, priority 100, cookie 0xfed4d5d9
            push_vlan:0x8100
            set_field:4978->vlan_vid
            output:69

            bridge("br-ex")
            ---------------
                 0. priority 0
                    NORMAL
                     -> forwarding to learned port
            pop_vlan
    set_field:0x8001->reg15

Final flow: recirc_id=0x23b8,eth,icmp6,reg0=0x300,reg11=0x3f,reg12=0x3d,reg13=0x3,reg14=0x2,reg15=0x8001,metadata=0x9,in_port=161,vlan_tci=0x0000,dl_src=02:18:59:37:20:4a,dl_dst=64:9d:99:3a:3d:58,ipv6_src=fe80::18:59ff:fe37:204a,ipv6_dst=fe80::669d:99ff:fe3a:3d58,ipv6_label=0x00000,nw_tos=0,nw_ecn=0,nw_ttl=0,nw_frag=no,icmp_type=0,icmp_code=0
Megaflow: recirc_id=0x23b8,ct_state=+new-est-rel-rpl-inv+trk,ct_mark=0/0x1,eth,icmp6,in_port=161,dl_src=02:18:59:37:20:4a,dl_dst=64:9d:99:3a:3d:58,ipv6_src=fe80::/10,ipv6_dst=fe80::669d:99ff:fe3a:3d58,nw_ttl=0,nw_frag=no,icmp_type=0x0/0x80
Datapath actions: ct(commit,zone=62,mark=0/0x1,nat(src)),push_vlan(vid=882,pcp=0),2

Tags: ipv6 ovn
Revision history for this message
Brian Haley (brian-haley) wrote :

So I wasn't able to reproduce this yet locally, although I've only been able to test on a private network which I assumed would show the same issue. But I figured I'd add a note with at least what I saw.

First, the OVS firewall will add a flow for the link-local address for IPv6 NA traffic; it was added in https://review.opendev.org/c/openstack/neutron/+/783743, which shows it backported to basically all releases.

First I tested with ML2/OVS and the OVS firewall driver; testing was with devstack on the master branch (Bobcat). After booting a VM I saw the following flows for NA:

$ sudo ovs-appctl dpif/dump-flows br-int | grep 136 | grep fe80
recirc_id(0),in_port(9),ct_state(-trk),eth(src=fa:16:3e:aa:fc:06,dst=fa:16:3e:ae:2e:fe),eth_type(0x86dd),ipv6(src=fe80::f816:3eff:feaa:fc06,proto=58,frag=no),key32(00 00/00 00),icmpv6(type=136), packets:0, bytes:0, used:never, actions:6
recirc_id(0),in_port(9),ct_state(-trk),eth(src=fa:16:3e:aa:fc:06,dst=33:33:00:00:00:01),eth_type(0x86dd),ipv6(src=fe80::f816:3eff:feaa:fc06,proto=58,frag=no),key32(00 00/00 00),icmpv6(type=136), packets:2, bytes:172, used:9.348s, actions:push_vlan(vid=1,pcp=0),1,pop_vlan,4,5,6

So no actions=drop rules. Pings from the qrouter namespace to the LL address worked fine.

Then I tried with ML2/OVN, same setup. After booting a VM I saw the following flows for NA:

$ sudo ovs-ofctl dump-flows br-int | grep 136 | grep 35d9
 cookie=0x2467f8c7, duration=248.112s, table=9, n_packets=9, n_bytes=774, idle_age=136, priority=90,ipv6,reg14=0x4,metadata=0x1,dl_src=fa:16:3e:6a:35:d9,ipv6_src=fe80::f816:3eff:fe6a:35d9 actions=resubmit(,10)
 cookie=0x0, duration=248.110s, table=10, n_packets=0, n_bytes=0, idle_age=248, priority=90,icmp6,reg14=0x4,metadata=0x1,dl_src=fa:16:3e:6a:35:d9,nw_ttl=255,icmp_type=136,icmp_code=0,nd_target=fe80::f816:3eff:fe6a:35d9 actions=conjunction(1093703813,1/2)
 cookie=0x0, duration=248.110s, table=10, n_packets=0, n_bytes=0, idle_age=248, priority=90,icmp6,reg14=0x4,metadata=0x1,dl_src=fa:16:3e:6a:35:d9,nw_ttl=255,icmp_type=136,icmp_code=0,nd_target=fd2c:af9f:6196:0:f816:3eff:fe6a:35d9 actions=conjunction(1093703813,1/2)
 cookie=0x9e65c598, duration=248.113s, table=48, n_packets=2, n_bytes=204, idle_age=136, priority=90,ipv6,reg15=0x4,metadata=0x1,dl_dst=fa:16:3e:6a:35:d9,ipv6_dst=fe80::f816:3eff:fe6a:35d9 actions=resubmit(,49)

Again, no actions=drop rules, and pings worked fine.

Things could be different on a provider network; when I booted a similar VM using a shared network it had other issues, but I just don't have time at the moment to debug that.

This was all running with OVN 22.03.2 from the Ubuntu cloud archives.

Revision history for this message
Tore Anderson (toreanderson) wrote :

Hi Brian, thanks for your reply.

I've done some more testing and it turns out I can reproduce it using two VM instances on a regular tenant network (Geneve encap). They're both in the same "default" security group, so all traffic between them should be allowed. The subnet does have a GUA IPv6 prefix, and the ports have GUA addresses assigned.

The symptoms are the same as for the VM on the VLAN provider network:

On the hypervisor hosting the VM with LLA fe80::f816:3eff:fe9b:a982:
--------------------------------------------------------------------

$ sudo ovs-ofctl dump-flows br-int | egrep 'icmp_type=136.*:(9a5e|a982)'
 cookie=0x8a1ff59b, duration=1423.329s, table=74, n_packets=0, n_bytes=0, idle_age=1423, priority=90,icmp6,reg14=0x9,metadata=0x26,nw_ttl=225,icmp_type=136,icmp_code=0,nd_target=fe80::f816:3eff:fe9b:a982,nd_tll=00:00:00:00:00:00 actions=load:0->NXM_NX_REG10[12]
 cookie=0x8a1ff59b, duration=1423.329s, table=74, n_packets=0, n_bytes=0, idle_age=1423, priority=90,icmp6,reg14=0x9,metadata=0x26,nw_ttl=225,icmp_type=136,icmp_code=0,nd_target=fe80::f816:3eff:fe9b:a982,nd_tll=fa:16:3e:9b:a9:82 actions=load:0->NXM_NX_REG10[12]

$ sudo ovs-appctl dpif/dump-flows br-int | grep 136 | grep fe80
recirc_id(0),in_port(5),eth(src=fa:16:3e:9b:a9:82),eth_type(0x86dd),ipv6(src=fe80::f816:3eff:fe9b:a982,proto=58,hlimit=255,frag=no),icmpv6(type=136,code=0),nd(target=fe80::f816:3eff:fe9b:a982,tll=00:00:00:00:00:00), packets:260, bytes:20280, used:5.170s, actions:drop

On the hypervisor hosting the VM with LLA fe80::f816:3eff:fead:9a5e:
--------------------------------------------------------------------

$ sudo ovs-ofctl dump-flows br-int | egrep 'icmp_type=136.*:(9a5e|a982)'
 cookie=0x89bf2021, duration=1434.895s, table=74, n_packets=0, n_bytes=0, idle_age=1434, priority=90,icmp6,reg14=0x8,metadata=0x26,nw_ttl=225,icmp_type=136,icmp_code=0,nd_target=fe80::f816:3eff:fead:9a5e,nd_tll=00:00:00:00:00:00 actions=load:0->NXM_NX_REG10[12]
 cookie=0x89bf2021, duration=1434.895s, table=74, n_packets=0, n_bytes=0, idle_age=1434, priority=90,icmp6,reg14=0x8,metadata=0x26,nw_ttl=225,icmp_type=136,icmp_code=0,nd_target=fe80::f816:3eff:fead:9a5e,nd_tll=fa:16:3e:ad:9a:5e actions=load:0->NXM_NX_REG10[12]

$ sudo ovs-appctl dpif/dump-flows br-int | grep 136 | grep fe80
recirc_id(0),in_port(5),eth(src=fa:16:3e:ad:9a:5e),eth_type(0x86dd),ipv6(src=fe80::f816:3eff:fead:9a5e,dst=fe80::/ffff::,proto=58,hlimit=255,frag=no),icmpv6(type=136,code=0),nd(target=fe80::f816:3eff:fead:9a5e,tll=00:00:00:00:00:00), packets:260, bytes:20280, used:4.587s, actions:drop

'ndisc6 ${LLA_OF_VM1} eth0' on VM2 and vice versa consistently fails. 'tcpdump -ni eth0 icmp6' on the target VM shows the NS towards the solicited-node multicast address arriving and being answered with an NA, but the NA never shows up in a corresponding tcpdump session on the source VM.
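To repeat the check by hand, something like the following (a sketch using the two link-local addresses quoted above; interface names may differ):

# On the VM with LLA fe80::f816:3eff:fead:9a5e, solicit the other VM's link-local address:
$ ndisc6 fe80::f816:3eff:fe9b:a982 eth0
# Meanwhile, on the VM being solicited, watch the NS arrive and the NA go out:
$ sudo tcpdump -ni eth0 icmp6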

I see that I am running a newer OVN version (23.03) than you are (22.03.2), which I suppose might be relevant.

I'll do some more digging and let you know if I find out anything else.

Revision history for this message
yaoguang (yaoguang100) wrote :
Revision history for this message
Tore Anderson (toreanderson) wrote :
Download full text (10.4 KiB)

I figured out how to do some more tracing/debugging of the vanishing NA packets, and it seems to be a mismatch between what actually happens in OVS (drop) and what OVN thinks should be happening (forwarding). I'll walk you through it below.

1) First I capture an ICMPv6 NA packet that is dropped from the :a982 VM:

$ tcpdump -XXvpnei tap8bdc141e-fe -Q in -c 1 icmp6[0] = 136
dropped privs to tcpdump
tcpdump: listening on tap8bdc141e-fe, link-type EN10MB (Ethernet), snapshot length 262144 bytes
10:22:08.178527 fa:16:3e:9b:a9:82 > fa:16:3e:ad:9a:5e, ethertype IPv6 (0x86dd), length 78: (hlim 255, next-header ICMPv6 (58) payload length: 24) fe80::f816:3eff:fe9b:a982 > fe80::f816:3eff:fead:9a5e: [icmp6 sum ok] ICMP6, neighbor advertisement, length 24, tgt is fe80::f816:3eff:fe9b:a982, Flags [solicited]
        0x0000: fa16 3ead 9a5e fa16 3e9b a982 86dd 6000 ..>..^..>.....`.
        0x0010: 0000 0018 3aff fe80 0000 0000 0000 f816 ....:...........
        0x0020: 3eff fe9b a982 fe80 0000 0000 0000 f816 >...............
        0x0030: 3eff fead 9a5e 8800 ad9d 4000 0000 fe80 >....^....@.....
        0x0040: 0000 0000 0000 f816 3eff fe9b a982 ........>.....
1 packet captured
1 packet received by filter
0 packets dropped by kernel

2) Then I determine the ofport attribute for the VM-facing tap interface:

$ ovs-vsctl get Interface tap8bdc141e-fe ofport
15

3) Using the packet hex string + the ofport attribute found above, I can now do a OVS trace, which can then be run through OVN detrace to be annotated with additional information:

$ ovs-appctl ofproto/trace br-int in_port=15 fa163ead9a5efa163e9ba98286dd6000000000183afffe80000000000000f8163efffe9ba982fe80000000000000f8163efffead9a5e8800ad9d40000000fe80000000000000f8163efffe9ba982 | ovn-detrace (…)
Flow: icmp6,in_port=15,vlan_tci=0x0000,dl_src=fa:16:3e:9b:a9:82,dl_dst=fa:16:3e:ad:9a:5e,ipv6_src=fe80::f816:3eff:fe9b:a982,ipv6_dst=fe80::f816:3eff:fead:9a5e,ipv6_label=0x00000,nw_tos=0,nw_ecn=0,nw_ttl=255,nw_frag=no,icmp_type=136,icmp_code=0,nd_target=fe80::f816:3eff:fe9b:a982,nd_sll=00:00:00:00:00:00,nd_tll=00:00:00:00:00:00

bridge("br-int")
----------------
0. in_port=15, priority 100, cookie 0x8a1ff59b
set_field:0x46->reg13
set_field:0x45->reg11
set_field:0x44->reg12
set_field:0x26->metadata
set_field:0x9->reg14
resubmit(,8)
  * Logical datapath: "neutron-7f71277c-aae0-4037-8392-2398a8be9929" (9c1b45e9-0135-4301-b7a8-4281e843e306)
  * Port Binding: logical_port "8bdc141e-fed6-4100-bf80-95a36c8a9b97", tunnel_key 9, chassis-name "d48404f8-e1b9-5184-bdda-bd7ce0ca1d58", chassis-str "node08"
8. metadata=0x26, priority 50, cookie 0x59f248ee
set_field:0/0x1000->reg10
resubmit(,73)
  * Logical datapaths:
  * "neutron-b7a0e814-603b-4a7e-9bc2-c5a14752fcbf" (0486544d-1050-408d-936f-aaaa96c71df7) [ingress]
  * "neutron-4f34ef66-41ad-4393-a0e7-c46981d55740" (0ef0124f-4c01-481b-9bd8-e0cbdd60883b) [ingress]
  * "neutron-50ba6904-3111-43a3-8fe4-d2bb6d6d120c" (14741cc3-fe87-4605-b17a-72db8c9f46f9) [ingress]
  * "neutron-75c4c616-b079-42f2-bf62-c4049a16ac31" (155bf97f-55d0-40f3-9246-731983eb9fc5) [ingress]
  * "neutron-54578d02-3847-442c-adc0-1a7ad144ef05" (25e90264-92e7-4941-aafa-96b7f08...

Changed in neutron:
importance: Undecided → Medium
Changed in neutron:
assignee: nobody → Brian Haley (brian-haley)
Revision history for this message
Brian Haley (brian-haley) wrote :

Hi Tore,

Sorry for the slow response, just busy with a lot of things. Thanks for reproducing this with a tenant network, it makes it easier to debug (for me).

I will try to reproduce this with a later OVN version; I should be able to trigger a build from source in devstack. But I am still curious why you have a 'drop' rule, as I don't see neutron adding one, unless ovn-controller is doing that.

-Brian

Revision history for this message
Tore Anderson (toreanderson) wrote :

Hi Brian, no worries, I'm just glad someone is taking a look at all!

I have some good news: the issue is reproducible using Devstack branch stable/2023.1 on Rocky 9. I have such a Devstack instance up and running right now that exhibits the issue, where the two VMs cannot resolve each other's LLAs because the NAs are dropped.

When Devstack is running on Ubuntu 22.04, on the other hand, it works fine, both using the stable/2023.1 and master branches.

I have not been able to test Devstack master on Rocky 9 due to bug #2031639.

I would be happy to give you access to all these Devstack instances, so you can see it for yourself. If you would like that, just send me an SSH pubkey and I will add it to the authorized_keys files. (You could also send it to me on IRC - I'm tore@#openstack-neutron.)

Tore

Revision history for this message
Tore Anderson (toreanderson) wrote :

A workaround for bug #2031639 was found, so now I also have a Devstack master testbed on Rocky 9. It also exhibits this bug. So to summarise my testing:

Platform | Devstack version | Neutron version | OVN version | Exhibits bug
Ubuntu-22.04 | stable/2023.1 | 22.0.3.dev43 | 22.03.2-0ubuntu0.22.04.1 | No
Ubuntu-22.04 | master | 23.0.0.0b3.dev261 | 22.03.2-0ubuntu0.22.04.1 | No
Rocky-9 | stable/2023.1 | 22.0.3.dev43 | 23.03.0-69.el9s | Yes
Rocky-9 | master | 23.0.0.0b3.dev263 | 23.03.0-69.el9 | Yes

The offer to give you access to all of the above testbeds stands; I'll leave them running for now.
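For anyone wanting to set up a similar testbed, a hedged sketch of the ML2/OVN-relevant bits of a devstack local.conf; the variable names are standard devstack options, the branch values are illustrative, and recent devstack defaults to ML2/OVN anyway:

[[local|localrc]]
Q_AGENT=ovn
Q_ML2_PLUGIN_MECHANISM_DRIVERS=ovn,logger
Q_ML2_TENANT_NETWORK_TYPE=geneve
# Build OVN/OVS from source instead of using the distro packages, to pin a version:
OVN_BUILD_FROM_SOURCE=True
OVN_BRANCH=v23.03.0
OVS_BRANCH=branch-3.1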

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/neutron/+/892012

Revision history for this message
Brian Haley (brian-haley) wrote :

So I hate to say, but only being broken on Rocky 9 with their OVN packages points at a distro issue.

Is Rocky 9 close to CentOS 9? If so we can create a test patch and run the experimental jobs; there is a neutron-ovn-tempest-ovs-master-centos-9-stream job that will build OVS and OVN from source on CentOS 9.

https://review.opendev.org/c/openstack/neutron/+/892012

Revision history for this message
Tore Anderson (toreanderson) wrote :

We're seeing it on AlmaLinux 9 as well (our production rig), so it is definitely not a Rocky 9-specific issue. It is possible that it comes from RHEL 9 (both AlmaLinux and Rocky are RHEL clones). Both are also related to CentOS Stream 9, as CentOS Stream 9 is upstream of RHEL 9.

The RPM packages for all of the above come from https://www.rdoproject.org/.

I personally believe it is more likely that it is an upstream OVN issue, and that the reason it does not (yet!) occur on Ubuntu 22.04 is that its Deb packages contain an older OVN version than the RPMs provided for Rocky/AlmaLinux/RHEL/CentOS by the RDO Project.

Revision history for this message
Brian Haley (brian-haley) wrote :

Yeah, it might be a bug in OVN.

I found the changelog for the 23.03 code in CentOS:

https://cbs.centos.org/koji/buildinfo?buildID=49874

And looking at that, Ihar made some changes in the IPv6 code regarding stateless security groups, so I wonder if something got broken. I pinged him on IRC to see if he had any thoughts on it; his nick is ihrachys.

The experimental pipeline is currently broken trying to build OVN and OVS from source, so testing manually seems the only way at the moment.

Revision history for this message
Tore Anderson (toreanderson) wrote :

Nailed it:

[rocky@devstack-2 ovn]$ git bisect bad
8cab00bdb581f714dcfdc4fb08affe0319c211a6 is the first bad commit
commit 8cab00bdb581f714dcfdc4fb08affe0319c211a6
Author: Numan Siddique <email address hidden>
Date: Thu May 19 11:17:39 2022 -0400

    ovn-controller: Add OF rules for port security.

    ovn-controller will now generate OF rules for in port security and
    out port security checks in OF tables - 73, 74 and 75. These flows
    will be added if a port binding has port security defined in the
    Port_Binding.Port_Security column which is newly added in this patch.

    The idea of this patch is to program these OF rules directly within
    the ovn-controller instead of ovn-northd generating logical flows.
    This helps in reducing the numnber of logical flows overall in the
    Southbound database.

    Upcoming patches will add the necessary OVN actions which ovn-northd
    can make use of.

    Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=2078927
    Suggested-by: Dumitru Ceara <email address hidden>
    Acked-by: Mark Michelson <email address hidden>
    Signed-off-by: Numan Siddique <email address hidden>
    Signed-off-by: Mark Michelson <email address hidden>

 controller/binding.c | 78 ++++-
 controller/binding.h | 23 +-
 controller/lflow.c | 792 ++++++++++++++++++++++++++++++++++++++++++-
 controller/lflow.h | 4 +
 controller/ovn-controller.c | 21 +-
 include/ovn/actions.h | 4 +
 include/ovn/logical-fields.h | 1 +
 ovn-sb.ovsschema | 7 +-
 ovn-sb.xml | 15 +
 tests/ovn.at | 288 ++++++++++++++++
 10 files changed, 1199 insertions(+), 34 deletions(-)

Reproduced and bisected this using devstack master (21eac99e4e342108d7905f64c3e5474b70c9273f) on Rocky 9, then building and installing OVN and OVS from source into /opt/ovn and overriding the ExecStop/ExecStart settings in ovn-controller/ovn-northd.service to use /opt/ovn/share/ovn/scripts/ovn-ctl instead of the default from the RDO RPM.
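For context, a sketch of the bisect loop described above; the good/bad refs, configure flags, and service handling are assumptions pieced together from this thread rather than the exact commands used, and an already-built OVS source tree is assumed for --with-ovs-source:

cd ~/src/ovn
git bisect start
git bisect bad v23.03.0      # release known to exhibit the bug
git bisect good v22.03.0     # release known to behave
# Repeat for each commit git bisect checks out:
./boot.sh
./configure --prefix=/opt/ovn --with-ovs-source="$HOME/src/ovs"
make -j"$(nproc)" && sudo make install
sudo /opt/ovn/share/ovn/scripts/ovn-ctl restart_northd
sudo /opt/ovn/share/ovn/scripts/ovn-ctl restart_controller
# ...then test whether the VMs can resolve each other's link-local addresses,
# and run "git bisect good" or "git bisect bad" accordingly.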

Revision history for this message
Ihar Hrachyshka (ihar-hrachyshka) wrote :

@Tore, thanks for bisecting! I was actually looking at the exact same commit just now because it seems like the potential culprit of the regression in 23.03 from just reading the code. So - good coincidence!

Could you please share `dump-flows br-int` for the good and bad trees of OVN? We can then compare the exact tables where port_security is implemented to see where the difference hides.

Thanks in advance.

I will also try to pull Numan here since he's the author of the commit and may have an idea what happens. (I need to check when he's back in office though.)

Revision history for this message
Tore Anderson (toreanderson) wrote :
Revision history for this message
Tore Anderson (toreanderson) wrote :
Revision history for this message
Tore Anderson (toreanderson) wrote :
Revision history for this message
Tore Anderson (toreanderson) wrote :
Revision history for this message
Tore Anderson (toreanderson) wrote :

I've now attached output from dump-flows from the first bad commit (8cab00bdb) and from the last good commit (c0224a14f). While the dumps were performed, the two VMs were constantly soliciting each other's link-local and ULA addresses.

If you want access to the test VM, just send me an SSH public key.

Revision history for this message
Ihar Hrachyshka (ihar-hrachyshka) wrote :

Hi Tore,

Regarding your bisect, I think the exact commit you identified may expose the failing behavior because it lacks the full implementation of the feature, so it doesn't create any flows in table=7[3-5] at all.

Could you please repeat your dump-flows with commits included up to 94974a02bfd5b24b89a6a7471133730fd7fa1356? These will show the final set of flows as created by ovn-controller (and not through Logical_Flows).

Thank you!

Revision history for this message
Tore Anderson (toreanderson) wrote :
Revision history for this message
Tore Anderson (toreanderson) wrote :
Revision history for this message
Tore Anderson (toreanderson) wrote :

Certainly, @Ihar. I've attached updated dumps now. The observed behaviour is the same as with 8cab00bdb.

Revision history for this message
Tore Anderson (toreanderson) wrote (last edit ):

For what it's worth, the contents of the tables you mentioned appear to be identical too (when stripping out the duration, packet, and byte counters):

[tore:/tmp] $ sed -n '/table=7\(3\|4\|5\)/s/, duration=.*, priority=//p' ovs-ofctl_dump-flows_br-int.8cab00bdb.txt | md5sum
0eecf2a4f76f6ee954153f4764964ad6 -
[tore:/tmp] $ sed -n '/table=7\(3\|4\|5\)/s/, duration=.*, priority=//p' ovs-ofctl_dump-flows_br-int.94974a02b.txt | md5sum
0eecf2a4f76f6ee954153f4764964ad6 -

It is only the last good commit that doesn't create any flows in those tables:

[tore:/tmp] $ grep -c table=7 ovs-ofctl_dump-flows_br-int.c0224a14f.txt
0

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on neutron (master)

Change abandoned by "Brian Haley <email address hidden>" on branch: master
Review: https://review.opendev.org/c/openstack/neutron/+/892012
Reason: Tore found the change that broke this in OVN

Revision history for this message
Ihar Hrachyshka (ihar-hrachyshka) wrote :

Could you please attach the relevant Logical_Flow records from NB for the affected port? And also, while at it, please show the output of `openstack port show` and `ovn-nbctl list Logical_Switch_Port` and `ovn-sbctl list Port_Binding` for the relevant ports. Thank you!

I wonder if you have relevant port security Logical_Flows created for the port in SB database...

Revision history for this message
Ihar Hrachyshka (ihar-hrachyshka) wrote :

Never mind, I think I know what's going on: I've found a bug in the new table=7* flows. The higher-priority flows that are supposed to allow NAs for the IPv6 addresses match against nw_ttl=225 instead of nw_ttl=255, so legitimate NAs do not match and fall through to the default drop flow.

I will post the fix upstream this week. I will link it here.
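For illustration (simplified from the table=74 flows quoted in an earlier comment, not the actual patch), the difference is just the hop-limit match in the allow flows; RFC 4861 requires ND messages to carry a hop limit of 255, so a match on 225 can never fire:

# broken: the allow-NA flow matches nw_ttl=225, which a valid NA never has, so the packet falls through to the lower-priority drop flow
table=74, priority=90,icmp6,nw_ttl=225,icmp_type=136,icmp_code=0,nd_target=fe80::f816:3eff:fe9b:a982 actions=load:0->NXM_NX_REG10[12]
# fixed: match the mandatory hop limit of 255
table=74, priority=90,icmp6,nw_ttl=255,icmp_type=136,icmp_code=0,nd_target=fe80::f816:3eff:fe9b:a982 actions=load:0->NXM_NX_REG10[12]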

Revision history for this message
Ihar Hrachyshka (ihar-hrachyshka) wrote :

Fix: https://patchwork<email address hidden>/

Tore, if you can validate it helps, I would appreciate it.

Revision history for this message
Tore Anderson (toreanderson) wrote :

Good catch! Matching on TTL=225 is clearly wrong.

However, I tried applying your patch on top of current main branch, but the nw_ttl=225 flows still show up in the output of ovs-ofctl dump-flows br-int anyway, and it still does not work.

Reading the OVN code I do not understand how this could be, unless these nw_ttl=225 flows are persistently cached somewhere and thus survive upgrades and restarts of ovn-controller and ovn-northd. (Which would be a problem for the production rig as well.)

In trying to figure this out and make sure I started over completely from scratch, I've managed to completely hose my devstack...now the OVN services are complaining about «X table in OVN_{North,South}bound database lacks Y column (database needs upgrade?)» and «OVN_{North,South}bound database lacks X table». Tried to recover from this with

sudo /opt/ovn/bin/ovsdb-tool create /var/lib/ovn/ovnnb_db.db /opt/ovn/share/ovn/ovn-nb.ovsschema
sudo /opt/ovn/bin/ovsdb-tool create /var/lib/ovn/ovnsb_db.db /opt/ovn/share/ovn/ovn-sb.ovsschema

…but no cigar. I'll see if I can figure out how to recover; if not, I'll try to set up a brand new devstack and see if I can make that work.

Revision history for this message
Tore Anderson (toreanderson) wrote (last edit ):

Okay, I figured out what was going wrong. Even though I had installed OVN into /opt/ovn and started it using "/opt/ovn/share/ovn/scripts/ovn-ctl start_{controller,northd}", the ovn-ctl script went on to start the OVN binaries installed in /usr/bin instead, for some dumb reason. After moving the /usr/bin binaries out of the way, ovn-ctl started the right binaries from /opt/ovn instead.

That allowed me to confirm that the patch works as expected. With it, the nw_ttl=225 flows are replaced with nw_ttl=255 ones, and IPv6 neighbour discovery between the VMs works again. Great job!

By the way, I think this patch should be cherry-picked onto the stable branches that contain 8cab00bdb581 ("ovn-controller: Add OF rules for port security.").
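A quick way to check whether a given compute node still carries the broken flows (a sketch following the dump commands used earlier in this report; a fixed build should print 0):

$ sudo ovs-ofctl dump-flows br-int | grep -c nw_ttl=225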

Revision history for this message
Ihar Hrachyshka (ihar-hrachyshka) wrote :

Yes, the patch will be cherry-picked into all affected (supported) branches as part of the master merge, by whoever is merging it in master.

Revision history for this message
Ihar Hrachyshka (ihar-hrachyshka) wrote :

And thanks for confirming the patch works. I don't know what the procedure for the LP bug for neutron would be, considering that the bug is in OVN. Do we close it?

Revision history for this message
Brian Haley (brian-haley) wrote :

Let's keep it open until the patch merges and we can list the OVN versions that have it. It's already assigned to me so I can update it then.

Changed in neutron:
status: New → In Progress
Revision history for this message
Ihar Hrachyshka (ihar-hrachyshka) wrote :

From Mark

```
Thanks Ales and Ihar,

I applied this change to all branches from main back to 22.09.

```

This will be fixed in the next upstream releases.

Revision history for this message
Brian Haley (brian-haley) wrote :

Closing as it's fixed in the OVN tree.

Changed in neutron:
status: In Progress → Fix Committed
Changed in neutron:
status: Fix Committed → Fix Released