last fragment of the pkt is not fwded if intf mirror is configured

Bug #1718844 reported by Senthilnathan Murugappan
Affects            Status     Importance  Assigned to            Milestone
Juniper Openstack  Won't Fix  High        Divakar Dharanalakota
R4.0               Won't Fix  High        Divakar Dharanalakota
Trunk              Won't Fix  High        Divakar Dharanalakota

Bug Description

Last fragment of the pkt is not fwded if intf mirror is configured

When the VM sends fragmented packets, the last fragment is not sent out by the vrouter. This happens only when interface mirroring is configured.

Intf mirror config:
virtual_machine_interface_properties: {
  local_preference: null
  interface_mirror: {
    traffic_direction: ingress
    mirror_to: {
      analyzer_name: mirror-vn2-vm1
      nh_mode: dynamic
      juniper_header: true
      udp_port: 8099
      static_nh_header: null
      analyzer_ip_address: 112.12.2.3
      analyzer_mac_address: 02:e1:70:67:73:a1
      nic_assisted_mirroring: false
    }
  }
}

Sent a 5500-byte packet from a VM with the default (1500) MTU.

tcpdump on tap interface:
root@5b8s35:~# tcpdump -i tapff434d27-8e -n ip and greater 500 -v
tcpdump: WARNING: tapff434d27-8e: no IPv4 address assigned
tcpdump: listening on tapff434d27-8e, link-type EN10MB (Ethernet), capture size 65535 bytes
21:58:28.557091 IP (tos 0x0, ttl 64, id 53299, offset 0, flags [+], proto ICMP (1), length 1500)
    112.12.1.3 > 112.12.1.4: ICMP echo request, id 28080, seq 1, length 1480
21:58:28.557145 IP (tos 0x0, ttl 64, id 53299, offset 1480, flags [+], proto ICMP (1), length 1500)
    112.12.1.3 > 112.12.1.4: ip-proto-1
21:58:28.557158 IP (tos 0x0, ttl 64, id 53299, offset 2960, flags [+], proto ICMP (1), length 1500)
    112.12.1.3 > 112.12.1.4: ip-proto-1
21:58:28.557170 IP (tos 0x0, ttl 64, id 53299, offset 4440, flags [none], proto ICMP (1), length 1088)
    112.12.1.3 > 112.12.1.4: ip-proto-1
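The offsets and lengths in the capture above follow directly from IPv4 fragmentation arithmetic: the 5500-byte ping payload plus the 8-byte ICMP header is split into 1480-byte chunks, the largest multiple of 8 that fits a 1500-byte MTU after the 20-byte IP header. A minimal sketch (the `fragments` helper is ours, purely illustrative):

```python
# Hypothetical helper: compute (fragment offset, IP total length) pairs for
# an ICMP echo of a given data size, assuming a 20-byte IP header and no options.
def fragments(icmp_data_len, mtu=1500, ip_hdr=20, icmp_hdr=8):
    payload = icmp_data_len + icmp_hdr       # IP payload to be fragmented
    per_frag = (mtu - ip_hdr) // 8 * 8       # offsets must be multiples of 8
    frags, off = [], 0
    while off < payload:
        chunk = min(per_frag, payload - off)
        frags.append((off, chunk + ip_hdr))  # (offset, IP total length)
        off += chunk
    return frags

print(fragments(5500))
# [(0, 1500), (1480, 1500), (2960, 1500), (4440, 1088)]
```

This reproduces the four fragments seen on the tap interface, including the final 1088-byte fragment at offset 4440 that the vrouter drops.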

tcpdump on physical interface:
root@5b8s35:~# tcpdump -i p514p1 -n host 172.17.90.7 and udp -v

22:02:23.315496 IP (tos 0x0, ttl 64, id 53300, offset 0, flags [none], proto UDP (17), length 1486)
    172.17.90.6.50257 > 172.17.90.7.4789: VXLAN, flags [I] (0x08), vni 6174
IP (tos 0x0, ttl 64, id 53300, offset 0, flags [+], proto ICMP (1), length 1436)
    112.12.1.3 > 112.12.1.4: ICMP echo request, id 28551, seq 1, length 1416
22:02:23.315503 IP (tos 0x0, ttl 64, id 53300, offset 0, flags [none], proto UDP (17), length 134)
    172.17.90.6.50257 > 172.17.90.7.4789: VXLAN, flags [I] (0x08), vni 6174
IP (tos 0x0, ttl 64, id 53300, offset 1416, flags [+], proto ICMP (1), length 84)
    112.12.1.3 > 112.12.1.4: ip-proto-1
22:02:23.315511 IP (tos 0x0, ttl 64, id 53300, offset 0, flags [none], proto UDP (17), length 1486)
    172.17.90.6.50257 > 172.17.90.7.4789: VXLAN, flags [I] (0x08), vni 6174
IP (tos 0x0, ttl 64, id 53300, offset 1480, flags [+], proto ICMP (1), length 1436)
    112.12.1.3 > 112.12.1.4: ip-proto-1
22:02:23.315517 IP (tos 0x0, ttl 64, id 53300, offset 0, flags [none], proto UDP (17), length 134)
    172.17.90.6.50257 > 172.17.90.7.4789: VXLAN, flags [I] (0x08), vni 6174
IP (tos 0x0, ttl 64, id 53300, offset 2896, flags [+], proto ICMP (1), length 84)
    112.12.1.3 > 112.12.1.4: ip-proto-1
22:02:23.315522 IP (tos 0x0, ttl 64, id 53300, offset 0, flags [none], proto UDP (17), length 1486)
    172.17.90.6.50257 > 172.17.90.7.4789: VXLAN, flags [I] (0x08), vni 6174
IP (tos 0x0, ttl 64, id 53300, offset 2960, flags [+], proto ICMP (1), length 1436)
    112.12.1.3 > 112.12.1.4: ip-proto-1
22:02:23.315524 IP (tos 0x0, ttl 64, id 53300, offset 0, flags [none], proto UDP (17), length 134)
    172.17.90.6.50257 > 172.17.90.7.4789: VXLAN, flags [I] (0x08), vni 6174
IP (tos 0x0, ttl 64, id 53300, offset 4376, flags [+], proto ICMP (1), length 84)
    112.12.1.3 > 112.12.1.4: ip-proto-1

Tags: vrouter
Jeba Paulaiyan (jebap) wrote:

Looks like the issue is not a recent breakage. It was exposed now in the virtual overlay testbeds we have in Sanity. VM-to-VM traffic is affected when both mirroring and fragmentation are in play. As customers did not hit this in previous releases, I am moving it to be fixed in 4.0.2.0.

Divakar Dharanalakota (ddivakar) wrote:

This happens because all fragments of the packet reach the vrouter before flow processing completes. When the head fragment is received, we hold the packet in the hold queue (holdq) and trap it to the Agent for a flow action. If the remaining fragments from the VM arrive before the Agent finishes processing the flow action, we continue to hold them in the holdq. Since the holdq holds only 3 packets, fragments after the 3rd are dropped.
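The drop mechanism described above can be modelled with a small sketch (the constants and names here are ours; the real vrouter code is C and differs in detail): a flow in HOLD state queues at most 3 packets, so a 4-fragment packet loses its last fragment whenever every fragment arrives before the Agent responds.

```python
HOLDQ_DEPTH = 3  # per the comment above: the holdq keeps only 3 packets

def process_fragments(n_fragments, arrive_before_agent):
    """Toy model: fragments indexed 0..n-1 arrive in order; the first
    `arrive_before_agent` of them arrive while the flow is still in
    HOLD state (the Agent has not yet installed the Forward action)."""
    holdq, forwarded, dropped = [], [], []
    for i in range(n_fragments):
        if i < arrive_before_agent:        # flow still held
            if len(holdq) < HOLDQ_DEPTH:
                holdq.append(i)            # queued awaiting the flow action
            else:
                dropped.append(i)          # holdq full: fragment lost
        else:                              # Forward action now installed
            forwarded += holdq; holdq = []
            forwarded.append(i)
    forwarded += holdq                     # flushed once the Agent replies
    return forwarded, dropped

# All 4 fragments of the 5500-byte ping arrive before the Agent responds:
print(process_fragments(4, arrive_before_agent=4))
# ([0, 1, 2], [3])  -> the last fragment is dropped
```

If the Agent had answered before the 4th fragment arrived, nothing would be dropped, which is why the race only bites when the VM emits pre-fragmented packets back to back.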

The priority of this bug can be reduced for the following reasons:

1) For TCP packets it does not happen: TCP is connection oriented, and we never receive more than one packet before marking the flow as Forward.
2) For UDP packets it also does not happen, as we enable UFO (UDP fragmentation offload), which moves fragmentation from the VM to the vrouter.
3) For IP packets with the DF bit set it also does not happen, as the vrouter sends an ICMP error to the VM to reduce the packet size.

It happens only when the VM itself fragments the packets, which is more of a negative test case and not a likely scenario.

-Divakar

Changed in juniperopenstack:
status: New → Won't Fix