tx checksumming offload results in TCP/UDP packet drops (was Octavia amphora loadbalancer gets stuck at PENDING_CREATE status)

Bug #1983468 reported by Marcelo Subtil Marcal
This bug affects 2 people
Affects                    Status     Importance  Assigned to  Milestone
OpenStack Octavia Charm    Invalid    Undecided   Unassigned
linux (Ubuntu)             Confirmed  Undecided   Unassigned

Bug Description

In a new focal-yoga deployment, the creation of a loadbalancer gets stuck at PENDING_CREATE status.

Checking the amphora, we could see that it stays in BOOTING status:

$ openstack loadbalancer amphora show ef48089d-ba40-46db-92e8-e369f764f017 --format yaml
id: ef48089d-ba40-46db-92e8-e369f764f017
loadbalancer_id: dcd17d9e-6a27-43c5-9c3f-eb2b2655556d
compute_id: 5efe11a8-93d8-4278-94c2-4efc8b015009
lb_network_ip: fc00:b81a:629a:59a6:f816:3eff:fe0a:68fa
vrrp_ip: null
ha_ip: null
vrrp_port_id: null
ha_port_id: null
cert_expiration: '2022-09-01T20:27:05'
cert_busy: false
role: null
status: BOOTING
vrrp_interface: null
vrrp_id: null
vrrp_priority: null
cached_zone: nova
created_at: '2022-08-02T20:27:05'
updated_at: '2022-08-02T20:30:13'
image_id: 6c6cd911-197f-45d3-a6d5-4ff1789d4ee7
compute_flavor: 638fa4c5-e81b-438f-a12b-1ef7faf81c3e

/var/log/octavia/octavia-worker.log shows several warnings about connection failures to the amphora:

2022-08-02 20:30:22.589 149659 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='fc00:b81a:629a:59a6:f816:3eff:fe0a:68fa', port=9443): Max retries exceeded with url: // (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f87a655fc70>, 'Connection to fc00:b81a:629a:59a6:f816:3eff:fe0a:68fa timed out. (connect timeout=10.0)'))

It is possible to ping the amphora from an octavia unit:

# ping -M do -s 1452 fc00:b81a:629a:59a6:f816:3eff:fe0a:68fa
PING fc00:b81a:629a:59a6:f816:3eff:fe0a:68fa(fc00:b81a:629a:59a6:f816:3eff:fe0a:68fa) 1452 data bytes
1460 bytes from fc00:b81a:629a:59a6:f816:3eff:fe0a:68fa: icmp_seq=1 ttl=64 time=2.45 ms
1460 bytes from fc00:b81a:629a:59a6:f816:3eff:fe0a:68fa: icmp_seq=2 ttl=64 time=1.01 ms
1460 bytes from fc00:b81a:629a:59a6:f816:3eff:fe0a:68fa: icmp_seq=3 ttl=64 time=0.532 ms
1460 bytes from fc00:b81a:629a:59a6:f816:3eff:fe0a:68fa: icmp_seq=4 ttl=64 time=0.417 ms

Also, port tcp/22 is reachable from the octavia unit:

# telnet fc00:b81a:629a:59a6:f816:3eff:fe0a:68fa 22
Trying fc00:b81a:629a:59a6:f816:3eff:fe0a:68fa...
Connected to fc00:b81a:629a:59a6:f816:3eff:fe0a:68fa.
Escape character is '^]'.
SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.5

After running the config-changed hook as described in bug https://bugs.launchpad.net/charm-octavia/+bug/1961088, the creation of a loadbalancer ends with an ERROR provisioning_status.
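
For reference, the failing path can be probed directly from an octavia unit by testing the amphora agent port (tcp/9443) that the worker log complains about; a minimal sketch, assuming the OpenBSD netcat shipped with Ubuntu is available on the unit:

```
# Amphora lb-mgmt address taken from the worker log above
AMP=fc00:b81a:629a:59a6:f816:3eff:fe0a:68fa

# ICMP and tcp/22 succeed (as shown above), but a TCP connect to the
# amphora agent port is expected to time out on an affected deployment.
nc -6 -vz -w 10 "$AMP" 9443
```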

Revision history for this message
Marcelo Subtil Marcal (msmarcal) wrote : Re: Octavia amphora loadbalancer gets stuck at PENDING_CREATE status

subscribed ~field-critical

Revision history for this message
Marcelo Subtil Marcal (msmarcal) wrote :

It seems that the issue is different from LP #1914179.

This deployment is using OVN instead of OVS and already has two IPs on the o-hm0 interface.

Revision history for this message
Przemyslaw Hausman (phausman) wrote :

I captured the traffic on the compute node hosting the amphora instance in two places:

1. On the tap interface of the amphora instance (as seen in virsh dumpxml). Note that only a single tap interface is found in `virsh dumpxml`, whereas on a healthy system (in a different deployment) I can see two tap interfaces in the amphora instance.

2. On the genev_sys_6081 interface of the compute node.

When trying to ssh from the octavia/0 unit to the amphora instance, I noticed that it takes dozens of seconds for some packets to travel from genev_sys_6081 to the tap interface (via br-int). E.g. the packet carrying the SSH client version string takes ~26 seconds to cross br-int, which is far too long. See the capture session below.

```
# Packet capture on genev_sys_6081

ubuntu@sdtpdc41s100020:~$ sudo tshark -i genev_sys_6081 -ta -Y 'ssh.protocol == "SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.5" and ssh.direction == 0'
Running as user "root" and group "root". This could be dangerous.
Capturing on 'genev_sys_6081'
    27 13:36:39.826423031 fc00:4b6d:59b1:234e:f816:3eff:fec7:72dd → fc00:4b6d:59b1:234e:f816:3eff:fe45:bb84 SSH 127 Client: Protocol (SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.5)

# Packet capture on tap8a2619aa-70

ubuntu@sdtpdc41s100020:~$ sudo tshark -i tap8a2619aa-70 -ta -Y 'ssh.protocol == "SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.5" and ssh.direction == 0'
Running as user "root" and group "root". This could be dangerous.
Capturing on 'tap8a2619aa-70'
   27 13:37:06.477373775 fc00:4b6d:59b1:234e:f816:3eff:fec7:72dd → fc00:4b6d:59b1:234e:f816:3eff:fe45:bb84 SSH 127 Client: Protocol (SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.5)
```

There are a lot of retransmissions visible in the network capture file too. Eventually, the SSH server resets the connection -- I believe because of missing or heavily delayed packets.

I pushed this particular packet through ovn-detrace and noticed the "drop" result; see below.

```
ubuntu@juju-901db8-4-lxd-18:~$ sudo ovn-detrace --ovnsb=$SBDB --ovnnb=$NBDB -c /etc/ovn/cert_host -C /etc/ovn/ovn-central.crt -p /etc/ovn/key_host < trace_output
Flow: tcp6,in_port=12,vlan_tci=0x0000,dl_src=fa:16:3e:c7:72:dd,dl_dst=fa:16:3e:45:bb:84,ipv6_src=fc00:4b6d:59b1:234e:f816:3eff:fec7:72dd,ipv6_dst=fc00:4b6d:59b1:234e:f816:3eff:fe45:bb84,ipv6_label=0x80888,nw_tos=0,nw_ecn=0,nw_ttl=64,tp_src=34162,tp_dst=22,tcp_flags=psh|ack

bridge("br-int")
----------------
0. in_port=12, priority 100, cookie 0x7a2f4a4a
set_field:0x3->reg13
set_field:0x4->reg11
set_field:0x5->reg12
set_field:0x1->metadata
set_field:0x8->reg14
resubmit(,8)
  * Logical datapath: "neutron-d2567074-3038-464b-95dd-4b6d59b1234e" (fbc27e98-b47c-455f-937e-3bc7cf5cd70a)
  * Port Binding: logical_port "8a2619aa-70e5-49ba-b33e-83421fd91a97", tunnel_key 8, chassis-name "sdtpdc41s100020.ndc-us.corpintra.net", chassis-str "sdtpdc41s100020.ndc-us.corpintra.net"
8. No match.
drop
```
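
For reference, the `trace_output` fed to ovn-detrace above is typically produced with `ovs-appctl ofproto/trace` on the compute node, using the flow seen in the packet capture; a hedged sketch reconstructing that step from the trace shown above:

```
# On the compute node hosting the amphora: simulate the problematic SSH packet
# through br-int (flow fields taken from the trace output above).
sudo ovs-appctl ofproto/trace br-int \
  'tcp6,in_port=12,dl_src=fa:16:3e:c7:72:dd,dl_dst=fa:16:3e:45:bb:84,ipv6_src=fc00:4b6d:59b1:234e:f816:3eff:fec7:72dd,ipv6_dst=fc00:4b6d:59b1:234e:f816:3eff:fe45:bb84,tp_src=34162,tp_dst=22,tcp_flags=+psh+ack' \
  > trace_output

# Annotate the OpenFlow trace with the OVN logical flows (same invocation as above)
sudo ovn-detrace --ovnsb=$SBDB --ovnnb=$NBDB \
  -c /etc/ovn/cert_host -C /etc/ovn/ovn-central.crt -p /etc/ovn/key_host < trace_output
```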

I also noticed that all the octavia-health-manager ports are DOWN:

```
ubuntu@sdtpdc41s100001:~/deploy$ openstack port list -c Name -c fixed_ips -c Status | grep octavia-health
| octavia-health-manager-octavia-1-listen-port | ip_address='fc00:4b6d:59b1:234e:f816:3eff:fe54:8ec4', subnet_id='c3adb2e7-d869-47c9-bfb6-850...
```

(remainder of the output and comment truncated)

Revision history for this message
Przemyslaw Hausman (phausman) wrote :

Also please find attached pcap files from another capture session. I tried to ssh from octavia to amphora and captured the traffic in 4 different places:

1. On the veth interface of the LXC container of octavia unit (octavia-veth.pcap)
2. On the VLAN interface of the Control Node hosting the LXC with octavia unit (octavia-vlan.pcap)
3. On the VLAN interface of the Compute Node hosting amphora instance (amphora-vlan.pcap)
4. On the tap interface of the amphora instance, as seen in `virsh dumpxml` (amphora-tap.pcap)

Revision history for this message
Nobuto Murata (nobuto) wrote :

JFYI:

I don't think it's tightly related, but another case where we saw pending_create -> error, in a separate environment, was because of:

2022-08-05 11:21:02.817 959382 ERROR oslo_messaging.rpc.server taskflow.exceptions.WrappedFailure: WrappedFailure: [Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Bad decrypt. Incorrect password?, Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Bad decrypt. Incorrect password?]

This means there was a mismatch between the Amphora CA and the passphrase of the CA private key.

Revision history for this message
Przemyslaw Hausman (phausman) wrote :

Here is the exported juju bundle and juju-crashdump output: https://docs.google.com/document/d/1Wblm4TPUBezMJ08-vzioinQ4zS6QZP282_Ffaz2nuAo/

Revision history for this message
Alex Kavanagh (ajkavanagh) wrote :

@phausman: as the bug is public, the bundle and crashdump in the linked document aren't going to be visible to outside contributors. Unless they are sensitive, could they please be added directly to the bug?

Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

Synced with Przemyslaw on the issue.

The deployment is disaggregated, so amphorae reside on nodes different from those hosting the Octavia units themselves.

The packet captures seem to indicate delayed packets, so the plan is to run additional tests from instances collocated with amphorae or running on different compute nodes, to see if the underlying network could be the problem.

Prior tests were done to assess the underlying network, but they did not immediately point to an issue that could be causing this.

Przemyslaw also could not reproduce it in his lab environment. I am attempting a reproducer as well while waiting for more test results from the actual environment.

Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

Ran some additional tests with Przemyslaw; the following can be observed:

1) ssh from octavia to a VM on lb-mgmt-net: the issue IS present
2) ssh between 2 VMs attached to lb-mgmt-net, both located on the same compute node: no issue
3) ssh from a VM to an amphora instance, both attached to lb-mgmt-net and both located on the same compute node: no issue
4) ssh from a VM to an amphora instance, both attached to lb-mgmt-net, each VM hosted on different compute nodes: no issue
5) ssh between different octavia units over the tunnel network (via o-hm0 ports): the issue is present
6) iperf3 (TCP or UDP) between different octavia units over the tunnel network (via o-hm0 ports): the issue is present (a reproduction sketch follows this list).
  TCP SYN/SYN-ACK/ACK seems to work but the bandwidth is reported as almost zero.
7) iperf3 between different octavia units over the "overlay" space network: the issue is not present
8) an ICMP test with 100k packet flood (packet size close to standard MTU): the issue is not present (no packets lost).
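
Test 6 can be reproduced roughly as sketched below; the o-hm0 address used here is a placeholder, not a value from this report:

```
# On octavia/0 (server side), listening on its o-hm0 address
iperf3 -s

# On octavia/1 (client side), against octavia/0's o-hm0 address (placeholder)
iperf3 -c fc00:b81a:629a:59a6::10             # TCP: connects, but reported bandwidth is near zero
iperf3 -c fc00:b81a:629a:59a6::10 -u -b 100M  # UDP: heavy loss on an affected host
```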

So far it seems to be happening on the tunnel network only and only when control nodes are in the mix.

We need to narrow this down further, probably with more OVS/OVN level tracing and packet flow simulations.

The original issue looks like the result of impaired transport connectivity, since UDP health checks are affected too.

Revision history for this message
Przemyslaw Hausman (phausman) wrote :

It turns out that the issue goes away as soon as I disable TCP segmentation offloading on the NICs on the Control and Compute nodes with `sudo ethtool -K $nic tx off sg off tso off`. I'll run some more tests and see if `tx` and `sg` can be left `on`.

Revision history for this message
Przemyslaw Hausman (phausman) wrote :

The problem is present with the following settings:
- tx on sg on tso off
- tx on sg off tso off
- tx on sg on tso on

Loadbalancers are successfully created with the following settings:
- tx off sg on tso off
- tx off sg off tso off

Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

So it looks like the primary reason is the presence of TX checksum offload (the "tx" feature). Both scatter-gather (the "sg" feature) and TX checksumming ("tx") are required for the TSO feature to work:

https://github.com/torvalds/linux/blob/219d54332a09e8d8741c1e1982f5eae56099de85/net/core/dev.c#L8620-L8637

When it comes to checksumming, it applies to L4 packets and there are multiple cases:

https://github.com/torvalds/linux/blob/v5.4/include/linux/netdev_features.h#L16

 NETIF_F_IP_CSUM_BIT, /* Can checksum TCP/UDP over IPv4. */
 NETIF_F_HW_CSUM_BIT, /* Can checksum all the packets. */
 NETIF_F_IPV6_CSUM_BIT, /* Can checksum TCP/UDP over IPV6 */

NETIF_F_HW_CSUM means that a device can checksum all L4 packets in hardware; it cannot be enabled together with NETIF_F_IP_CSUM or NETIF_F_IPV6_CSUM.

It doesn't look like ICMP packets are ever checksummed in hardware, so it makes sense that our ICMP flood tests were passing while TCP & UDP traffic generated by iperf3 was affected by the issue.
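
For what it's worth, the per-protocol bits above correspond to the ethtool feature names tx-checksum-ipv4 (NETIF_F_IP_CSUM), tx-checksum-ipv6 (NETIF_F_IPV6_CSUM) and tx-checksum-ip-generic (NETIF_F_HW_CSUM), so which variant a given NIC advertises, and whether it can be toggled at all, can be checked along these lines (interface name taken from the lshw output later in this bug):

```
# Features marked "[fixed]" cannot be changed by the driver
sudo ethtool -k ens1f0np0 | egrep 'tx-checksumming|tx-checksum-ipv4|tx-checksum-ip-generic|tx-checksum-ipv6'
```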

The NICs in question are Broadcom NetXtreme:
       product: BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller

Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

There is a third-party reference to the issue with checksumming; it does not pinpoint a particular limitation in either the HW or OVS, but recommends disabling checksumming (which also means TSO won't work, as checksumming is a prerequisite):

https://supportportal.juniper.net/s/article/NFX-SSH-traffic-to-Ubuntu-VNF-Fails-Possibly-Due-to-TCP-Checksum-Error?language=en_US

OVS lists some limitations for the userspace datapath and TSO on tunnels, which we are not relying on in this case:

https://docs.openvswitch.org/en/latest/topics/userspace-tso/#limitations

It looks like further exploration is needed to understand where the issue with TX checksumming lies, i.e. whether it's the OVS system datapath or a particular device/driver that's causing this problem.

summary: - Octavia amphora loadbalancer gets stuck at PENDING_CREATE status
+ tx checksumming offload results in TCP/UDP packet drops (was Octavia
+ amphora loadbalancer gets stuck at PENDING_CREATE status)
Changed in charm-octavia:
status: New → Invalid
Revision history for this message
Ubuntu Kernel Bot (ubuntu-kernel-bot) wrote : Missing required logs.

This bug is missing log files that will aid in diagnosing the problem. While running an Ubuntu kernel (not a mainline or third-party kernel) please enter the following command in a terminal window:

apport-collect 1983468

and then change the status of the bug to 'Confirmed'.

If, due to the nature of the issue you have encountered, you are unable to run this command, please add a comment stating that fact and change the bug status to 'Confirmed'.

This change has been made by an automated script, maintained by the Ubuntu Kernel Team.

Changed in linux (Ubuntu):
status: New → Incomplete
Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

Marked this bug as invalid for charm-octavia and added linux (Ubuntu), since this is either a driver problem with bnxt and the BCM57414 or a firmware issue.

The card in question: https://buy.hpe.com/us/en/options/adapters/host-adapters/proliant-host-adapters/hpe-ethernet-10-25gb-2-port-sfp28-bcm57414-adapter/p/817718-b21

This NIC is handled by the bnxt driver.
https://github.com/torvalds/linux/blob/3d7cb6b04c3f3115719235cc6866b10326de34cd/drivers/net/ethernet/broadcom/bnxt/bnxt.c#L105

The NIC data sheet advertises "TCP, UDP, and IP checksum offloads" and "Tunnel-aware stateless offloads":
https://docs.broadcom.com/doc/957414A4142CC-DS

Revision history for this message
Przemyslaw Hausman (phausman) wrote :

The environment has customer data, so collecting with apport-collect is unfortunately not possible.

Kernel version on the affected machines: 5.4.0-124-generic

Please see `lspci` and `lshw` output below.

```
sudo lshw -class network | egrep "(ens1f0np0|ens4f0np0)" -A 8 -B 6
  *-network:0
       description: Ethernet interface
       product: BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller
       vendor: Broadcom Inc. and subsidiaries
       physical id: 0
       bus info: pci@0000:12:00.0
       logical name: ens1f0np0
       version: 01
       serial: f4:03:43:ee:89:90
       capacity: 25Gbit/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm vpd msi msix pciexpress bus_master cap_list rom ethernet physical fibre 1000bt-fd 25000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=bnxt_en driverversion=1.10.0 duplex=full firmware=218.0.152.0/pkg 218.0.166000 latency=0 link=yes multicast=yes port=fibre slave=yes
       resources: irq:26 memory:de610000-de61ffff memory:de500000-de5fffff memory:de622000-de623fff memory:de800000-de83ffff
--
  *-network:0
       description: Ethernet interface
       product: BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller
       vendor: Broadcom Inc. and subsidiaries
       physical id: 0
       bus info: pci@0000:af:00.0
       logical name: ens4f0np0
       version: 01
       serial: f4:03:43:ee:89:90
       capacity: 25Gbit/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm vpd msi msix pciexpress bus_master cap_list rom ethernet physical fibre 1000bt-fd 25000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=bnxt_en driverversion=1.10.0 duplex=full firmware=218.0.152.0/pkg 218.0.166000 latency=0 link=yes multicast=yes port=fibre slave=yes
       resources: irq:42 memory:f3a10000-f3a1ffff memory:f3900000-f39fffff memory:f3a22000-f3a23fff memory:f3c00000-f3c3ffff

sudo lspci | grep 57414
12:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller (rev 01)
12:00.1 Ethernet controller: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller (rev 01)
af:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller (rev 01)
af:00.1 Ethernet controller: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller (rev 01)
```

Changed in linux (Ubuntu):
status: Incomplete → Confirmed
Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

As far as the practice of using stateless offloads for tunneled traffic goes:

I can see that there were tests performed on Intel HW in upstream OpenStack with TX & TSO features enabled (albeit with VXLAN tunnels):
https://docs.openstack.org/developer/performance-docs/test_plans/hardware_features/hardware_offloads/plan.html
https://docs.openstack.org/developer/performance-docs/test_results/hardware_features/hardware_offloads/test_results.html#hw-features-offloads

So, in general, I don't see any documented restrictions on the kernel OVS data path in terms of having stateless offloads enabled.

There was an old bug that had similar symptoms: https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1629053

But it couldn't be reproduced eventually:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1629053/comments/3
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1629053/comments/11

This paper also discusses using checksumming offload in the context of the OVS datapath:
https://conferences.sigcomm.org/sigcomm/2021/files/papers/3452296.3472914.pdf

Revision history for this message
Marcelo Subtil Marcal (msmarcal) wrote :

Update:

Using the Focal HWE kernel (5.15.0-46-generic #49~20.04.1-Ubuntu SMP), this error does not happen.
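
For anyone wanting to try the same on Focal machines still running the 5.4 GA kernel, switching to the HWE kernel stack amounts to roughly the sketch below (standard Ubuntu HWE metapackage; adjust to your deployment tooling):

```
sudo apt update
sudo apt install linux-generic-hwe-20.04
sudo reboot
# after reboot
uname -r   # should report a 5.15.x HWE kernel, e.g. 5.15.0-46-generic
```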

Revision history for this message
Peter Jose De Sousa (pjds) wrote :

Firmware version 218.0.152.0 is mentioned in this HPE advisory regarding packet drops: https://support.hpe.com/hpesc/public/docDisplay?docId=a00120518en_us&docLocale=en_US
