Comment 3 for bug 2074215

Ghadi Rahme (ghadi-rahme) wrote:

verification for focal:
$ nproc: 24

Current machine settings before using -proposed:

1. $ uname -r: 5.4.0-192-generic

2. $ sudo dmesg | grep bnx2x:

[ 5.636662] bnx2x: QLogic 5771x/578xx 10/20-Gigabit Ethernet Driver bnx2x 1.713.36-0 (2014/02/10)
[ 5.735751] bnx2x 0000:04:00.0: msix capability found
[ 5.750740] bnx2x 0000:04:00.0: part number 0-0-0-0
[ 6.135089] bnx2x 0000:04:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 6.160661] bnx2x 0000:04:00.1: msix capability found
[ 6.199296] bnx2x 0000:04:00.1: part number 0-0-0-0
[ 6.872498] bnx2x 0000:04:00.1: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 7.002297] bnx2x 0000:04:00.1 eno50: renamed from eth0
[ 7.045651] bnx2x 0000:04:00.0 eno49: renamed from eth3

3. We can see that there are two interfaces on this machine using the bnx2x driver. As noted in the bug description, increasing the num_queues module parameter will not result in any UBSAN warnings, since 5.4 has UBSAN disabled:
$ sudo modprobe -r bnx2x # remove the driver
$ sudo modprobe bnx2x num_queues=20 # reload the driver while exceeding the 15 queue size limit
$ sudo dmesg | grep bnx2x
...<cut output>...
[ 1916.385403] bnx2x: QLogic 5771x/578xx 10/20-Gigabit Ethernet Driver bnx2x 1.713.36-0 (2014/02/10)
[ 1916.385657] bnx2x 0000:04:00.0: msix capability found
[ 1916.405826] bnx2x 0000:04:00.0: part number 0-0-0-0
[ 1916.571785] bnx2x 0000:04:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 1916.571889] bnx2x 0000:04:00.1: msix capability found
[ 1916.576504] bnx2x 0000:04:00.0 eno49: renamed from eth0
[ 1916.589777] bnx2x 0000:04:00.1: part number 0-0-0-0
[ 1917.291516] bnx2x 0000:04:00.0 eno49: using MSI-X IRQs: sp 67 fp[0] 69 ... fp[19] 118
[ 1918.062391] bnx2x 0000:04:00.1: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 1918.064708] bnx2x 0000:04:00.1 eno50: renamed from eth0
[ 1918.711960] bnx2x 0000:04:00.1 eno50: using MSI-X IRQs: sp 119 fp[0] 121 ... fp[19] 140
[ 1925.737490] bnx2x 0000:04:00.0 eno49: NIC Link is Up, 10000 Mbps full duplex, Flow control: none
[ 1926.017458] bnx2x 0000:04:00.1 eno50: NIC Link is Up, 10000 Mbps full duplex, Flow control: none

$ sudo dmesg | grep UBSAN
<no result>
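
The absence of the warning can also be cross-checked against the running kernel's configuration (UBSAN is expected to be disabled on this 5.4 kernel), and the loaded parameter value can be read back from sysfs if the module exports it read-only:
$ grep UBSAN /boot/config-$(uname -r)
$ cat /sys/module/bnx2x/parameters/num_queues
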
4. We know that the driver is accessing data out of bounds (num_queues=20 exceeds the 15 queue limit), but the kernel is not reporting it. Let's upgrade to -proposed and see if the machine remains stable.
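
For reference, one way to pull in the -proposed kernel on focal (exact package names may differ) is roughly:
$ echo "deb http://archive.ubuntu.com/ubuntu focal-proposed main restricted universe multiverse" | sudo tee /etc/apt/sources.list.d/focal-proposed.list
$ sudo apt update
$ sudo apt install linux-image-5.4.0-195-generic linux-modules-extra-5.4.0-195-generic
$ sudo reboot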

After upgrading to -proposed:

1. $ uname -r: 5.4.0-195-generic

2. $ sudo dmesg | grep bnx2x:

[ 5.506585] bnx2x: QLogic 5771x/578xx 10/20-Gigabit Ethernet Driver bnx2x 1.713.36-0 (2014/02/10)
[ 5.558991] bnx2x 0000:04:00.0: msix capability found
[ 5.574191] bnx2x 0000:04:00.0: part number 0-0-0-0
[ 5.987916] bnx2x 0000:04:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 6.009746] bnx2x 0000:04:00.1: msix capability found
[ 6.056350] bnx2x 0000:04:00.1: part number 0-0-0-0
[ 6.751651] bnx2x 0000:04:00.1: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 6.885873] bnx2x 0000:04:00.0 eno49: renamed from eth2
[ 6.929575] bnx2x 0000:04:00.1 eno50: renamed from eth0
[ 19.510740] bnx2x 0000:04:00.1 eno50: using MSI-X IRQs: sp 77 fp[0] 79 ... fp[7] 86
[ 20.986392] bnx2x 0000:04:00.0 eno49: using MSI-X IRQs: sp 67 fp[0] 69 ... fp[7] 76
[ 28.077341] bnx2x 0000:04:00.0 eno49: NIC Link is Up, 10000 Mbps full duplex, Flow control: none
[ 28.129317] bnx2x 0000:04:00.1 eno50: NIC Link is Up, 10000 Mbps full duplex, Flow control: none

3. $ sudo modprobe -r bnx2x
4. $ sudo modprobe bnx2x num_queues=20
$ sudo dmesg | grep bnx2x

[ 319.495823] bnx2x: QLogic 5771x/578xx 10/20-Gigabit Ethernet Driver bnx2x 1.713.36-0 (2014/02/10)
[ 319.495993] bnx2x 0000:04:00.0: msix capability found
[ 319.512977] bnx2x 0000:04:00.0: part number 0-0-0-0
[ 319.680625] bnx2x 0000:04:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 319.680740] bnx2x 0000:04:00.1: msix capability found
[ 319.685428] bnx2x 0000:04:00.0 eno49: renamed from eth0
[ 319.698778] bnx2x 0000:04:00.1: part number 0-0-0-0
[ 320.402118] bnx2x 0000:04:00.0 eno49: using MSI-X IRQs: sp 67 fp[0] 69 ... fp[19] 118
[ 321.171427] bnx2x 0000:04:00.1: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 321.173902] bnx2x 0000:04:00.1 eno50: renamed from eth0
[ 321.811262] bnx2x 0000:04:00.1 eno50: using MSI-X IRQs: sp 119 fp[0] 121 ... fp[19] 140
[ 327.872503] bnx2x 0000:04:00.0 eno49: NIC Link is Up, 10000 Mbps full duplex, Flow control: none
[ 329.525089] bnx2x 0000:04:00.1 eno50: NIC Link is Up, 10000 Mbps full duplex, Flow control: none

5. We can see that the driver still loads correctly with num_queues=20 and both interfaces come up, even after the patch is applied.
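
The channel count actually configured on each interface can also be inspected with ethtool if additional confirmation is wanted, e.g.:
$ sudo ethtool -l eno49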

6. Running iperf3 -s on this server, with my own machine as the client (the client-side invocation is noted after the output below), we can see that the driver is also working as expected:

[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   537 KBytes  4.40 Mbits/sec
[  5]   1.00-2.00   sec  1.35 MBytes  11.3 Mbits/sec
[  5]   2.00-3.00   sec  1.76 MBytes  14.8 Mbits/sec
[  5]   3.00-4.00   sec  1.75 MBytes  14.7 Mbits/sec
[  5]   4.00-5.00   sec  1.83 MBytes  15.4 Mbits/sec
[  5]   5.00-6.00   sec  1.79 MBytes  15.0 Mbits/sec
[  5]   6.00-7.00   sec  1.86 MBytes  15.6 Mbits/sec
[  5]   7.00-8.00   sec  2.27 MBytes  19.0 Mbits/sec
[  5]   8.00-9.00   sec  2.48 MBytes  20.8 Mbits/sec
[  5]   9.00-10.00  sec  2.60 MBytes  21.8 Mbits/sec
[  5]  10.00-10.18  sec   497 KBytes  22.3 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.18  sec  18.7 MBytes  15.4 Mbits/sec                  receiver
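
For reference, the client side of the iperf3 run above was an ordinary client invocation along these lines (the server address below is a placeholder):
$ iperf3 -c <server-address>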