Verification on Jammy (nproc: 24)

Before using -proposed:

1. $ uname -r: 5.15.0-118-generic

2. $ sudo dmesg | grep bnx2x
[ 2.656536] bnx2x 0000:04:00.0: msix capability found
[ 2.669166] bnx2x 0000:04:00.0: part number 0-0-0-0
[ 3.133782] bnx2x 0000:04:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 3.201230] bnx2x 0000:04:00.1: msix capability found
[ 3.201815] bnx2x 0000:04:00.1: part number 0-0-0-0
[ 3.402127] bnx2x 0000:04:00.1: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 3.407664] bnx2x 0000:04:00.0 eno49: renamed from eth0
[ 3.492325] bnx2x 0000:04:00.1 eno50: renamed from eth1
[ 56.145698] bnx2x 0000:04:00.1 eno50: using MSI-X IRQs: sp 78 fp[0] 80 ... fp[7] 87
[ 57.381769] bnx2x 0000:04:00.0 eno49: using MSI-X IRQs: sp 68 fp[0] 70 ... fp[7] 77
[ 64.772106] bnx2x 0000:04:00.0 eno49: NIC Link is Up, 10000 Mbps full duplex, Flow control: none
[ 65.732116] bnx2x 0000:04:00.1 eno50: NIC Link is Up, 10000 Mbps full duplex, Flow control: none

3. We can see that there are two interfaces on this machine using the bnx2x driver. As noted in the description, increasing the num_queues module parameter will not produce any UBSAN warnings, since 5.15 has UBSAN disabled (a scripted version of this reload-and-check sequence is sketched after this section):

$ sudo modprobe -r bnx2x
$ sudo modprobe bnx2x num_queues=20
$ sudo dmesg | grep bnx2x
......
[ 621.562054] bnx2x 0000:04:00.0: msix capability found
[ 621.581949] bnx2x 0000:04:00.0: part number 0-0-0-0
[ 621.754136] bnx2x 0000:04:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 621.754254] bnx2x 0000:04:00.1: msix capability found
[ 621.758926] bnx2x 0000:04:00.0 eno49: renamed from eth0
[ 621.773993] bnx2x 0000:04:00.1: part number 0-0-0-0
[ 622.513115] bnx2x 0000:04:00.0 eno49: using MSI-X IRQs: sp 68 fp[0] 70 ... fp[19] 119
[ 623.282738] bnx2x 0000:04:00.1: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 623.284540] bnx2x 0000:04:00.1 eno50: renamed from eth0

$ sudo dmesg | grep UBSAN
(no output)

4. We know that the machine is accessing data out of bounds, but the kernel is not reporting it. Let's upgrade to -proposed and see whether the machine remains stable.
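As an aside, the reload-and-check sequence used above can be wrapped in a small helper for repeat runs on both kernels. This is only a sketch of the manual steps performed here, not part of the verification itself; the script name and the settle delay are illustrative:

#!/bin/sh
# reload-bnx2x-check.sh (illustrative name): reload bnx2x with a given
# num_queues value, then scan the kernel log for UBSAN reports.
set -e
QUEUES="${1:-20}"
sudo modprobe -r bnx2x
sudo modprobe bnx2x num_queues="$QUEUES"
sleep 5   # give the interfaces a moment to come back up
if sudo dmesg | grep -q UBSAN; then
    echo "UBSAN report(s) found:"
    sudo dmesg | grep UBSAN
else
    echo "no UBSAN reports in dmesg"
fi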
After upgrading to -proposed:

1. $ uname -r: 5.15.0-120-generic

2. $ sudo dmesg | grep bnx2x
[ 4.303867] bnx2x 0000:04:00.0: msix capability found
[ 4.317050] bnx2x 0000:04:00.0: part number 0-0-0-0
[ 4.883254] bnx2x 0000:04:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 4.951123] bnx2x 0000:04:00.1: msix capability found
[ 4.951779] bnx2x 0000:04:00.1: part number 0-0-0-0
[ 5.200782] bnx2x 0000:04:00.1: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 5.206714] bnx2x 0000:04:00.0 eno49: renamed from eth0
[ 5.293926] bnx2x 0000:04:00.1 eno50: renamed from eth1
[ 19.194753] bnx2x 0000:04:00.1 eno50: using MSI-X IRQs: sp 78 fp[0] 80 ... fp[7] 87
[ 20.430462] bnx2x 0000:04:00.0 eno49: using MSI-X IRQs: sp 68 fp[0] 70 ... fp[7] 77
[ 27.457468] bnx2x 0000:04:00.1 eno50: NIC Link is Up, 10000 Mbps full duplex, Flow control: none
[ 27.637478] bnx2x 0000:04:00.0 eno49: NIC Link is Up, 10000 Mbps full duplex, Flow control: none

3. $ sudo modprobe -r bnx2x

4. $ sudo modprobe bnx2x num_queues=20
......
[ 414.537575] bnx2x 0000:04:00.0: msix capability found
[ 414.556346] bnx2x 0000:04:00.0: part number 0-0-0-0
[ 414.722808] bnx2x 0000:04:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 414.722901] bnx2x 0000:04:00.1: msix capability found
[ 414.725755] bnx2x 0000:04:00.0 eno49: renamed from eth0
[ 414.740419] bnx2x 0000:04:00.1: part number 0-0-0-0
[ 415.452007] bnx2x 0000:04:00.0 eno49: using MSI-X IRQs: sp 68 fp[0] 70 ... fp[19] 119
[ 416.220670] bnx2x 0000:04:00.1: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 416.223125] bnx2x 0000:04:00.1 eno50: renamed from eth0
[ 422.644115] bnx2x 0000:04:00.0 eno49: NIC Link is Up, 10000 Mbps full duplex, Flow control: none

5. We can see that the network driver still loads correctly even after the patch is applied.

6. Running iperf3 -s on the server, with my machine as the client, we can see that the driver is also working as expected (the low upload speed in this test is unrelated to the driver; it is due to limitations in my connection to the server):

[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   268 KBytes  2.19 Mbits/sec
[  5]   1.00-2.00   sec   589 KBytes  4.83 Mbits/sec
[  5]   2.00-3.00   sec   849 KBytes  6.95 Mbits/sec
[  5]   3.00-4.00   sec   490 KBytes  4.01 Mbits/sec
[  5]   4.00-5.00   sec   609 KBytes  4.99 Mbits/sec
[  5]   5.00-6.00   sec   648 KBytes  5.31 Mbits/sec
[  5]   6.00-7.00   sec   648 KBytes  5.31 Mbits/sec
[  5]   7.00-8.00   sec   692 KBytes  5.67 Mbits/sec
[  5]   8.00-9.00   sec   631 KBytes  5.17 Mbits/sec
[  5]   9.00-10.00  sec   653 KBytes  5.35 Mbits/sec
[  5]  10.00-10.13  sec   105 KBytes  6.78 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.13  sec  6.04 MBytes  5.00 Mbits/sec    receiver
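For reference, the client side of the iperf3 run above was a plain TCP test along these lines (the server address is a placeholder; -t 10 matches the ~10-second run shown):

$ iperf3 -c <server-address> -t 10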