netplan fails to create NIC VFs at boot

Bug #1999181 reported by Enoch Leung
This bug affects 2 people

Affects: Netplan
Status: New
Importance: Medium
Assigned to: Unassigned

Bug Description

I have a BCM57406 installed in my machine, and netplan fails to create the VF(s) at boot. Once booting has completed, I can create the VFs with "netplan apply", but this means I cannot autostart my VMs that depend on the VFs. The error could be Broadcom-specific, as I have no such problem with an Intel I350-AM2.

Reproducible: always

----- selected dmesg output -----
[ 0.942238] bnxt_en 0000:02:00.0 eth0: Broadcom BCM57406 NetXtreme-E 10GBase-T Ethernet found at mem 2000000000, node addr 00:0a:f7:**:**:**
[ 0.943436] bnxt_en 0000:02:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0000:00:1c.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
[ 0.996415] bnxt_en 0000:02:00.1 eth1: Broadcom BCM57406 NetXtreme-E 10GBase-T Ethernet found at mem 2000020000, node addr 00:0a:f7:**:**:**
[ 0.997730] bnxt_en 0000:02:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0000:00:1c.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
[ 1.789199] bnxt_en 0000:02:00.0 enp2s0f0np0: renamed from eth0
[ 1.807616] bnxt_en 0000:02:00.1 enp2s0f1np1: renamed from eth1
[ 3.811616] bnxt_en 0000:02:00.1 bcm57406_p1: renamed from enp2s0f1np1
[ 3.838499] bnxt_en 0000:02:00.0 bcm57406_p0: renamed from enp2s0f0np0
[ 4.285390] bnxt_en 0000:02:00.0 bcm57406_p0: Reject SRIOV config request since if is down!
[ 4.285394] bnxt_en 0000:02:00.0: 2 VFs requested; only 0 enabled
[ 4.288718] bnxt_en 0000:02:00.0 bcm57406_p0: Reject SRIOV config request since if is down!
[ 4.288723] bnxt_en 0000:02:00.0: 2 VFs requested; only 0 enabled
[ 364.411119] bnxt_en 0000:02:02.0: enabling device (0000 -> 0002)
[ 364.431524] bnxt_en 0000:02:02.0 eth0: Broadcom NetXtreme-E Ethernet Virtual Function found at mem 2000040000, node addr **:**:**:**:**:**
[ 364.431556] bnxt_en 0000:02:02.0: 0.000 Gb/s available PCIe bandwidth (Unknown x8 link)
[ 364.445920] bnxt_en 0000:02:02.0 enp2s0f0v0: renamed from eth0
[ 364.447002] bnxt_en 0000:02:02.1: enabling device (0000 -> 0002)
[ 364.506045] bnxt_en 0000:02:02.1 eth0: Broadcom NetXtreme-E Ethernet Virtual Function found at mem 2000044000, node addr **:**:**:**:**:**
[ 364.506058] bnxt_en 0000:02:02.1: 0.000 Gb/s available PCIe bandwidth (Unknown x8 link)
[ 364.508217] bnxt_en 0000:02:02.1 enp2s0f0v1: renamed from eth0

Enoch Leung (leun0036)
affects: sriov-netplan-shim → netplan
Lukas Märdian (slyon) wrote :

Yes, this is likely HW-related.

"[ 4.285390] bnxt_en 0000:02:00.0 bcm57406_p0: Reject SRIOV config request since if is down!"

Could you please provide some systemd/journalctl logs from booting that system? And maybe check whether masking "netplan-sriov-rebind.service" changes anything in its behavior?
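
A minimal sketch of that check (unit name as shipped by Netplan; untested on this hardware):

$ sudo systemctl mask netplan-sriov-rebind.service    # disable Netplan's VF rebind step
$ sudo reboot
# ...after the reboot:
$ journalctl -b -u systemd-networkd | grep -i bcm57406    # networkd's view
$ dmesg | grep bnxt_en                                    # driver's view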

Changed in netplan:
importance: Undecided → Medium
status: New → Incomplete
Launchpad Janitor (janitor) wrote :

[Expired for netplan because there has been no activity for 60 days.]

Changed in netplan:
status: Incomplete → Expired
Enoch Leung (leun0036) wrote (last edit ):

Sorry, I was busy and couldn't put the Broadcom card back in until now.

The journalctl output seems more useful than dmesg; I have skipped some unrelated entries (ZRAM etc.), replaced the hostname, and deleted the entries for other NICs.
-------------------------------------------------
Apr 27 15:57:36 [hostname] systemd[1]: Stopping Load/Save Random Seed...
...skipping...
Apr 27 15:59:03 [hostname] kernel: bnxt_en 0000:02:00.0 bcm57406_pf0: renamed from eth0
Apr 27 15:59:03 [hostname] systemd-networkd[332]: eth0: Interface name change detected, renamed to bcm57406_pf0.
Apr 27 15:59:03 [hostname] systemd-networkd[332]: wan: Link UP
Apr 27 15:59:03 [hostname] systemd-networkd[332]: br-lan: Link UP
Apr 27 15:59:03 [hostname] kernel: bnxt_en 0000:02:00.1 bcm57406_pf1: renamed from eth1
Apr 27 15:59:03 [hostname] systemd-networkd[332]: eth1: Interface name change detected, renamed to bcm57406_pf1.
Apr 27 15:59:04 [hostname] kernel: bnxt_en 0000:02:00.1 bcm57406_pf1: Reject SRIOV config request since if is down!
Apr 27 15:59:04 [hostname] kernel: bnxt_en 0000:02:00.1: 4 VFs requested; only 0 enabled
Apr 27 15:59:04 [hostname] kernel: bnxt_en 0000:02:00.0 bcm57406_pf0: Reject SRIOV config request since if is down!
Apr 27 15:59:04 [hostname] kernel: bnxt_en 0000:02:00.0: 4 VFs requested; only 0 enabled
Apr 27 15:59:04 [hostname] systemd-fsck[1258]: BOOT: clean, 302/40960 files, 49407/163840 blocks
Apr 27 15:59:04 [hostname] systemd[1]: Mounting /boot...
Apr 27 15:59:04 [hostname] kernel: bnxt_en 0000:02:00.1 bcm57406_pf1: Reject SRIOV config request since if is down!
Apr 27 15:59:04 [hostname] kernel: bnxt_en 0000:02:00.1: 4 VFs requested; only 0 enabled
Apr 27 15:59:04 [hostname] kernel: bnxt_en 0000:02:00.0 bcm57406_pf0: Reject SRIOV config request since if is down!
Apr 27 15:59:04 [hostname] kernel: bnxt_en 0000:02:00.0: 4 VFs requested; only 0 enabled
Apr 27 15:59:04 [hostname] kernel: EXT4-fs (sdf2): mounting ext2 file system using the ext4 subsystem
Apr 27 15:59:04 [hostname] kernel: EXT4-fs (sdf2): mounted filesystem without journal. Opts: (null). Quota mode: none.
Apr 27 15:59:04 [hostname] systemd[1]: Mounted /boot.
Apr 27 15:59:04 [hostname] systemd-networkd[332]: bcm57406_pf0: Link UP
Apr 27 15:59:04 [hostname] systemd-networkd-wait-online[376]: managing: bcm57406_pf0
Apr 27 15:59:04 [hostname] systemd-networkd[332]: bcm57406_pf1: Link UP
Apr 27 15:59:04 [hostname] systemd-networkd-wait-online[376]: managing: bcm57406_pf0
Apr 27 15:59:04 [hostname] systemd-networkd-wait-online[376]: managing: bcm57406_pf0
Apr 27 15:59:04 [hostname] systemd-networkd-wait-online[376]: managing: bcm57406_pf1
Apr 27 15:59:04 [hostname] systemd-networkd-wait-online[376]: managing: bcm57406_pf0
Apr 27 15:59:04 [hostname] systemd-networkd-wait-online[376]: managing: bcm57406_pf1
Apr 27 15:59:04 [hostname] systemd-networkd-wait-online[376]: managing: bcm57406_pf0
Apr 27 15:59:04 [hostname] systemd-networkd-wait-online[376]: managing: bcm57406_pf1
Apr 27 15:59:04 [hostname] systemd-networkd-wait-online[376]: managing: bcm57406_pf0
Apr 27 15:59:04 [hostname] systemd-networkd-wait-online[376]: managing: bcm57406_pf1
Apr 27 15:59:04 [hostname] systemd-networkd-wait-online[376]: managing: bcm57406_pf0
Apr...

Lukas Märdian (slyon)
Changed in netplan:
status: Expired → New
Lukas Märdian (slyon) wrote :

Thank you for providing additional logs. I see networkd-dispatcher and NetworkManager activity in there.

Do you have networkd-dispatcher or NM dispatcher hooks (or other setup/configuration, e.g. in NetworkManager or udev) in place that handle the enablement of the bnxt_en driver? Or are you aware of any hardware-specific enablement sequence we need to provide to bring it up?

I don't have this hardware and don't know any details, but it looks like the driver is not yet ready at early boot when Netplan/networkd try to configure it. Then something brings it fully up (it might just need more time?) and "netplan apply" succeeds...
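
A quick way to check for such hooks (standard hook locations; a sketch, adjust paths if your setup differs):

$ ls -lR /etc/networkd-dispatcher/ 2>/dev/null            # networkd-dispatcher hooks
$ ls -l /etc/NetworkManager/dispatcher.d/ 2>/dev/null     # NM dispatcher hooks
$ grep -r bnxt_en /etc/udev/rules.d/ /run/udev/rules.d/   # custom udev rules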

Enoch Leung (leun0036) wrote (last edit ):

No, all I did was modify /etc/netplan/01-netcfg.yaml and then run "netplan generate" to check that the configuration is fine. I never use hook scripts etc., as I don't need them anyway. From dmesg again:

[ 13.703086] bnxt_en 0000:02:00.0 eth0: Broadcom BCM57406 NetXtreme-E 10GBase-T Ethernet found at mem 2000000000, node addr **:**:**:**:**:**
[ 13.703096] bnxt_en 0000:02:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0000:00:1c.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
[ 14.205923] bnxt_en 0000:02:00.0 bcm57406_pf0: renamed from eth0
[ 14.535896] bnxt_en 0000:02:00.1 bcm57406_pf1: Reject SRIOV config request since if is down!
[ 14.535901] bnxt_en 0000:02:00.1: 4 VFs requested; only 0 enabled
[ 14.535989] bnxt_en 0000:02:00.0 bcm57406_pf0: Reject SRIOV config request since if is down!
[ 14.535991] bnxt_en 0000:02:00.0: 4 VFs requested; only 0 enabled
[ 14.545014] bnxt_en 0000:02:00.1 bcm57406_pf1: Reject SRIOV config request since if is down!
[ 14.545019] bnxt_en 0000:02:00.1: 4 VFs requested; only 0 enabled
[ 14.545101] bnxt_en 0000:02:00.0 bcm57406_pf0: Reject SRIOV config request since if is down!
[ 14.545103] bnxt_en 0000:02:00.0: 4 VFs requested; only 0 enabled
[ 14.892797] bnxt_en 0000:02:00.0 bcm57406_pf0: NIC Link is Up, 1000 Mbps full duplex, Flow control: ON - receive & transmit
[ 14.892802] bnxt_en 0000:02:00.0 bcm57406_pf0: EEE is not active
[ 14.892805] bnxt_en 0000:02:00.0 bcm57406_pf0: FEC autoneg on encoding: None

Note that I have port #0 of the card connected to a switch, and no cable on port #1. I think you are right that it may just need more time, or the link may have to reach a particular state before VFs can be requested. I'm not sure whether this affects the same family of cards (BCM5740x / BCM5741x), or the newer (BCM5750x) ones as well. The cards are available in OEM servers like those from Dell (the one I have), IBM, or HP, but I installed mine in a self-assembled NAS, hoping to use the VFs in a "router / AP VM" with VEB. I could also use an X540, but that one lacks features like VEB which I want.

Enoch Leung (leun0036) wrote (last edit ):

Just in case, here's my netplan config. The machine works as a server without any GUI or X server, and since it was upgraded from an older LTS it has no snaps etc. either. The base config is "ubuntu-minimal", and then I installed a few more packages like qemu and libvirtd to make it work as a VM host, and that's it.

---------------------------------------------------
network:
  version: 2
  renderer: networkd
  ethernets:
    # BCM57406 - PCIe 3.0 x8 (x4), up to 63 VF per port
    bcm57406_pf0:
      match:
        macaddress: **:**:**:**:**:**
      set-name: bcm57406_pf0
      virtual-function-count: 4
      # embedded-switch-mode: switchdev
      # delay-virtual-functions-rebind: true
    bcm57406_pf1:
      match:
        macaddress: **:**:**:**:**:**
      set-name: bcm57406_pf1
      virtual-function-count: 4

    # RTL8111H - onboard GbE
    rtl8111h:
      match:
        macaddress: **:**:**:**:**:**
      set-name: rtl8111h

    # RTL8156B - generic USB 3.0 2.5GbE Dongle
    rtl8156b:
      match:
        macaddress: **:**:**:**:**:**
      set-name: rtl8156b

    # AX88179 - ASUS USB 3.0 GbE Dongle
    ax88179:
      match:
        macaddress: **:**:**:**:**:**
      set-name: ax88179

  bridges:
    wan:
      interfaces: [ rtl8156b ]
      macaddress: **:**:**:**:**:**
      dhcp4: yes
      routes:
        - to: *.*.*.*/*
          via: *.*.*.*
          on-link: true
      dhcp-identifier: mac

Lukas Märdian (slyon) wrote :

Thanks for providing additional context! It is really hard for me to debug this remotely, without access to any such hardware.

I guess we should try to find out what exactly triggers this error message and under which conditions:
"[ 4.285390] bnxt_en 0000:02:00.0 bcm57406_p0: Reject SRIOV config request since if is down!"

After a reboot, your VFs are not configured. In that state, could you please check the values of

/sys/class/net/YOUR_PF_DEVICE/device/sriov_numvfs
/sys/class/net/YOUR_PF_DEVICE/device/sriov_totalvfs

Then try creating the VFs manually, doing something like this (once with the PF in the UP state and once in the DOWN state):

$ echo 4 > /sys/class/net/YOUR_PF_DEVICE/device/sriov_numvfs

Then check whether this triggers the same error message in dmesg (and under which conditions).
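
Putting the whole sequence together (a sketch, assuming your PF is named bcm57406_pf0 as in the logs above; note that sriov_numvfs must be reset to 0 before it accepts a new non-zero value):

$ cat /sys/class/net/bcm57406_pf0/device/sriov_numvfs
$ cat /sys/class/net/bcm57406_pf0/device/sriov_totalvfs
$ ip link set dev bcm57406_pf0 up                              # test 1: PF up
$ echo 4 > /sys/class/net/bcm57406_pf0/device/sriov_numvfs
$ echo 0 > /sys/class/net/bcm57406_pf0/device/sriov_numvfs     # reset
$ ip link set dev bcm57406_pf0 down                            # test 2: PF down
$ echo 4 > /sys/class/net/bcm57406_pf0/device/sriov_numvfs
$ dmesg | tail                                                 # look for the 'Reject SRIOV' line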

I'll also try to reach out to another team which might have more context on (and maybe access to) SR-IOV hardware.

Lukas Märdian (slyon) wrote :

Another thing worth trying could be enabling SR-IOV and pre-configuring the virtual-function count in the BIOS settings, and testing variations of the IOMMU kernel parameters.

See "SR-IOV: Configuration and Use Case Examples" (p. 57):
https://dl.dell.com/manuals/all-products/esuprt_ser_stor_net/esuprt_pedge_srvr_ethnt_nic/broadcom-netxtreme-adapters_users-guide3_en-us.pdf

The document above also contains one interesting note: "Note: Ensure that the PF interfaces are up. VFs are only created if PFs are up. X is the number of VFs that will be exported to the OS."

=> Netplan is running the udev rule /run/udev/rules.d/99-sriov-netplan-setup.rules:
ACTION=="add", SUBSYSTEM=="net", ATTRS{sriov_totalvfs}=="?*", RUN+="/usr/sbin/netplan apply --sriov-only"

This might be triggered as soon as the SR-IOV device is detected, when the PFs may not yet be up. Maybe you could create a udev rule that RUNs the same script, but only after the PF is UP...
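
As a sketch (untested; note that link-state transitions may not emit udev "change" events at all), such a rule could be a driver-restricted variant of the shipped one:

# /etc/udev/rules.d/99-sriov-bnxt-retry.rules (hypothetical file name)
ACTION=="change", SUBSYSTEM=="net", DRIVERS=="bnxt_en", ATTRS{sriov_totalvfs}=="?*", RUN+="/usr/sbin/netplan apply --sriov-only"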

Enoch Leung (leun0036) wrote (last edit ):

Reply to #7
----------------------------
After reboot, part of dmesg
============================
[ 4.895696] bnxt_en 0000:02:00.0 bcm57406_pf0: Reject SRIOV config request since if is down!
[ 4.895698] bnxt_en 0000:02:00.0: 4 VFs requested; only 0 enabled
[ 4.895831] bnxt_en 0000:02:00.0 bcm57406_pf0: Reject SRIOV config request since if is down!
[ 4.895834] bnxt_en 0000:02:00.0: 4 VFs requested; only 0 enabled
[ 5.039112] bnxt_en 0000:02:00.0 bcm57406_pf0: NIC Link is Up, 1000 Mbps full duplex, Flow control: ON - receive & transmit
[ 5.039116] bnxt_en 0000:02:00.0 bcm57406_pf0: EEE is not active
[ 5.039118] bnxt_en 0000:02:00.0 bcm57406_pf0: FEC autoneg on encoding: None
============================

now do cat
============================
/sys/class/net/bcm57406_pf0/device/sriov_numvfs = 0
/sys/class/net/bcm57406_pf0/device/sriov_totalvfs = 16 (I set it to only 16, long ago)

just in case, ip link
============================
2: bcm57406_pf0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether **:**:**:**:**:** brd ff:ff:ff:ff:ff:ff

now enable VFs manually
============================
echo 4 > /sys/class/net/bcm57406_pf0/device/sriov_numvfs

VFs are created as expected after the echo command above (I had done this before, long ago); ip link output...
============================
2: bcm57406_pf0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether **:**:**:**:**:** brd ff:ff:ff:ff:ff:ff
    vf 0 link/ether **:**:**:**:**:** brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state auto, trust off
    vf 1 link/ether **:**:**:**:**:** brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state auto, trust off
    vf 2 link/ether **:**:**:**:**:** brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state auto, trust off
    vf 3 link/ether **:**:**:**:**:** brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state auto, trust off

reboot, and put PF to DOWN state
============================
ip link set dev bcm57406_pf0 down
2: bcm57406_pf0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether **:**:**:**:**:** brd ff:ff:ff:ff:ff:ff
============================
/sys/class/net/bcm57406_pf0/device/sriov_numvfs = 0
/sys/class/net/bcm57406_pf0/device/sriov_totalvfs = 16
echo 4 > /sys/class/net/bcm57406_pf0/device/sriov_numvfs

No VF created by the echo command above
============================
[ 258.882374] bnxt_en 0000:02:00.0 bcm57406_pf0: Reject SRIOV config request since if is down!
[ 258.882385] bnxt_en 0000:02:00.0: 4 VFs requested; only 0 enabled

Reply to #8
----------------------------
There is no option to create VFs inside the NIC BIOS; it only allows setting the maximum supported VF count (sriov_totalvfs). I just ran "/usr/sbin/netplan apply --sriov-only" after the reboot again, and all VFs were created successfully. So very likely this issue is related to the "Ensure that the PF interfaces are up. VFs are only created if PFs are up" remark.

Note that I intentionally have one port connected with a cable, and one port with no cable. In both cases I can create VFs...


Lukas Märdian (slyon) wrote :

> There is no option to create VFs inside the NIC BIOS; it only allows setting the maximum supported VF count (sriov_totalvfs). I just ran "/usr/sbin/netplan apply --sriov-only" after the reboot again, and all VFs were created successfully. So very likely this issue is related to the "Ensure that the PF interfaces are up. VFs are only created if PFs are up" remark.

Right, so this is most probably a timing issue.

> And as a side note, I also tried a X540 (as well as I350 mentioned above), and there is no such problem. VFs are created fine with existing netplan after reboot, with or without cable connected. Never tried to put these Intel cards into DOWN state and try to create VFs though.

So it might be a hardware dependent race condition in the bringup of the interface.

I think we should try experimenting with some custom udev rules, as described in comment #8, and then see whether those should be integrated into Netplan, or whether the driver needs fixing to bring up the interface in time.
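
In the meantime, a possible workaround (a sketch, not a fix; hypothetical unit name) would be a oneshot service that re-runs the SR-IOV pass only once the network is up:

# /etc/systemd/system/sriov-late-apply.service (hypothetical)
[Unit]
Description=Re-apply Netplan SR-IOV configuration once the PFs are up
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/netplan apply --sriov-only

[Install]
WantedBy=multi-user.target

Enable it with "systemctl enable sriov-late-apply.service" and check whether the VFs then appear on every boot.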

Enoch Leung (leun0036) wrote :

If you can provide the udev rule, I can put it to the test. I'm not really familiar with udev rules, or with detecting whether an eth interface is up, let alone with running it only for devices using the Broadcom bnxt_en kernel module for the card in question.

Enoch Leung (leun0036) wrote :

I did some research and found that people have tried udev rules with Ethernet devices, e.g. triggering a udev rule when an interface state change is detected (say from DOWN to UP), but it does not work:

https://stackoverflow.com/questions/40676914/how-to-set-up-a-udev-rule-for-eth-link-down-link-up

Using rsyslog with text parsing sounds possible, as mentioned there, but it is not a good solution nonetheless. Maybe calling inotifywait, as mentioned? As a quirk workaround inside netplan?
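
Something like this polling script is what I have in mind (a sketch; the PF name is assumed, and inotify reportedly does not fire reliably on sysfs attributes, hence plain polling):

#!/bin/sh
# Wait until the PF reports operstate "up", then re-run Netplan's SR-IOV pass.
PF=bcm57406_pf0
while [ "$(cat /sys/class/net/$PF/operstate 2>/dev/null)" != "up" ]; do
    sleep 1
done
exec /usr/sbin/netplan apply --sriov-only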

Lukas Märdian (slyon) wrote :

I don't think adding what seems to be a hardware-specific quirk to Netplan is the proper solution here.
Maybe the driver for this BCM57406 (bnxt_en) device needs to be fixed instead, to accept SR-IOV configuration at boot time?

> And as a side note, I also tried a X540 (as well as I350 mentioned above), and there is no such problem. VFs are created fine with existing netplan after reboot, with or without cable connected. Never tried to put these Intel cards into DOWN state and try to create VFs though.

Did you have any chance to double-check how the X540 behaves when it's in a DOWN state?

> Note that I intentionally have one port connected with a cable, and one port with no cable. In both cases I can create VFs, though I kinda remember that if I use not VEB but none (or VEPA as well), it may not create VFs either unless I have a cable connected to the particular port. Can't confirm easily as I have my machine headless, but that's not as bad as the current issue for sure

You might want to experiment with Netplan's "ignore-carrier: true" setting, in order to ignore the disconnected cable. But I don't have high hopes, as this mostly works on the systemd-networkd layer, while this issue seems to happen on the kernel/driver/sysfs layer.
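
For reference, that would be a one-line addition to the PF stanza from your config above (a sketch):

    bcm57406_pf0:
      match:
        macaddress: **:**:**:**:**:**
      set-name: bcm57406_pf0
      virtual-function-count: 4
      ignore-carrier: true    # configure the link even without a carrier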

Enoch Leung (leun0036) wrote :

Seems like the X540 is fine; it can create VFs in the DOWN state, unlike the Broadcom. It's just that the X540 is missing quite a few interesting features, like VEPA and VEB, that Broadcom offers...

dmesg | grep ixgbe
-----------------------
[ 3.673668] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver
[ 3.673671] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[ 3.681029] ixgbe 0000:01:00.0: enabling device (0000 -> 0002)
[ 4.026720] ixgbe 0000:01:00.0: Multiqueue Enabled: Rx Queue count = 4, Tx Queue count = 4 XDP Queue count = 0
[ 4.148607] ixgbe 0000:01:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:00:1c.0 (capable of 32.000 Gb/s with 5.0 GT/s PCIe x8 link)
[ 4.183467] ixgbe 0000:01:00.0: MAC: 3, PHY: 0, PBA No: 000000-000
[ 4.183471] ixgbe 0000:01:00.0: **:**:**:**:**:**
[ 4.389907] ixgbe 0000:01:00.0: Intel(R) 10 Gigabit Network Connection
[ 4.392301] ixgbe 0000:01:00.1: enabling device (0000 -> 0002)
[ 4.685988] ixgbe 0000:01:00.1: Multiqueue Enabled: Rx Queue count = 4, Tx Queue count = 4 XDP Queue count = 0
[ 4.702556] ixgbe 0000:01:00.0 eth2: SR-IOV enabled with 4 VFs
[ 4.774971] ixgbe 0000:01:00.1: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:00:1c.0 (capable of 32.000 Gb/s with 5.0 GT/s PCIe x8 link)
[ 4.798867] ixgbe 0000:01:00.1: MAC: 3, PHY: 0, PBA No: 000000-000
[ 4.798871] ixgbe 0000:01:00.1: **:**:**:**:**:**
[ 5.022018] ixgbe 0000:01:00.0: Multiqueue Enabled: Rx Queue count = 4, Tx Queue count = 4 XDP Queue count = 0
[ 5.041904] ixgbe 0000:01:00.1: Intel(R) 10 Gigabit Network Connection
[ 5.068702] ixgbe 0000:01:00.1 x540_p1: renamed from eth1
[ 5.190445] ixgbe 0000:01:00.0 x540_p0: renamed from eth2
[ 5.196997] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver
[ 5.196999] ixgbevf: Copyright (c) 2009 - 2018 Intel Corporation.
[ 5.220879] ixgbevf 0000:01:10.0: enabling device (0000 -> 0002)
[ 5.232365] ixgbevf 0000:01:10.0: PF still in reset state. Is the PF interface up?
[ 5.232369] ixgbevf 0000:01:10.0: Assigning random MAC address
[ 5.235195] ixgbevf 0000:01:10.0: **:**:**:**:**:**
[ 5.235199] ixgbevf 0000:01:10.0: MAC: 2
[ 5.235200] ixgbevf 0000:01:10.0: Intel(R) X540 Virtual Function

put PF to DOWN state
============================
ip link set dev x540_p0 down
5: x540_p0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether 6c:92:bf:5d:de:6c brd ff:ff:ff:ff:ff:ff

can create VF in DOWN state
============================
cat /sys/class/net/x540_p0/device/sriov_numvfs = 0
cat /sys/class/net/x540_p0/device/sriov_totalvfs = 63
echo 4 > /sys/class/net/x540_p0/device/sriov_numvfs
5: x540_p0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether **:**:**:**:**:** brd ff:ff:ff:ff:ff:ff
    vf 0 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off, query_rss off
    vf 1 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off, query_rss off
    vf 2 link/ether 00:00:00:00:0...

