vfio-pci module not loaded if vswitch_type=none
Affects: StarlingX
Status: Fix Released
Importance: High
Assigned to: Steven Webster
Bug Description
Brief Description
-----------------
If the vswitch_type of the system is set to "none", the vfio-pci module is not loaded on worker nodes. This makes sense for an openstack-enabled system. However, with the introduction of the multus/sriov CNI plugins, this can cause an issue on a non-openstack-enabled node if the user wants to use a DPDK-enabled NetworkAttachmentDefinition.
The sriov-cni will try to bind a device to the vfio-pci driver, but the pod will be unable to launch because the sriov-cni reports an error binding to the unloaded module.
The workaround is simply to load the module manually, but this should occur without user intervention.
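For reference, a minimal sketch of the manual workaround on an affected worker node (the modules-load.d persistence path is an assumption; StarlingX may manage module loading differently):

    # Load the vfio-pci module for the current boot
    sudo modprobe vfio-pci

    # Verify the module is now present
    lsmod | grep vfio_pci

    # Optionally persist across reboots (path is an assumption,
    # not something this report specifies)
    echo vfio-pci | sudo tee /etc/modules-load.d/vfio-pci.conf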
Severity
--------
Major: Unable to use a DPDK-enabled NetworkAttachmentDefinition
Steps to Reproduce
------------------
- Configure system with vswitch_type="none"
- Enable the SRIOV device plugin: system host-label-assign sriovdp=enabled
- Create a DPDK-enabled NetworkAttachmentDefinition (a sketch follows this list)
- Launch a pod referencing the NetworkAttachmentDefinition
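A minimal sketch of such a NetworkAttachmentDefinition is below. The names and resource pool are illustrative only, and the "dpdk" block follows the older sriov-cni dpdk-mode config schema, in which the driver to bind (vfio-pci) is named explicitly; the exact schema depends on the sriov-cni version in use:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: sriov-net-a                # hypothetical name
      annotations:
        k8s.v1.cni.cncf.io/resourceName: intel.com/sriov_net_a   # hypothetical resource pool
    spec:
      config: '{
          "type": "sriov",
          "cniVersion": "0.3.1",
          "name": "sriov-net-a",
          "dpdk": {
              "kernel_driver": "iavf",
              "dpdk_driver": "vfio-pci",
              "dpdk_tool": "/opt/dpdk/usertools/dpdk-devbind.py"
          }
      }'

A pod would then reference it via the annotation k8s.v1.cni.cncf.io/networks: sriov-net-a; at that point the sriov-cni attempts the vfio-pci bind and fails if the module has not been loaded.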
Expected Behavior
------------------
We should automatically load the vfio-pci module if the label sriovdp=enabled is set. This can probably be achieved with an sriov-cni init container.
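A rough sketch of what such an init container might look like (the image, names, and the privileged modprobe approach are assumptions for illustration, not the actual fix):

    initContainers:
      - name: load-vfio-pci            # hypothetical name
        image: busybox                 # any image providing modprobe would do
        command: ["modprobe", "vfio-pci"]
        securityContext:
          privileged: true             # loading kernel modules requires privilege
        volumeMounts:
          - name: lib-modules
            mountPath: /lib/modules    # host kernel modules must be visible in the container
    volumes:
      - name: lib-modules
        hostPath:
          path: /lib/modules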
Actual Behavior
----------------
The user must load the module manually.
Reproducibility
---------------
Reproducible
System Configuration
--------------------
Any system with non-openstack-enabled workers and the sriovdp=enabled label.
Branch/Pull Time/Commit
-----------------------
Master
Changed in starlingx: assignee: nobody → Steven Webster (swebster-wr)
Changed in starlingx: status: Triaged → In Progress
Marking as release gating; related to new multus/sriov container support