kube-sriovdp container in crash loop: can't find /etc/pcidp/config.json
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
StarlingX | Won't Fix | Low | Steven Webster |
Bug Description
Brief Description
-----------------
R4.0 AIO-DX bare-metal installed, with vswitch set to ovs-dpdk. The kube-sriovdp container is in a crash loop, and the logs contain a number of DPDK EAL errors.
[sysadmin@
kube-system kube-sriov-
kube-system kube-sriov-
NIC is Intel 82599. Server is HP BL465c.
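The crash loop is reported because the device plugin cannot read /etc/pcidp/config.json. For reference, a hedged sketch of the general shape of the config the upstream SR-IOV network device plugin consumes at that path; the resource name, vendor/device IDs, and driver below are illustrative assumptions for an 82599 VF setup, not values taken from this system:

```shell
# Hedged sketch: write a sample config of the shape the SR-IOV device
# plugin expects at /etc/pcidp/config.json, then check it parses as JSON.
# resourceName, vendor/device IDs (8086/10ed ~ 82599 VF) and the driver
# are illustrative assumptions.
cat > /tmp/config.json <<'EOF'
{
  "resourceList": [
    {
      "resourceName": "intel_sriov_netdevice",
      "selectors": {
        "vendors": ["8086"],
        "devices": ["10ed"],
        "drivers": ["ixgbevf"]
      }
    }
  ]
}
EOF
python3 -m json.tool /tmp/config.json > /dev/null && echo "valid JSON"
```

On a StarlingX node this file is normally generated for the plugin from the sriovdp label and the host's SR-IOV interface configuration, so a missing or empty file may point at the label/interface configuration rather than at the plugin itself.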
daemon.log:
2020-08-
2020-08-
2020-08-
2020-08-
dmesg shows:
[20019.001877] vfio-pci 0000:87:00.0: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.
[20019.166854] vfio-pci 0000:87:00.1: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.
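The RMRR messages mean the platform firmware declares reserved memory regions for these devices (common on HP blades where iLO shares NIC memory), so the kernel refuses to attach them to a vfio IOMMU domain. A small sketch that pulls the affected PCI addresses out of a saved dmesg capture; the file path is illustrative and the sample lines are copied from this report:

```shell
# Sketch: list devices hit by the RMRR restriction in a saved dmesg
# capture. The sample lines below are copied from this report; on a live
# node, feed `dmesg` output in instead.
cat > /tmp/dmesg.txt <<'EOF'
[20019.001877] vfio-pci 0000:87:00.0: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.
[20019.166854] vfio-pci 0000:87:00.1: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.
EOF
grep -oE '[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9]' /tmp/dmesg.txt
```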
[sysadmin@
87:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
87:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
[sysadmin@
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
model name : Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
[sysadmin@
BOOT_IMAGE=
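The kernel command line above is truncated in this report. A hedged sketch of checking it for the IOMMU flags that vfio-pci depends on; the sample line below is illustrative, and on a live node you would grep /proc/cmdline directly:

```shell
# Sketch: check kernel boot args for IOMMU settings. The sample line is
# an illustrative stand-in, not the real cmdline from this host.
echo 'BOOT_IMAGE=/vmlinuz root=/dev/sda2 intel_iommu=on iommu=pt' > /tmp/cmdline
grep -o 'intel_iommu=[^ ]*\|iommu=[^ ]*' /tmp/cmdline
```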
[sysadmin@
+------
| hostname | label key | label value |
+------
| controller-0 | openstack-
| controller-0 | openstack-
| controller-0 | openvswitch | enabled |
| controller-0 | sriov | enabled |
| controller-0 | sriovdp | enabled |
+------
[sysadmin@
+------
| uuid | log_c | processor | phy_c | thread | processor_model | assigned_function |
| | ore | | ore | | | |
+------
| 49d2be67-
| decf7d5d-
| 9e58b2e5-
| ec9d85bb-
| dc7942b3-
| 242e85fa-
+------
Reproducibility
---------------
Reproducible
Steps followed from documentation:
system host-label-assign ${NODE} sriovdp=enabled
system host-memory-modify ${NODE} 0 -1G 32
system host-memory-modify ${NODE} 1 -1G 32
system host-label-assign ${NODE} openstack-
system host-label-assign ${NODE} openstack-
system host-label-assign ${NODE} openvswitch=enabled
system host-label-assign ${NODE} sriov=enabled
system modify --vswitch_type ovs-dpdk
system host-cpu-modify -f vswitch -p0 1 controller-0
system host-memory-modify -f vswitch -1G 1 controller-0 0
system host-memory-modify -f vswitch -1G 1 controller-0 1
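As a sanity check on the memory steps above: each of the two NUMA nodes is asked for 32 x 1G application hugepages plus 1 x 1G vswitch page, so the host needs at least this many 1G hugepages in total:

```shell
# Hedged arithmetic for the hugepage allocations requested above:
# 32 x 1G application pages + 1 x 1G vswitch page, on each of 2 NUMA nodes.
app_pages=32
vswitch_pages=1
numa_nodes=2
echo "total 1G hugepages: $(( numa_nodes * (app_pages + vswitch_pages) ))"
# prints: total 1G hugepages: 66
```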
Note: I have run CentOS 7 + DPDK applications on the same server, so could this be related to StarlingX kernel module settings?
Changed in starlingx:
assignee: nobody → Steven Webster (swebster-wr)
I made BIOS changes to disable "Shared Memory", per:
https://community.hpe.com/t5/proliant-servers-ml-dl-sl/device-is-ineligible-for-iommu-domain-attach-due-to-platform/td-p/6751904
The dmesg and DPDK EAL error logs disappeared, but the container is still in a crash loop.
[sysadmin@controller-0 ~(keystone_admin)]$ dmesg | grep -e DMAR -e IOMMU | more
[ 0.000000] ACPI: DMAR 0x000000007B7E7000 0002AC (v01 HP ProLiant 00000001 HP 00000001)
[ 0.000000] DMAR: IOMMU enabled