Net tools cause kernel soft lockup after DPDK touched VirtIO-pci devices
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
dpdk (Ubuntu) | Fix Released | Medium | Christian Ehrhardt |
dpdk (Ubuntu Xenial) | Fix Released | Undecided | Unassigned |
linux (Ubuntu) | Invalid | Medium | Unassigned |
linux (Ubuntu Xenial) | Invalid | Undecided | Unassigned |
Bug Description
Guys,
I'm facing an issue here with both "ethtool" and "ip" while trying to manage VirtIO PCI devices that are blacklisted from DPDK.
You'll need an Ubuntu Xenial KVM guest with 4 VirtIO vNIC cards to run these tests.
PCI device example from inside a Xenial guest:
---
# lspci | grep Ethernet
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device
00:06.0 Ethernet controller: Red Hat, Inc Virtio network device
---
Where "ens3" is the first / default interface, attached to Libvirt's "default" network. The "ens4" is reserved for "ethtool / ip" tests (attached to another Libvirt's network without IPs or DHCP), "ens5" will be "dpdk0" and "ens6" "dpdk1"...
---
*** How does it work?
1- For example, to enable multi-queue on the soon-to-be DPDK devices, boot your Xenial guest and run:
ethtool -L ens5 combined 4
ethtool -L ens6 combined 4
2- Install openvswitch-
https:/
service openvswitch-switch stop
service dpdk stop
OVS DPDK Options (/etc/default/
--
DPDK_OPTS='--dpdk -c 0x1 -n 4 --socket-mem 1024 --pci-blacklist 0000:00:
--
service dpdk start
service openvswitch-switch start
- Enable multi-queue on OVS+DPDK inside the VM:
ovs-vsctl set Open_vSwitch . other_config:
ovs-vsctl set Open_vSwitch . other_config:
* Multi-queue apparently works! ovs-vswitchd consumes more than 100% of CPU, meaning that multi-queue is there...
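For reference, a common way to request multiple rx queues in OVS 2.5 plus a quick way to verify the result; the n-dpdk-rxqs key and the values here are assumptions, not necessarily what was configured above:
# assumed OVS 2.5 style global rx queue setting
ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=4
# read back the channel (queue) configuration of the virtio devices
ethtool -l ens5
ethtool -l ens6
# the PMD threads of ovs-vswitchd show up as the busy threads
top -H -p "$(pidof ovs-vswitchd)"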
*** Where does it fail?
1- Reboot the VM and try to run ethtool again (or go straight to 2 below):
ethtool -L ens5 combined 4
2- Try to fire up ens4:
ip link set dev ens4 up
# FAIL! Both commands hang, consuming 100% of the guest's CPU...
So it looks like a Linux fault, because it is "allowing" the DPDK VirtIO app (a userland app) to interfere with kernel devices in a strange way...
Best,
Thiago
ProblemType: Bug
DistroRelease: Ubuntu 16.04
Package: linux-image-
ProcVersionSignature:
Uname: Linux 4.4.0-18-generic x86_64
AlsaDevices:
total 0
crw-rw---- 1 root audio 116, 1 Apr 14 00:35 seq
crw-rw---- 1 root audio 116, 33 Apr 14 00:35 timer
AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
ApportVersion: 2.20.1-0ubuntu1
Architecture: amd64
ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
AudioDevicesInUse: Error: [Errno 2] No such file or directory: 'fuser'
CRDA: N/A
Date: Thu Apr 14 01:27:27 2016
HibernationDevice: RESUME=
InstallationDate: Installed on 2016-04-07 (7 days ago)
InstallationMedia: Ubuntu-Server 16.04 LTS "Xenial Xerus" - Beta amd64 (20160406)
IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
Lsusb:
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
MachineType: QEMU Standard PC (i440FX + PIIX, 1996)
PciMultimedia:
ProcFB: 0 VESA VGA
ProcKernelCmdLine: BOOT_IMAGE=
RelatedPackageVersions:
linux-
linux-
linux-firmware N/A
RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
SourcePackage: linux
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.date: 04/01/2014
dmi.bios.vendor: SeaBIOS
dmi.bios.version: Ubuntu-
dmi.chassis.type: 1
dmi.chassis.vendor: QEMU
dmi.chassis.
dmi.modalias: dmi:bvnSeaBIOS:
dmi.product.name: Standard PC (i440FX + PIIX, 1996)
dmi.product.
dmi.sys.vendor: QEMU
Thiago Martins (martinx) wrote : | #1 |
Changed in linux (Ubuntu): | |
status: | New → Confirmed |
Christian Ehrhardt (paelzer) wrote : Re: Network tools like "ethtool" or "ip" freezes when DPDK Apps are running with VirtIO | #3 |
FYI - I ran out of time today, but since I work on DPDK anyway I'll try to reproduce this tomorrow morning.
Changed in dpdk (Ubuntu): | |
status: | New → Confirmed |
importance: | Undecided → Medium |
assignee: | nobody → ChristianEhrhardt (paelzer) |
Christian Ehrhardt (paelzer) wrote : | #4 |
Repro:
OVS-DPDK starting up seems fine initializing my non-blacklisted card
DPDK_OPTS are '--dpdk -c 0x6 -n 4 --pci-blacklist 0000:00:03.0 -m 2048'
Allowing PMDs on two CPUs consuming 2G of huge pages
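As a quick sanity check (not part of the original repro) one can confirm the guest really has the huge pages backing -m / --socket-mem:
# huge page pool size and free pages as seen by the guest
grep -i hugepages /proc/meminfo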
Before adding Ports config is done with
ovs-vsctl set Open_vSwitch . other_config:
ovs-vsctl set Open_vSwitch . other_config:
Port is added like:
ovs-vsctl add-port ovsdpdkbr0 dpdk0 -- set Interface dpdk0 type=dpdk
Two PMDs are seen
dpif_netdev|
bridge|INFO|bridge ovsdpdkbr0: added interface dpdk0 on port 1
dpif_netdev(
dpif_netdev(
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
26062 root 10 -10 2816828 105612 16328 R 99.9 1.7 1:09.46 pmd12
26061 root 10 -10 2816828 105612 16328 R 99.9 1.7 1:09.42 pmd13
Now I should be in a similar state as you are.
I know the assumption so far was that the reboot (which resets the #queues on the device) might be involved.
But I first wanted to try what changing queues without reboot would do.
Even setting it down from 4 to 3 (remember I only use 2 actively) runs into the hang.
ethtool -L eth1 combined 3
=> I can see a hang of the ethtool program
Good thing: at least it seems we can take the reboot out of our thinking. Just changing the number of queues while (OVS-)DPDK is attached already makes it very unhappy.
Please note the discussion that could be related: http://
For confirmation @Thiago - is it only the ethtool program that hangs for you as well, or the full guest?
Next steps:
- gather debug data on hanging ethtool
- check what happens if we run ethtool after stopping OVS-DPDK (to match the upstream discussion)
- check what happens if we have testpmd enabled instead of openvswitch-dpdk
Christian Ehrhardt (paelzer) wrote : | #5 |
The hanging ethtool still shows up as running:
F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
4 0 26330 26263 20 0 7588 980 - R+ pts/2 33:52 \_ ethtool -L eth1 combined 3
Everything that touches it seems to get affected too, so e.g. an ltrace/strace of the process gets stuck as well.
Meanwhile the log on virsh console of the guest goes towards soft lockups:
[ 568.394870] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [ethtool:26330]
[ 575.418868] INFO: rcu_sched self-detected stall on CPU
[ 575.419674] 0-...: (14999 ticks this GP) idle=66d/
[ 575.420779] (t=15000 jiffies g=11093 c=11092 q=9690)
More Info in the journal:
NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [ethtool:26330]
Modules linked in: openvswitch nf_defrag_ipv6 nf_conntrack isofs ppdev kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul parport_pc parport joydev serio_raw iscsi_tcp libiscsi_tcp libiscsi scsi_transport_
CPU: 0 PID: 26330 Comm: ethtool Not tainted 4.4.0-18-generic #34-Ubuntu
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-
task: ffff8801b747d280 ti: ffff8800ba58c000 task.ti: ffff8800ba58c000
RIP: 0010:[<
RSP: 0018:ffff8800ba
RAX: 0000000000000000 RBX: ffff8800bba62840 RCX: ffff8801b64a9000
RDX: 000000000000c010 RSI: ffff8800ba58fb64 RDI: ffff8800bba6c400
RBP: ffff8800ba58fbf8 R08: 0000000000000004 R09: ffff8801b9001b00
R10: ffff8801b671b080 R11: 0000000000000246 R12: 0000000000000002
R13: ffff8800ba58fb88 R14: 0000000000000000 R15: 0000000000000004
FS: 00007fb57d56c70
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fb57cd7b680 CR3: 00000000ba85a000 CR4: 00000000001406f0
Stack:
ffff8800ba58fc28 ffffea0002ee9882 0000000200000940 0000000000000000
0000000000000000 ffffea0002ee9882 0000000100000942 0000000000000000
0000000000000000 ffff8800ba58fb68 ffff8800ba58fc10 ffff8800ba58fb88
Call Trace:
[<ffffffff815
[<ffffffff815
[<ffffffff817
[<ffffffff817
[<ffffffff811
[<ffffffff817
[<ffffffff817
[<ffffffff811
[<ffffffff817
[<ffffffff811
[<ffffffff816
[<ffffffff816
[<ffffffff812
[<ffffffff810
[<ffffffff812
[<ffffffff818
Code: 44 89 e2 4c 89 6c c5 b0 e8 3b dc ec ff 48 8b 7b 08 e8 f2 db...
Christian Ehrhardt (paelzer) wrote : | #6 |
I tested to change a device touched (initialized) by DPDK, but not yet on an Openvswitch bridge (no port added).
That hangs/stalls as well, so it is not required to add the port to openvswitch-dpdk.
The next step was to exclude openvswitch completely, therefore I ran testpmd against those ports.
/usr/bin/testpmd --pci-blacklist 0000:00:03.0 --socket-mem 2048 -- --interactive --total-
After the test I ran ethtool and boom, hangs again.
Test gets simpler and simpler.
Christian Ehrhardt (paelzer) wrote : | #7 |
I confirmed that before running testpmd I can change the number of used queues just fine.
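In other words, only the ordering matters. A minimal before/after sketch using the same commands as above (interface name and blacklist address as in this repro):
# before DPDK touched the device - both calls return fine
ethtool -L eth1 combined 3
ethtool -L eth1 combined 4
# initialize (and quit) testpmd on the virtio devices
/usr/bin/testpmd --pci-blacklist 0000:00:03.0 --socket-mem 2048 -- --interactive
# after testpmd ran, the very same call hangs
ethtool -L eth1 combined 3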
Christian Ehrhardt (paelzer) wrote : | #8 |
This is not happening on an ixgbe device that was formerly used by DPDK.
But then, that device had to be bound to uio_pci_generic and rebound to be reachable by ethtool.
Now in the virtual environment force a reinit of the driver after DPDK used it
Reinitialize the driver by rebinding it
apt-get install linux-image-
/usr/bin/testpmd --pci-blacklist 0000:00:03.0 --socket-mem 2048 -- --interactive --total-
dpdk_nic_bind -b uio_pci_generic 0000:00:04.0
dpdk_nic_bind -b virtio-pci 0000:00:04.0
We see the device "reinitialized" back on the virtio-pci driver.
It is also back to 1 of 4 queues being used (as after a reboot).
Now this works fine:
ethtool -L eth1 combined 4
ethtool -L eth1 combined 3
ethtool -L eth1 combined 4
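To see which driver a device is currently bound to (and therefore whether the rebind took effect), a quick sketch assuming the dpdk_nic_bind wrapper shipped with the Ubuntu dpdk package:
# overview of kernel- vs. DPDK-bound network devices
dpdk_nic_bind --status
# per-device check straight from sysfs
readlink /sys/bus/pci/devices/0000:00:04.0/driver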
summary: |
- Network tools like "ethtool" or "ip" freezes when DPDK Apps are running with VirtIO
+ Net tools cause kernel soft lockup after DPDK touched VirtIO-pci devices |
Christian Ehrhardt (paelzer) wrote : | #9 |
There is no obvious run-until-success loop in any of the involved code.
Only this in virtnet_send_command:
/* Spin for a response, the kick causes an ioport write, trapping
 * into the hypervisor, so the request should be handled immediately.
 */
while (!virtqueue_get_buf(vi->cvq, &tmp) &&
       !virtqueue_is_broken(vi->cvq))
	cpu_relax();
We need to catch who is calling whom, and how often, to get a better idea of what is going on when it gets stuck.
Interesting entries from the stack are:
cpu_relax
virtnet_send_command
virtnet_set_queues
virtnet_set_channels
ethtool_set_channels
dev_ethtool
cd /sys/kernel/debug/tracing
echo 0 > tracing_on
echo function_graph > current_tracer
tail -f trace
# get the global view and one per CPU from trace and per_cpu/cpu*/trace
echo 1 > tracing_on
ethtool -L eth1 combined 3
The system is stuck badly enough that these all hang immediately without reporting anything.
Need to go deeper with debugging, but that is probably monday then.
Changed in linux (Ubuntu): | |
importance: | Undecided → Medium |
Christian Ehrhardt (paelzer) wrote : | #10 |
Unlike in the upstream discussion I linked above - which is around a similar, but apparently unrelated, issue - in our case interrupts, memory, and the like reported by lspci and /proc/interrupts stay just "as-is".
No change due to running dpdk on that device.
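The checks behind that statement are simple; a sketch comparing the device before and after running testpmd (slot address as used throughout this bug):
# capability / IRQ / memory view of the virtio NIC
lspci -vvs 00:04.0
# interrupt counters of the virtio devices
grep -i virtio /proc/interrupts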
I'd not even consider it all too broken if the tools said "go away, I'm broken" until the device is reinitialized by e.g. a driver reload.
But the hang is too severe.
Christian Ehrhardt (paelzer) wrote : | #11 |
Since ftrace failed me I switched to gdb via the qemu -s parameter.
Debuginfo and source of guest kernel on the Host:
sudo apt-get install linux-tools-
sudo pull-lp-source linux 4.4.0-18.34
sudo mkdir -p /build/
Edit that into the guest and restart:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<qemu:commandline>
<qemu:arg value='-s'/>
</qemu:commandline>
gdb /usr/lib/
b dev_ethtool
b ethtool_
b virtnet_
b virtnet_set_queues
Then on the guest run
sudo /usr/bin/testpmd --pci-blacklist 0000:00:03.0 --socket-mem 2048 -- --interactive --total-
Attach gdb with
target remote :1234
Then on the guest trigger the bug
sudo ethtool -L eth1 combined 3
It is really "hanging" on that virtnet_
As expected the loop never breaks.
1010 /* Spin for a response, the kick causes an ioport write, trapping
1011  * into the hypervisor, so the request should be handled immediately.
1012  */
1013 while (!virtqueue_get_buf(vi->cvq, &tmp) &&
1014        !virtqueue_is_broken(vi->cvq))
1015         cpu_relax();
1016
1017 return vi->ctrl_status == VIRTIO_NET_OK;
(gdb) n
1014        !virtqueue_is_broken(vi->cvq))
(gdb)
1013 while (!virtqueue_get_buf(vi->cvq, &tmp) &&
(gdb)
1015 cpu_relax();
[...]
Infinite loop.
Christian Ehrhardt (paelzer) wrote : | #12 |
Breaking on the two check functions and the calling one to see where things break:
b virtnet_send_command
# virtqueue_get_buf gets hit by __do_softirq -> napi_poll -> virtnet_poll -> virtnet_receive -> virtqueue_get_buf all the time.
Need to keep that disabled and step INTO from virtnet_send_command instead.
b virtqueue_get_buf
b virtqueue_is_broken
Here is what we see in the two checkers then
virtqueue_get_buf (_vq=0xffff8801
p *(_vq)
$12 = {list = {next = 0xffff8801b69c8b00, prev = 0xffff8801b640d
index = 8, num_free = 63, priv = 0x1c010}
if (unlikely(!vq->data[i])) {
        BAD_RING(vq, "id %u is not a head!\n", i);
        return NULL;
}
ret = vq->data[i];
[...]
return ret;
So this should for sure be valid when returning or we would see the BAD_RING.
But then it keeps looping after returning, on
while (!virtqueue_get_buf(vi->cvq, &tmp) && ...)
So we "should be" (tm) safe to assume that we always get a good buffer back, and yet we never get one?
Too much is optimized out by default to take a much deeper look.
I need to understand more what happens there, so I'm going to recompile the kernel with extra stuff, more debug and less optimization.
Christian Ehrhardt (paelzer) wrote : | #13 |
pull-lp-source linux 4.4.0-18.34
Build from source with oldconfig and such
Enable all kind of debug for virtio
Add some checks where we expect it to fail
mkdir /home/ubuntu/
# not needed make INSTALL_
make INSTALL_
<kernel>
<cmdline>
Attach debugger as before and retrigger the bug
Ensure /home/ubuntu/
On boot my debug output starts to show up for the one device that gets initialized on boot:
[ 3.557697] __virtqueue_
[ 3.559320] __virtqueue_
[ 3.560515] __virtqueue_
Prep issue:
sudo /usr/bin/testpmd --pci-blacklist 0000:00:03.0 --socket-mem 2048 -- --interactive --total-
* it might be worth mentioning that nothing regarding the queues showed up while running testpmd - neither on the console nor in gdb
Trigger hang:
sudo ethtool -L eth1 combined 3
__virtqueue_
__virtqueue_
[...]
With the debug we have we can check the vvq's status
BTW - the offset of that container_of is 0 - so we can just cast it :-/
$4 = {vq = {list = {next = 0xffff8800bb892b00, prev = 0xffff8801b7518
vdev = 0xffff8800bb892800, index = 8, num_free = 63, priv = 0x1c010}, vring = {num = 64, desc = 0xffff8801b7514000, avail = 0xffff8801b7514400,
used = 0xffff8801b7515
avail_
So it considers itself not broken.
But I've seen it run over this pr_debug, which is usually disabled (so we don't see it by default):
pr_debug("No more buffers in queue\n");
That depends on !more_used(vq)
Which is:
return vq->last_used_idx != virtio16_to_cpu(vq->vq.vdev, vq->vring.used->idx);
0 != 0
(gdb) p ((struct vring_virtqueue *)0xffff8800bba
$19 = {num = 64, desc = 0xffff8801b7514000, avail = 0xffff8801b7514400, used = 0xffff8801b7515000}
(gdb) p *((struct vring_virtqueue *)0xffff8800bba
$21 = {flags = 0, idx = 0, ring = 0xffff8801b7515004}
(gdb) p *((struct vring_virtqueue *)0xffff8800bba
$22 = {flags = 1, idx = 1, ring = 0xffff8801b7514404}
(gdb) p *((struct vring_virtqueue *)0xffff8800bba
$23 = {addr = 3140568064, len = 48, flags = 4, next = 1}
0 != 0 => false -> so more_used returns false
But the call said !more_used, so virtqueue_get_buf returns NULL - and that is all it does "forever".
Christian Ehrhardt (paelzer) wrote : | #14 |
Christian Ehrhardt (paelzer) wrote : | #15 |
Before going into discussions about how it "should" behave, I added more debug code and gathered some good-case vs. bad-case data.
First of all, it is "ok" to have no more buffers.
I had a printk in a codepath that only triggers when !more_used triggers.
And I've seen plenty of hits for all kinds of idx values.
On adding virtio traffic it triggers a few times as well.
After all, that is what the loop is for: to wait until there is a buffer it can get.
So things aren't broken if this triggers at some point - but of course they are if it never changes.
IIRC: last_used being equal to vring_used->idx just means nothing happened since our last interaction (to be confirmed).
Good case:
Some !more_used hits might occur, but not related and not infinitely
[ 393.542550] __virtqueue_
[ 394.097117] __virtqueue_
[ 394.097413] __virtqueue_
[...]
[ 394.449672] __virtqueue_
[ 394.452734] __virtqueue_
[ 394.455087] __virtqueue_
Done
Bad case (after DPDK ran):
Now both debug printk's trigger
I get a LOT of
[ 552.018862] __virtqueue_
Followed by sequences like this in between
[ 554.157376] __virtqueue_
[ 554.158916] __virtqueue_
[ 554.160135] __virtqueue_
[ 554.161583] __virtqueue_
[ 554.162776] __virtqueue_
[ 554.164189] __virtqueue_
[...] (infinite loop)
Current assumption: DPDK disables something in the host side of the virtio device that makes the host no longer respond "correctly".
Via unbinding/binding the driver we can reinitialize that, but if not we will run into this hang.
Remember: we only initialize DPDK with testpmd, no load whatsoever is driven by it.
We likely need two fixes:
1. find what DPDK does "to" the device and avoid it
2. the kernel should give up after some number of retries and return a failure (not good, but much better than hanging)
Christian Ehrhardt (paelzer) wrote : | #16 |
Tested latest upstream versions to be sure we are not just missing a patch that already exists.
Bug still happens with linux-4.
Christian Ehrhardt (paelzer) wrote : | #17 |
Discussing with Thomas Monjalon revealed a set of post-2.2 patches.
These no longer let you initialize DPDK on a device while a kernel driver - like virtio-pci - is still bound to it.
I already proved earlier in this bug that reinitializing it via virtio-pci will properly set it up and make it workable again.
So I intend to backport and test those patches together with some more for the next upload.
This will need some more doc updates and also gets rid of users accidentally killing their connection by failing to blacklist their main virtio device.
Thanks to Thomas for identifying these.
@Martin - until then the proper "workaround" is to reinitialize via e.g.:
dpdk_nic_bind -b uio_pci_generic 0000:00:04.0
dpdk_nic_bind -b virtio-pci 0000:00:04.0
The attachment "printk debugging around the issue of never getting out of the loop in virtnet_
[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issues please contact him.]
tags: | added: patch |
Christian Ehrhardt (paelzer) wrote : | #19 |
Now working - a device the kernel is still "in touch" with gets rejected rather than used.
EAL: probe driver: 1af4:1000 rte_virtio_pmd
EAL: Error - exiting with code: 1
Cause: Requested device 0000:00:05.0 cannot be used
You have to at least unbind them now to use them with DPDK:
sudo dpdk_nic_bind -u 0000:00:04.0
You can assign them to uio_pci_generic if you want, but it is not required
sudo dpdk_nic_bind -b uio_pci_generic 0000:00:05.0
Using testpmd now on those works as before (you still need to blacklist/whitelist, as it can't know which ones to use).
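For example, an explicit whitelist run on the two devices unbound above could look like this (illustrative only; -w is the EAL PCI whitelist option):
# only the devices taken away from virtio-pci are handed to testpmd
sudo /usr/bin/testpmd -w 0000:00:04.0 -w 0000:00:05.0 --socket-mem 2048 -- --interactive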
Then reassign the kernel driver to use them "normally" again:
sudo dpdk_nic_bind -b virtio-pci 0000:00:04.0
sudo dpdk_nic_bind -b virtio-pci 0000:00:05.0
After this re-init I can properly use them again e.g.:
sudo ethtool -L ens5 combined 4
Christian Ehrhardt (paelzer) wrote : | #20 |
I'll try to make the rejection error more "readable" and check that the docs still match.
Other than that it will be in the next upload for DPDK.
Until then (at your own risk) one can try https:/
Christian Ehrhardt (paelzer) wrote : | #21 |
Xenial is released, so we are back in SRU mode.
Therefore I'm adding the matching SRU template for the upload of 2.2.0-0ubuntu8, which is in the unapproved queue atm.
[Impact]
* using devices from DPDK and the kernel at once drives the system into hangs
* the fix avoids using devices in DPDK that are still in use by the kernel
* the fix is a backport of an upstream-accepted patch
[Test Case]
* run dpdk in a guest on virtio-pci devices
* afterwards do anything that touches the queues of the device, like ethtool -L (see the sketch below)
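A concrete sequence for this test case, put together from the commands used earlier in this bug (device address and interface name are from this report and may differ elsewhere):
# 1) let DPDK initialize the virtio-pci devices; with the fixed package this
#    is rejected unless the device was unbound from virtio-pci first
sudo /usr/bin/testpmd --pci-blacklist 0000:00:03.0 --socket-mem 2048 -- --interactive
# 2) touch the queue configuration of one of those devices;
#    with an unfixed dpdk this call soft-locks the guest
sudo ethtool -L ens5 combined 4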
[Regression Potential]
* Some existing setups might no longer work if they set up DPDK on kernel-owned devices. But that is intentional, as they are only one step away from breaking their systems
* The documentation in the server guide has been adapted to reflect the new needs (merge proposal waits for ack)
* also the comments and examples in the config files have been adapted to reflect the new style
* passed ADT tests on i386/amd64/
Hello Thiago, or anyone else affected,
Accepted dpdk into xenial-proposed. The package will build now and be available at https:/
Please help us by testing this new package. See https:/
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed.
Further information regarding the verification process can be found at https:/
tags: | added: verification-needed |
Christian Ehrhardt (paelzer) wrote : | #23 |
FYI - Verified in Proposed.
Next I need to prep some Yakkety tests so I can reasonably request an upload to Yakkety to allow migration, as Martin indicated.
tags: | added: verification-done-xenial |
tags: | added: verification-done removed: verification-needed |
Launchpad Janitor (janitor) wrote : | #24 |
This bug was fixed in the package dpdk - 2.2.0-0ubuntu9
---------------
dpdk (2.2.0-0ubuntu9) yakkety; urgency=medium
* d/p/ubuntu-
- don't let DPDK initialize virtio devices still in use by the kernel
- this avoids conflicts between kernel and dpdk usage of those devices
- an admin now has to unbind/bind devices as on physical hardware
- this is in the dpdk 16.04 release and delta can then be dropped
- d/dpdk-
- d/dpdk.interfaces update for changes in virtio-pci handling
* d/p/ubuntu-
- call vhost_destroy_
- this likely is in the dpdk 16.07 release and delta can then be dropped
* d/p/ubuntu-
- when vhost_user sockets are created they are owner:group of the process
- the DPDK api to create those has no way to specify owner:group
- to fix that without breaking the API and potential workaround code in
consumers of the library like openvswitch 2.6 for example. This patch
adds an EAL commandline option to specify user:group created vhost_user
sockets should have.
-- Christian Ehrhardt <email address hidden> Wed, 27 Apr 2016 07:52:48 -0500
Changed in dpdk (Ubuntu): | |
status: | Confirmed → Fix Released |
Thiago Martins (martinx) wrote : | #25 |
Just for the record, after upgrading DPDK (proposed repo), OpenvSwitch+DPDK isn't starting up anymore when inside of a VM...
I am double checking everything again...
Thiago Martins (martinx) wrote : Re: [Bug 1570195] Re: Net tools cause kernel soft lockup after DPDK touched VirtIO-pci devices | #26 |
Never mind, it is working now...
Launchpad Janitor (janitor) wrote : | #27 |
This bug was fixed in the package dpdk - 2.2.0-0ubuntu8
---------------
dpdk (2.2.0-0ubuntu8) xenial; urgency=medium
* d/p/ubuntu-
- don't let DPDK initialize virtio devices still in use by the kernel
- this avoids conflicts between kernel and dpdk usage of those devices
- an admin now has to unbind/bind devices as on physical hardware
- this is in the dpdk 16.04 release and delta can then be dropped
- d/dpdk-
- d/dpdk.interfaces update for changes in virtio-pci handling
* d/p/ubuntu-
- call vhost_destroy_
- this likely is in the dpdk 16.07 release and delta can then be dropped
* d/p/ubuntu-
- when vhost_user sockets are created they are owner:group of the process
- the DPDK api to create those has no way to specify owner:group
- to fix that without breaking the API and potential workaround code in
consumers of the library like openvswitch 2.6 for example. This patch
adds an EAL commandline option to specify user:group created vhost_user
sockets should have.
-- Christian Ehrhardt <email address hidden> Mon, 25 Apr 2016 11:42:40 +0200
Changed in dpdk (Ubuntu Xenial): | |
status: | New → Fix Released |
Chris J Arges (arges) wrote : Update Released | #28 |
The verification of the Stable Release Update for dpdk has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.
Changed in linux (Ubuntu): | |
status: | Confirmed → Invalid |
Changed in linux (Ubuntu Xenial): | |
status: | New → Invalid |
This change was made by a bot.