2019-04-06 11:51:33 |
Dan Streetman |
bug |
|
|
added bug |
2019-04-06 11:51:49 |
Dan Streetman |
nominated for series |
|
Ubuntu Xenial |
|
2019-04-06 11:51:49 |
Dan Streetman |
bug task added |
|
qemu (Ubuntu Xenial) |
|
2019-04-06 11:51:49 |
Dan Streetman |
nominated for series |
|
Ubuntu Disco |
|
2019-04-06 11:51:49 |
Dan Streetman |
bug task added |
|
qemu (Ubuntu Disco) |
|
2019-04-06 11:51:49 |
Dan Streetman |
nominated for series |
|
Ubuntu Bionic |
|
2019-04-06 11:51:49 |
Dan Streetman |
bug task added |
|
qemu (Ubuntu Bionic) |
|
2019-04-06 11:51:49 |
Dan Streetman |
nominated for series |
|
Ubuntu Cosmic |
|
2019-04-06 11:51:49 |
Dan Streetman |
bug task added |
|
qemu (Ubuntu Cosmic) |
|
2019-04-06 11:51:56 |
Dan Streetman |
qemu (Ubuntu Disco): assignee |
|
Dan Streetman (ddstreet) |
|
2019-04-06 11:51:58 |
Dan Streetman |
qemu (Ubuntu Cosmic): assignee |
|
Dan Streetman (ddstreet) |
|
2019-04-06 11:51:59 |
Dan Streetman |
qemu (Ubuntu Bionic): assignee |
|
Dan Streetman (ddstreet) |
|
2019-04-06 11:52:01 |
Dan Streetman |
qemu (Ubuntu Xenial): assignee |
|
Dan Streetman (ddstreet) |
|
2019-04-06 11:52:05 |
Dan Streetman |
qemu (Ubuntu Disco): importance |
Undecided |
Medium |
|
2019-04-06 11:52:08 |
Dan Streetman |
qemu (Ubuntu Cosmic): importance |
Undecided |
Medium |
|
2019-04-06 11:52:10 |
Dan Streetman |
qemu (Ubuntu Bionic): importance |
Undecided |
Medium |
|
2019-04-06 11:52:15 |
Dan Streetman |
qemu (Ubuntu Disco): status |
New |
In Progress |
|
2019-04-06 11:52:17 |
Dan Streetman |
qemu (Ubuntu Cosmic): status |
New |
In Progress |
|
2019-04-06 11:52:19 |
Dan Streetman |
qemu (Ubuntu Bionic): status |
New |
In Progress |
|
2019-04-06 11:52:21 |
Dan Streetman |
qemu (Ubuntu Xenial): status |
New |
In Progress |
|
2019-04-06 11:52:23 |
Dan Streetman |
qemu (Ubuntu Xenial): importance |
Undecided |
Medium |
|
2019-04-06 12:55:56 |
Dan Streetman |
description |
[impact]
on shutdown of a guest, there is a race condition that results in qemu crashing instead of shutting down normally. The bt looks similar to this (depending on the specific version of qemu, of course; this is taken from the 2.5 version of qemu):
(gdb) bt
#0 __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
#1 0x00005636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
#2 0x00005636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at /build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
#3 0x00005636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, dev=0x5636c1bf6b70)
at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
#4 0x00005636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, ring=0x7ffe65c087e0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
#5 0x00005636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
#6 0x00005636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
#7 0x00005636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, dev=dev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
#8 0x00005636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, ncs=0x5636c209d110, total_queues=total_queues@entry=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
#9 0x00005636c08d7745 in virtio_net_vhost_status (status=7 '\a', n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
#10 virtio_net_set_status (vdev=<optimized out>, status=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
#11 0x00005636c08ec42c in virtio_set_status (vdev=0x5636c2853338, val=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
#12 0x00005636c098fed2 in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
#13 0x00005636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
#14 vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
#15 0x00005636c085d240 in main_loop_should_exit () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
#16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
#17 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683
[test case]
unfortunately, since this is a race condition, it's very hard to reproduce on demand; it depends very much on the overall configuration of the guest as well as how exactly it's shut down - specifically, its vhost-user net must be closed from the host side at a specific time during qemu shutdown.
I have a user with such a setup who has reported that they can reproduce this reliably, but the configuration is too complex for me to replicate, so I have relied on their reproduction and testing to debug and craft the patch for this.
[regression potential]
the change adds flags to prevent repeated calls to both vhost_net_stop() and vhost_net_cleanup() (really, it prevents repeated calls to vhost_dev_cleanup()). Any regression would be seen when stopping and/or cleaning up a vhost net. Regressions might include failure to hot-remove a vhost net from a guest, failure to clean up (i.e. a memory leak), or crashes while stopping or cleaning up a vhost net.
[other info]
this was originally seen in the 2.5 version of qemu - specifically, the UCA version in trusty-mitaka (which uses the xenial qemu codebase). However, this appears to still apply upstream, and I am sending a patch to the qemu list to fix it upstream as well. |
[impact]
on shutdown of a guest, there is a race condition that results in qemu crashing instead of shutting down normally. The bt looks similar to this (depending on the specific version of qemu, of course; this is taken from the 2.5 version of qemu):
(gdb) bt
#0 __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
#1 0x00005636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
#2 0x00005636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at /build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
#3 0x00005636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, dev=0x5636c1bf6b70)
at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
#4 0x00005636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, ring=0x7ffe65c087e0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
#5 0x00005636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
#6 0x00005636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
#7 0x00005636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, dev=dev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
#8 0x00005636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, ncs=0x5636c209d110, total_queues=total_queues@entry=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
#9 0x00005636c08d7745 in virtio_net_vhost_status (status=7 '\a', n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
#10 virtio_net_set_status (vdev=<optimized out>, status=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
#11 0x00005636c08ec42c in virtio_set_status (vdev=0x5636c2853338, val=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
#12 0x00005636c098fed2 in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
#13 0x00005636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
#14 vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
#15 0x00005636c085d240 in main_loop_should_exit () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
#16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
#17 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683
[test case]
unfortunately, since this is a race condition, it's very hard to reproduce on demand; it depends very much on the overall configuration of the guest as well as how exactly it's shut down - specifically, its vhost-user net must be closed from the host side at a specific time during qemu shutdown.
I have a user with such a setup who has reported that they can reproduce this reliably, but the configuration is too complex for me to replicate, so I have relied on their reproduction and testing to debug and craft the patch for this.
[regression potential]
the change adds flags to prevent repeated calls to both vhost_net_stop() and vhost_net_cleanup() (really, it prevents repeated calls to vhost_dev_cleanup()). Any regression would be seen when stopping and/or cleaning up a vhost net. Regressions might include failure to hot-remove a vhost net from a guest, failure to clean up (i.e. a memory leak), or crashes while stopping or cleaning up a vhost net.
[other info]
this was originally seen in the 2.5 version of qemu - specifically, the UCA version in trusty-mitaka (which uses the xenial qemu codebase). However, this appears to still apply upstream, and I am sending a patch to the qemu list to fix it upstream as well.
The specific race condition for this is:
as shown in above bt, thread A starts shutting down qemu, e.g.:
vm_stop->do_vm_stop->vm_state_notify
virtio_set_status
virtio_net_set_status
virtio_net_vhost_status
in this function, the code reaches an if-else check on (!n->vhost_started); the condition is false (i.e. vhost_started is true), so it enters the else block, which calls vhost_net_stop() and then sets n->vhost_started to false.
While thread A is inside vhost_net_stop(), thread B is triggered by the vhost net chr handler with a user event and calls:
net_vhost_user_event
qmp_set_link (from case CHR_EVENT_CLOSED)
virtio_net_set_link_status (via ->link_status_changed)
virtio_net_set_status
virtio_net_vhost_status
notice thread B has now reached the same function that thread A is in; since the checks in the function have not changed, thread B follows the same path that thread A followed, and enters vhost_net_stop().
Since thread A has already shut down and cleaned up some of the internals, once thread B starts trying to also clean up things, it segfaults as shown in the bt.
Avoiding this duplicate call to vhost_net_stop() is necessary, but not sufficient - let's continue to look at what thread B does after its call to qmp_set_link() returns:
net_vhost_user_event
vhost_user_stop
vhost_net_cleanup
vhost_dev_cleanup
However, in main() qemu registers net_cleanup() via atexit(), which does:
net_cleanup
qemu_del_nic (or qemu_del_net_client, depending on ->type)
qemu_cleanup_net_client
vhost_user_cleanup (via ->cleanup)
vhost_net_cleanup
vhost_dev_cleanup
and the duplicate vhost_dev_cleanup fails assertions since things were already cleaned up. |
|
2019-04-06 12:57:48 |
Dan Streetman |
description |
[impact]
on shutdown of a guest, there is a race condition that results in qemu crashing instead of shutting down normally. The bt looks similar to this (depending on the specific version of qemu, of course; this is taken from the 2.5 version of qemu):
(gdb) bt
#0 __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
#1 0x00005636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
#2 0x00005636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at /build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
#3 0x00005636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, dev=0x5636c1bf6b70)
at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
#4 0x00005636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, ring=0x7ffe65c087e0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
#5 0x00005636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
#6 0x00005636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
#7 0x00005636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, dev=dev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
#8 0x00005636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, ncs=0x5636c209d110, total_queues=total_queues@entry=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
#9 0x00005636c08d7745 in virtio_net_vhost_status (status=7 '\a', n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
#10 virtio_net_set_status (vdev=<optimized out>, status=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
#11 0x00005636c08ec42c in virtio_set_status (vdev=0x5636c2853338, val=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
#12 0x00005636c098fed2 in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
#13 0x00005636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
#14 vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
#15 0x00005636c085d240 in main_loop_should_exit () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
#16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
#17 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683
[test case]
unfortunately, since this is a race condition, it's very hard to reproduce on demand; it depends very much on the overall configuration of the guest as well as how exactly it's shut down - specifically, its vhost-user net must be closed from the host side at a specific time during qemu shutdown.
I have a user with such a setup who has reported that they can reproduce this reliably, but the configuration is too complex for me to replicate, so I have relied on their reproduction and testing to debug and craft the patch for this.
[regression potential]
the change adds flags to prevent repeated calls to both vhost_net_stop() and vhost_net_cleanup() (really, it prevents repeated calls to vhost_dev_cleanup()). Any regression would be seen when stopping and/or cleaning up a vhost net. Regressions might include failure to hot-remove a vhost net from a guest, failure to clean up (i.e. a memory leak), or crashes while stopping or cleaning up a vhost net.
[other info]
this was originally seen in the 2.5 version of qemu - specifically, the UCA version in trusty-mitaka (which uses the xenial qemu codebase). However, this appears to still apply upstream, and I am sending a patch to the qemu list to fix it upstream as well.
The specific race condition for this is:
as shown in above bt, thread A starts shutting down qemu, e.g.:
vm_stop->do_vm_stop->vm_state_notify
virtio_set_status
virtio_net_set_status
virtio_net_vhost_status
in this function, the code reaches an if-else check on (!n->vhost_started); the condition is false (i.e. vhost_started is true), so it enters the else block, which calls vhost_net_stop() and then sets n->vhost_started to false.
While thread A is inside vhost_net_stop(), thread B is triggered by the vhost net chr handler with a user event and calls:
net_vhost_user_event
qmp_set_link (from case CHR_EVENT_CLOSED)
virtio_net_set_link_status (via ->link_status_changed)
virtio_net_set_status
virtio_net_vhost_status
notice thread B has now reached the same function that thread A is in; since the checks in the function have not changed, thread B follows the same path that thread A followed, and enters vhost_net_stop().
Since thread A has already shut down and cleaned up some of the internals, once thread B starts trying to also clean up things, it segfaults as shown in the bt.
Avoiding this duplicate call to vhost_net_stop() is necessary, but not sufficient - let's continue to look at what thread B does after its call to qmp_set_link() returns:
net_vhost_user_event
vhost_user_stop
vhost_net_cleanup
vhost_dev_cleanup
However, in main() qemu registers net_cleanup() via atexit(), which does:
net_cleanup
qemu_del_nic (or qemu_del_net_client, depending on ->type)
qemu_cleanup_net_client
vhost_user_cleanup (via ->cleanup)
vhost_net_cleanup
vhost_dev_cleanup
and the duplicate vhost_dev_cleanup fails assertions since things were already cleaned up. |
[impact]
on shutdown of a guest, there is a race condition that results in qemu crashing instead of shutting down normally. The bt looks similar to this (depending on the specific version of qemu, of course; this is taken from the 2.5 version of qemu):
(gdb) bt
#0 __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
#1 0x00005636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
#2 0x00005636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at /build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
#3 0x00005636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, dev=0x5636c1bf6b70)
at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
#4 0x00005636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, ring=0x7ffe65c087e0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
#5 0x00005636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
#6 0x00005636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
#7 0x00005636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, dev=dev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
#8 0x00005636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, ncs=0x5636c209d110, total_queues=total_queues@entry=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
#9 0x00005636c08d7745 in virtio_net_vhost_status (status=7 '\a', n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
#10 virtio_net_set_status (vdev=<optimized out>, status=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
#11 0x00005636c08ec42c in virtio_set_status (vdev=0x5636c2853338, val=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
#12 0x00005636c098fed2 in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
#13 0x00005636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
#14 vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
#15 0x00005636c085d240 in main_loop_should_exit () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
#16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
#17 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683
[test case]
unfortunately, since this is a race condition, it's very hard to reproduce on demand; it depends very much on the overall configuration of the guest as well as how exactly it's shut down - specifically, its vhost-user net must be closed from the host side at a specific time during qemu shutdown.
I have a user with such a setup who has reported that they can reproduce this reliably, but the configuration is too complex for me to replicate, so I have relied on their reproduction and testing to debug and craft the patch for this.
[regression potential]
the change adds flags to prevent repeated calls to both vhost_net_stop() and vhost_net_cleanup() (really, it prevents repeated calls to vhost_dev_cleanup()). Any regression would be seen when stopping and/or cleaning up a vhost net. Regressions might include failure to hot-remove a vhost net from a guest, failure to clean up (i.e. a memory leak), or crashes while stopping or cleaning up a vhost net.
[other info]
this was originally seen in the 2.5 version of qemu - specifically, the UCA version in trusty-mitaka (which uses the xenial qemu codebase). However, this appears to still apply upstream, and I am sending a patch to the qemu list to fix it upstream as well.
The specific race condition for this (in the qemu 2.5 code version) is:
as shown in above bt, thread A starts shutting down qemu, e.g.:
vm_stop->do_vm_stop->vm_state_notify
virtio_set_status
virtio_net_set_status
virtio_net_vhost_status
in this function, the code reaches an if-else check on (!n->vhost_started); the condition is false (i.e. vhost_started is true), so it enters the else block, which calls vhost_net_stop() and then sets n->vhost_started to false.
While thread A is inside vhost_net_stop(), thread B is triggered by the vhost net chr handler with a user event and calls:
net_vhost_user_event
qmp_set_link (from case CHR_EVENT_CLOSED)
virtio_net_set_link_status (via ->link_status_changed)
virtio_net_set_status
virtio_net_vhost_status
notice thread B has now reached the same function that thread A is in; since the checks in the function have not changed, thread B follows the same path that thread A followed, and enters vhost_net_stop().
Since thread A has already shut down and cleaned up some of the internals, once thread B starts trying to also clean up things, it segfaults as shown in the bt.
Avoiding this duplicate call to vhost_net_stop() is necessary, but not sufficient - let's continue to look at what thread B does after its call to qmp_set_link() returns:
net_vhost_user_event
vhost_user_stop
vhost_net_cleanup
vhost_dev_cleanup
However, in main() qemu registers net_cleanup() via atexit(), which does:
net_cleanup
qemu_del_nic (or qemu_del_net_client, depending on ->type)
qemu_cleanup_net_client
vhost_user_cleanup (via ->cleanup)
vhost_net_cleanup
vhost_dev_cleanup
and the duplicate vhost_dev_cleanup fails assertions since things were already cleaned up. |
|
2019-04-06 15:16:26 |
Dan Streetman |
nominated for series |
|
Ubuntu Trusty |
|
2019-04-06 15:16:26 |
Dan Streetman |
bug task added |
|
qemu (Ubuntu Trusty) |
|
2019-04-06 15:16:32 |
Dan Streetman |
qemu (Ubuntu Trusty): status |
New |
In Progress |
|
2019-04-06 15:16:35 |
Dan Streetman |
qemu (Ubuntu Trusty): importance |
Undecided |
Medium |
|
2019-04-06 15:16:38 |
Dan Streetman |
qemu (Ubuntu Trusty): assignee |
|
Dan Streetman (ddstreet) |
|
2019-04-06 15:16:57 |
Dan Streetman |
bug task added |
|
qemu |
|
2019-04-06 15:17:03 |
Dan Streetman |
qemu: status |
New |
In Progress |
|
2019-04-06 15:17:06 |
Dan Streetman |
qemu: assignee |
|
Dan Streetman (ddstreet) |
|
2019-04-11 20:54:48 |
Dan Streetman |
description |
[impact]
on shutdown of a guest, there is a race condition that results in qemu crashing instead of shutting down normally. The bt looks similar to this (depending on the specific version of qemu, of course; this is taken from the 2.5 version of qemu):
(gdb) bt
#0 __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
#1 0x00005636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
#2 0x00005636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at /build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
#3 0x00005636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, dev=0x5636c1bf6b70)
at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
#4 0x00005636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, ring=0x7ffe65c087e0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
#5 0x00005636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
#6 0x00005636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
#7 0x00005636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, dev=dev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
#8 0x00005636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, ncs=0x5636c209d110, total_queues=total_queues@entry=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
#9 0x00005636c08d7745 in virtio_net_vhost_status (status=7 '\a', n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
#10 virtio_net_set_status (vdev=<optimized out>, status=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
#11 0x00005636c08ec42c in virtio_set_status (vdev=0x5636c2853338, val=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
#12 0x00005636c098fed2 in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
#13 0x00005636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
#14 vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
#15 0x00005636c085d240 in main_loop_should_exit () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
#16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
#17 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683
[test case]
unfortunately, since this is a race condition, it's very hard to reproduce on demand; it depends very much on the overall configuration of the guest as well as how exactly it's shut down - specifically, its vhost-user net must be closed from the host side at a specific time during qemu shutdown.
I have a user with such a setup who has reported that they can reproduce this reliably, but the configuration is too complex for me to replicate, so I have relied on their reproduction and testing to debug and craft the patch for this.
[regression potential]
the change adds flags to prevent repeated calls to both vhost_net_stop() and vhost_net_cleanup() (really, it prevents repeated calls to vhost_dev_cleanup()). Any regression would be seen when stopping and/or cleaning up a vhost net. Regressions might include failure to hot-remove a vhost net from a guest, failure to clean up (i.e. a memory leak), or crashes while stopping or cleaning up a vhost net.
[other info]
this was originally seen in the 2.5 version of qemu - specifically, the UCA version in trusty-mitaka (which uses the xenial qemu codebase). However, this appears to still apply upstream, and I am sending a patch to the qemu list to fix it upstream as well.
The specific race condition for this (in the qemu 2.5 code version) is:
as shown in above bt, thread A starts shutting down qemu, e.g.:
vm_stop->do_vm_stop->vm_state_notify
virtio_set_status
virtio_net_set_status
virtio_net_vhost_status
in this function, the code reaches an if-else check on (!n->vhost_started); the condition is false (i.e. vhost_started is true), so it enters the else block, which calls vhost_net_stop() and then sets n->vhost_started to false.
While thread A is inside vhost_net_stop(), thread B is triggered by the vhost net chr handler with a user event and calls:
net_vhost_user_event
qmp_set_link (from case CHR_EVENT_CLOSED)
virtio_net_set_link_status (via ->link_status_changed)
virtio_net_set_status
virtio_net_vhost_status
notice thread B has now reached the same function that thread A is in; since the checks in the function have not changed, thread B follows the same path that thread A followed, and enters vhost_net_stop().
Since thread A has already shut down and cleaned up some of the internals, once thread B starts trying to also clean up things, it segfaults as shown in the bt.
Avoiding this duplicate call to vhost_net_stop() is necessary, but not sufficient - let's continue to look at what thread B does after its call to qmp_set_link() returns:
net_vhost_user_event
vhost_user_stop
vhost_net_cleanup
vhost_dev_cleanup
However, in main() qemu registers net_cleanup() via atexit(), which does:
net_cleanup
qemu_del_nic (or qemu_del_net_client, depending on ->type)
qemu_cleanup_net_client
vhost_user_cleanup (via ->cleanup)
vhost_net_cleanup
vhost_dev_cleanup
and the duplicate vhost_dev_cleanup fails assertions since things were already cleaned up. |
[impact]
on shutdown of a guest, there is a race condition that results in qemu crashing instead of shutting down normally. The bt looks similar to this (depending on the specific version of qemu, of course; this is taken from the 2.5 version of qemu):
(gdb) bt
#0 __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
#1 0x00005636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
#2 0x00005636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at /build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
#3 0x00005636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, dev=0x5636c1bf6b70)
at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
#4 0x00005636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, ring=0x7ffe65c087e0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
#5 0x00005636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
#6 0x00005636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
#7 0x00005636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, dev=dev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
#8 0x00005636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, ncs=0x5636c209d110, total_queues=total_queues@entry=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
#9 0x00005636c08d7745 in virtio_net_vhost_status (status=7 '\a', n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
#10 virtio_net_set_status (vdev=<optimized out>, status=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
#11 0x00005636c08ec42c in virtio_set_status (vdev=0x5636c2853338, val=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
#12 0x00005636c098fed2 in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
#13 0x00005636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
#14 vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
#15 0x00005636c085d240 in main_loop_should_exit () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
#16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
#17 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683
[test case]
unfortunately, since this is a race condition, it's very hard to reproduce on demand; it depends very much on the overall configuration of the guest as well as how exactly it's shut down - specifically, its vhost-user net must be closed from the host side at a specific time during qemu shutdown.
I have a user with such a setup who has reported that they can reproduce this reliably, but the configuration is too complex for me to replicate, so I have relied on their reproduction and testing to debug and craft the patch for this.
[regression potential]
the change adds flags to prevent repeated calls to both vhost_net_stop() and vhost_net_cleanup() (really, it prevents repeated calls to vhost_dev_cleanup(), but vhost_net_cleanup() does nothing else). Any regression would be seen when stopping and/or cleaning up a vhost net. Regressions might include failure to hot-remove a vhost net from a guest, failure to clean up (i.e. a memory leak), or crashes while stopping or cleaning up a vhost net.
However, the flags are very unintrusive, are only touched in the shutdown path (of a vhost_dev or vhost_net), and are unlikely to cause any regressions.
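As a rough illustration of the guard-flag pattern described above, here is a minimal, standalone C sketch (the struct, field, and function names are simplified stand-ins invented for this example, not taken from the actual patch):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for qemu's vhost net state; the two flags
 * illustrate the idempotence idea from the paragraph above. */
struct vhost_net_sketch {
    bool stopped;     /* set once the net has been stopped */
    bool cleaned_up;  /* set once cleanup has run */
};

static void stop_sketch(struct vhost_net_sketch *net)
{
    if (net->stopped) {
        puts("stop: already stopped, duplicate call ignored");
        return;
    }
    net->stopped = true;
    puts("stop: tearing down virtqueues");
}

static void cleanup_sketch(struct vhost_net_sketch *net)
{
    if (net->cleaned_up) {
        puts("cleanup: already done, duplicate call ignored");
        return;
    }
    net->cleaned_up = true;
    puts("cleanup: freeing device state");
}

int main(void)
{
    struct vhost_net_sketch net = {0};
    stop_sketch(&net);     /* thread A: vm_stop path */
    stop_sketch(&net);     /* thread B: chr event path, now a no-op */
    cleanup_sketch(&net);  /* thread B: vhost_user_stop path */
    cleanup_sketch(&net);  /* atexit(net_cleanup) path, now a no-op */
    return 0;
}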
[other info]
this was originally seen in the 2.5 version of qemu - specifically, the UCA version in trusty-mitaka (which uses the xenial qemu codebase). However, this appears to still apply upstream, and I am sending a patch to the qemu list to fix it upstream as well.
The specific race condition for this (in the qemu 2.5 code version) is:
as shown in above bt, thread A starts shutting down qemu, e.g.:
vm_stop->do_vm_stop->vm_state_notify
virtio_set_status
virtio_net_set_status
virtio_net_vhost_status
in this function, the code reaches an if-else check on (!n->vhost_started); the condition is false (i.e. vhost_started is true), so it enters the else block, which calls vhost_net_stop() and then sets n->vhost_started to false.
While thread A is inside vhost_net_stop(), thread B is triggered by the vhost net chr handler with a user event and calls:
net_vhost_user_event
qmp_set_link (from case CHR_EVENT_CLOSED)
virtio_net_set_link_status (via ->link_status_changed)
virtio_net_set_status
virtio_net_vhost_status
notice thread B has now reached the same function that thread A is in; since the checks in the function have not changed, thread B follows the same path that thread A followed, and enters vhost_net_stop().
Since thread A has already shut down and cleaned up some of the internals, once thread B starts trying to also clean up things, it segfaults as shown in the bt.
Avoiding this duplicate call to vhost_net_stop() is necessary, but not sufficient - let's continue to look at what thread B does after its call to qmp_set_link() returns:
net_vhost_user_event
vhost_user_stop
vhost_net_cleanup
vhost_dev_cleanup
However, in main() qemu registers net_cleanup() via atexit(), which does:
net_cleanup
qemu_del_nic (or qemu_del_net_client, depending on ->type)
qemu_cleanup_net_client
vhost_user_cleanup (via ->cleanup)
vhost_net_cleanup
vhost_dev_cleanup
and the duplicate vhost_dev_cleanup fails assertions since things were already cleaned up. |
|
2019-04-15 19:26:22 |
Dan Streetman |
description |
[impact]
on shutdown of a guest, there is a race condition that results in qemu crashing instead of shutting down normally. The bt looks similar to this (depending on the specific version of qemu, of course; this is taken from the 2.5 version of qemu):
(gdb) bt
#0 __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
#1 0x00005636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
#2 0x00005636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at /build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
#3 0x00005636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, dev=0x5636c1bf6b70)
at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
#4 0x00005636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, ring=0x7ffe65c087e0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
#5 0x00005636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
#6 0x00005636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
#7 0x00005636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, dev=dev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
#8 0x00005636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, ncs=0x5636c209d110, total_queues=total_queues@entry=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
#9 0x00005636c08d7745 in virtio_net_vhost_status (status=7 '\a', n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
#10 virtio_net_set_status (vdev=<optimized out>, status=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
#11 0x00005636c08ec42c in virtio_set_status (vdev=0x5636c2853338, val=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
#12 0x00005636c098fed2 in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
#13 0x00005636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
#14 vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
#15 0x00005636c085d240 in main_loop_should_exit () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
#16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
#17 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683
[test case]
unfortunately, since this is a race condition, it's very hard to reproduce on demand; it depends very much on the overall configuration of the guest as well as how exactly it's shut down - specifically, its vhost-user net must be closed from the host side at a specific time during qemu shutdown.
I have a user with such a setup who has reported that they can reproduce this reliably, but the configuration is too complex for me to replicate, so I have relied on their reproduction and testing to debug and craft the patch for this.
[regression potential]
the change adds flags to prevent repeated calls to both vhost_net_stop() and vhost_net_cleanup() (really, it prevents repeated calls to vhost_dev_cleanup(), but vhost_net_cleanup() does nothing else). Any regression would be seen when stopping and/or cleaning up a vhost net. Regressions might include failure to hot-remove a vhost net from a guest, failure to clean up (i.e. a memory leak), or crashes while stopping or cleaning up a vhost net.
However, the flags are very unintrusive, are only touched in the shutdown path (of a vhost_dev or vhost_net), and are unlikely to cause any regressions.
[other info]
this was originally seen in the 2.5 version of qemu - specifically, the UCA version in trusty-mitaka (which uses the xenial qemu codebase). However, this appears to still apply upstream, and I am sending a patch to the qemu list to fix it upstream as well.
The specific race condition for this (in the qemu 2.5 code version) is:
as shown in above bt, thread A starts shutting down qemu, e.g.:
vm_stop->do_vm_stop->vm_state_notify
virtio_set_status
virtio_net_set_status
virtio_net_vhost_status
in this function, the code reaches an if-else check on (!n->vhost_started); the condition is false (i.e. vhost_started is true), so it enters the else block, which calls vhost_net_stop() and then sets n->vhost_started to false.
While thread A is inside vhost_net_stop(), thread B is triggered by the vhost net chr handler with a user event and calls:
net_vhost_user_event
qmp_set_link (from case CHR_EVENT_CLOSED)
virtio_net_set_link_status (via ->link_status_changed)
virtio_net_set_status
virtio_net_vhost_status
notice thread B has now reached the same function that thread A is in; since the checks in the function have not changed, thread B follows the same path that thread A followed, and enters vhost_net_stop().
Since thread A has already shut down and cleaned up some of the internals, once thread B starts trying to also clean up things, it segfaults as shown in the bt.
Avoiding this duplicate call to vhost_net_stop() is necessary, but not sufficient - let's continue to look at what thread B does after its call to qmp_set_link() returns:
net_vhost_user_event
vhost_user_stop
vhost_net_cleanup
vhost_dev_cleanup
However, in main() qemu registers net_cleanup() via atexit(), which does:
net_cleanup
qemu_del_nic (or qemu_del_net_client, depending on ->type)
qemu_cleanup_net_client
vhost_user_cleanup (via ->cleanup)
vhost_net_cleanup
vhost_dev_cleanup
and the duplicate vhost_dev_cleanup fails assertions since things were already cleaned up. |
[impact]
on shutdown of a guest, there is a race condition that results in qemu crashing instead of shutting down normally. The bt looks similar to this (depending on the specific version of qemu, of course; this is taken from the 2.5 version of qemu):
(gdb) bt
#0 __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
#1 0x00005636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
#2 0x00005636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at /build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
#3 0x00005636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, dev=0x5636c1bf6b70)
at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
#4 0x00005636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, ring=0x7ffe65c087e0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
#5 0x00005636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
#6 0x00005636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
#7 0x00005636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, dev=dev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
#8 0x00005636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, ncs=0x5636c209d110, total_queues=total_queues@entry=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
#9 0x00005636c08d7745 in virtio_net_vhost_status (status=7 '\a', n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
#10 virtio_net_set_status (vdev=<optimized out>, status=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
#11 0x00005636c08ec42c in virtio_set_status (vdev=0x5636c2853338, val=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
#12 0x00005636c098fed2 in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
#13 0x00005636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
#14 vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
#15 0x00005636c085d240 in main_loop_should_exit () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
#16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
#17 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683
[test case]
unfortunately, since this is a race condition, it's very hard to reproduce on demand; it depends very much on the overall configuration of the guest as well as how exactly it's shut down - specifically, its vhost-user net must be closed from the host side at a specific time during qemu shutdown.
I have a user with such a setup who has reported that they can reproduce this reliably, but the configuration is too complex for me to replicate, so I have relied on their reproduction and testing to debug and craft the patch for this.
[regression potential]
the change adds a flag to prevent repeated calls to vhost_net_stop(). This also prevents any calls to vhost_net_cleanup() from net_vhost_user_event(). Any regression would be seen when stopping and/or cleaning up a vhost net. Regressions might include failure to hot-remove a vhost net from a guest, failure to clean up (i.e. a memory leak), or crashes while stopping or cleaning up a vhost net.
[other info]
this was originally seen in the 2.5 version of qemu - specifically, the UCA version in trusty-mitaka (which uses the xenial qemu codebase). However, this appears to still apply upstream, and I am sending a patch to the qemu list to fix it upstream as well.
The specific race condition for this (in the qemu 2.5 code version) is:
as shown in above bt, thread A starts shutting down qemu, e.g.:
vm_stop->do_vm_stop->vm_state_notify
virtio_set_status
virtio_net_set_status
virtio_net_vhost_status
in this function, the code reaches an if-else check on (!n->vhost_started); the condition is false (i.e. vhost_started is true), so it enters the else block, which calls vhost_net_stop() and then sets n->vhost_started to false.
While thread A is inside vhost_net_stop(), thread B is triggered by the vhost net chr handler with a user event and calls:
net_vhost_user_event
qmp_set_link (from case CHR_EVENT_CLOSED)
virtio_net_set_link_status (via ->link_status_changed)
virtio_net_set_status
virtio_net_vhost_status
notice thread B has now reached the same function that thread A is in; since the checks in the function have not changed, thread B follows the same path that thread A followed, and enters vhost_net_stop().
Since thread A has already shut down and cleaned up some of the internals, once thread B starts trying to also clean up things, it segfaults as shown in the bt.
Avoiding this duplicate call to vhost_net_stop() is necessary, but not sufficient - let's continue to look at what thread B does after its call to qmp_set_link() returns:
net_vhost_user_event
vhost_user_stop
vhost_net_cleanup
vhost_dev_cleanup
However, in main() qemu registers net_cleanup() via atexit(), which does:
net_cleanup
qemu_del_nic (or qemu_del_net_client, depending on ->type)
qemu_cleanup_net_client
vhost_user_cleanup (via ->cleanup)
vhost_net_cleanup
vhost_dev_cleanup
and the duplicate vhost_dev_cleanup fails assertions since things were already cleaned up. Additionally, if thread B's call to vhost_dev_cleanup() comes before thread A finishes vhost_net_stop(), then vhost_net_stop() will go on to call vhost_dev_stop() and vhost_disable_notifiers(), which both try to access things that have been freed/cleared/disabled by vhost_dev_cleanup(). |
|
2019-04-23 09:12:02 |
Dan Streetman |
qemu (Ubuntu Disco): status |
In Progress |
Fix Released |
|
2019-04-23 09:12:08 |
Dan Streetman |
qemu (Ubuntu): status |
In Progress |
Fix Released |
|
2019-04-23 09:12:11 |
Dan Streetman |
qemu: status |
In Progress |
Fix Released |
|
2019-04-23 09:12:13 |
Dan Streetman |
qemu (Ubuntu Cosmic): status |
In Progress |
Fix Released |
|
2019-04-23 09:12:15 |
Dan Streetman |
qemu (Ubuntu Bionic): status |
In Progress |
Fix Released |
|
2019-04-23 09:12:19 |
Dan Streetman |
qemu (Ubuntu Trusty): status |
In Progress |
Won't Fix |
|
2019-04-23 09:12:26 |
Dan Streetman |
qemu (Ubuntu Disco): assignee |
Dan Streetman (ddstreet) |
|
|
2019-04-23 09:12:27 |
Dan Streetman |
qemu (Ubuntu Cosmic): assignee |
Dan Streetman (ddstreet) |
|
|
2019-04-23 09:12:29 |
Dan Streetman |
qemu (Ubuntu Xenial): assignee |
Dan Streetman (ddstreet) |
|
|
2019-04-23 09:12:31 |
Dan Streetman |
qemu (Ubuntu Bionic): assignee |
Dan Streetman (ddstreet) |
|
|
2019-04-23 09:12:33 |
Dan Streetman |
qemu: assignee |
Dan Streetman (ddstreet) |
|
|
2019-04-23 09:12:36 |
Dan Streetman |
qemu (Ubuntu): assignee |
Dan Streetman (ddstreet) |
|
|
2019-04-23 09:12:39 |
Dan Streetman |
qemu (Ubuntu Xenial): assignee |
|
Dan Streetman (ddstreet) |
|
2019-04-23 09:12:41 |
Dan Streetman |
qemu (Ubuntu Trusty): assignee |
Dan Streetman (ddstreet) |
|
|
2019-04-23 09:17:14 |
Dan Streetman |
description |
[impact]
on shutdown of a guest, there is a race condition that results in qemu crashing instead of shutting down normally. The bt looks similar to this (depending on the specific version of qemu, of course; this is taken from the 2.5 version of qemu):
(gdb) bt
#0 __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
#1 0x00005636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
#2 0x00005636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at /build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
#3 0x00005636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, dev=0x5636c1bf6b70)
at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
#4 0x00005636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, ring=0x7ffe65c087e0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
#5 0x00005636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
#6 0x00005636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
#7 0x00005636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, dev=dev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
#8 0x00005636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, ncs=0x5636c209d110, total_queues=total_queues@entry=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
#9 0x00005636c08d7745 in virtio_net_vhost_status (status=7 '\a', n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
#10 virtio_net_set_status (vdev=<optimized out>, status=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
#11 0x00005636c08ec42c in virtio_set_status (vdev=0x5636c2853338, val=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
#12 0x00005636c098fed2 in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
#13 0x00005636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
#14 vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
#15 0x00005636c085d240 in main_loop_should_exit () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
#16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
#17 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683
[test case]
unfortunately, since this is a race condition, it's very hard to reproduce on demand; it depends very much on the overall configuration of the guest as well as how exactly it's shut down - specifically, its vhost-user net must be closed from the host side at a specific time during qemu shutdown.
I have a user with such a setup who has reported that they can reproduce this reliably, but the configuration is too complex for me to replicate, so I have relied on their reproduction and testing to debug and craft the patch for this.
[regression potential]
the change adds a flag to prevent repeated calls to vhost_net_stop(). This also prevents any calls to vhost_net_cleanup() from net_vhost_user_event(). Any regression would be seen when stopping and/or cleaning up a vhost net. Regressions might include failure to hot-remove a vhost net from a guest, failure to clean up (i.e. a memory leak), or crashes while stopping or cleaning up a vhost net.
[other info]
this was originally seen in the 2.5 version of qemu - specifically, the UCA version in trusty-mitaka (which uses the xenial qemu codebase). However, this appears to still apply upstream, and I am sending a patch to the qemu list to fix it upstream as well.
The specific race condition for this (in the qemu 2.5 code version) is:
as shown in above bt, thread A starts shutting down qemu, e.g.:
vm_stop->do_vm_stop->vm_state_notify
virtio_set_status
virtio_net_set_status
virtio_net_vhost_status
in this function, the code reaches an if-else check on (!n->vhost_started); the condition is false (i.e. vhost_started is true), so it enters the else block, which calls vhost_net_stop() and then sets n->vhost_started to false.
While thread A is inside vhost_net_stop(), thread B is triggered by the vhost net chr handler with a user event and calls:
net_vhost_user_event
qmp_set_link (from case CHR_EVENT_CLOSED)
virtio_net_set_link_status (via ->link_status_changed)
virtio_net_set_status
virtio_net_vhost_status
notice thread B has now reached the same function that thread A is in; since the checks in the function have not changed, thread B follows the same path that thread A followed, and enters vhost_net_stop().
Since thread A has already shut down and cleaned up some of the internals, once thread B starts trying to also clean up things, it segfaults as shown in the bt.
Avoiding this duplicate call to vhost_net_stop() is necessary, but not sufficient - let's continue to look at what thread B does after its call to qmp_set_link() returns:
net_vhost_user_event
vhost_user_stop
vhost_net_cleanup
vhost_dev_cleanup
However, in main() qemu registers net_cleanup() via atexit(), which does:
net_cleanup
qemu_del_nic (or qemu_del_net_client, depending on ->type)
qemu_cleanup_net_client
vhost_user_cleanup (via ->cleanup)
vhost_net_cleanup
vhost_dev_cleanup
and the duplicate vhost_dev_cleanup fails assertions since things were already cleaned up. Additionally, if thread B's call to vhost_dev_cleanup() comes before thread A finishes vhost_net_stop(), then vhost_net_stop() will go on to call vhost_dev_stop() and vhost_disable_notifiers(), which both try to access things that have been freed/cleared/disabled by vhost_dev_cleanup(). |
[impact]
on shutdown of a guest, there is a race condition that results in qemu crashing instead of shutting down normally. The bt looks similar to this (depending on the specific version of qemu, of course; this is taken from the 2.5 version of qemu):
(gdb) bt
#0 __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
#1 0x00005636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
#2 0x00005636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at /build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
#3 0x00005636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, dev=0x5636c1bf6b70)
at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
#4 0x00005636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, ring=0x7ffe65c087e0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
#5 0x00005636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
#6 0x00005636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
#7 0x00005636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, dev=dev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
#8 0x00005636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, ncs=0x5636c209d110, total_queues=total_queues@entry=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
#9 0x00005636c08d7745 in virtio_net_vhost_status (status=7 '\a', n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
#10 virtio_net_set_status (vdev=<optimized out>, status=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
#11 0x00005636c08ec42c in virtio_set_status (vdev=0x5636c2853338, val=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
#12 0x00005636c098fed2 in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
#13 0x00005636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
#14 vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
#15 0x00005636c085d240 in main_loop_should_exit () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
#16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
#17 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683
[test case]
unfortunately, since this is a race condition, it's very hard to reproduce on demand; it depends very much on the overall configuration of the guest as well as how exactly it's shut down - specifically, its vhost-user net must be closed from the host side at a specific time during qemu shutdown.
I have a user with such a setup who has reported that they can reproduce this reliably, but the configuration is too complex for me to replicate, so I have relied on their reproduction and testing to debug and craft the patch for this.
[regression potential]
the change adds a flag to prevent repeated calls to vhost_net_stop(). This also prevents any calls to vhost_net_cleanup() from net_vhost_user_event(). Any regression would be seen when stopping and/or cleaning up a vhost net. Regressions might include failure to hot-remove a vhost net from a guest, failure to clean up (i.e. a memory leak), or crashes while stopping or cleaning up a vhost net.
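For contrast with the earlier two-flag approach, here is a compact standalone sketch of this single-flag variant. The names and the exact short-circuit mechanism are assumptions made for illustration; the real change is in the linked merge proposal:

#include <stdbool.h>
#include <stdio.h>

static bool stopped; /* hypothetical guard on the stop path */

static void stop_sketch(void)
{
    if (stopped) {
        puts("duplicate stop ignored");
        return;
    }
    stopped = true;
    puts("stopping vhost net");
}

static void chr_closed_event_sketch(void)
{
    if (stopped) {
        /* device already stopped by the vm_stop path: return without
         * reaching vhost_net_cleanup(), avoiding the double cleanup
         * (assumed mechanism, for illustration only) */
        return;
    }
    stop_sketch();
    /* ... vhost_user_stop() -> vhost_net_cleanup() would follow ... */
}

int main(void)
{
    stop_sketch();             /* thread A: vm_stop path */
    chr_closed_event_sketch(); /* thread B: CHR_EVENT_CLOSED, now a no-op */
    return 0;
}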
[other info]
this was originally seen in the 2.5 version of qemu - specifically, the UCA version in trusty-mitaka (which uses the xenial qemu codebase).
After discussion upstream, it appears this was fixed upstream by commit e7c83a885f8, which is included starting in version 2.9. However, this commit depends on at least commit 5345fdb4467, and likely other earlier commits as well, which make widespread code changes and are unsuitable to backport. Therefore this seems like it should be specifically worked around in the Xenial qemu codebase.
The specific race condition for this (in the qemu 2.5 code version) is:
as shown in above bt, thread A starts shutting down qemu, e.g.:
vm_stop->do_vm_stop->vm_state_notify
virtio_set_status
virtio_net_set_status
virtio_net_vhost_status
in this function, the code reaches an if-else check on (!n->vhost_started); the condition is false (i.e. vhost_started is true), so it enters the else block, which calls vhost_net_stop() and then sets n->vhost_started to false.
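A standalone sketch of that control flow (paraphrased and heavily simplified from the qemu 2.5 source, with stub types in place of qemu's), showing why a second caller entering during the stop takes the same branch:

#include <stdbool.h>
#include <stdio.h>

/* Stub standing in for qemu's VirtIONet; only the raced field is kept. */
typedef struct { bool vhost_started; } VirtIONetSketch;

static void vhost_net_stop_sketch(VirtIONetSketch *n)
{
    (void)n;
    /* long-running teardown; in the bug, thread B runs during this */
    puts("stopping vhost net");
}

static void virtio_net_vhost_status_sketch(VirtIONetSketch *n)
{
    if (!n->vhost_started) {
        /* start path (elided) */
        n->vhost_started = true;
    } else {
        /* vhost_started stays true until vhost_net_stop() returns, so
         * any thread entering here in that window also stops the net */
        vhost_net_stop_sketch(n);
        n->vhost_started = false;
    }
}

int main(void)
{
    VirtIONetSketch n = { .vhost_started = true };
    virtio_net_vhost_status_sketch(&n); /* thread A's shutdown path */
    /* thread B entering before vhost_started flips would duplicate
     * the stop - the crash window described in this report */
    return 0;
}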
While thread A is inside vhost_net_stop(), thread B is triggered by the vhost net chr handler with a user event and calls:
net_vhost_user_event
qmp_set_link (from case CHR_EVENT_CLOSED)
virtio_net_set_link_status (via ->link_status_changed)
virtio_net_set_status
virtio_net_vhost_status
notice thread B has now reached the same function that thread A is in; since the checks in the function have not changed, thread B follows the same path that thread A followed, and enters vhost_net_stop().
Since thread A has already shut down and cleaned up some of the internals, once thread B starts trying to also clean up things, it segfaults as shown in the bt.
Avoiding this duplicate call to vhost_net_stop() is necessary, but not sufficient - let's continue to look at what thread B does after its call to qmp_set_link() returns:
net_vhost_user_event
vhost_user_stop
vhost_net_cleanup
vhost_dev_cleanup
However, in main() qemu registers net_cleanup() via atexit(), which does:
net_cleanup
qemu_del_nic (or qemu_del_net_client, depending on ->type)
qemu_cleanup_net_client
vhost_user_cleanup (via ->cleanup)
vhost_net_cleanup
vhost_dev_cleanup
and the duplicate vhost_dev_cleanup fails assertions since things were already cleaned up. Additionally, if thread B's call to vhost_dev_cleanup() comes before thread A finishes vhost_net_stop(), then vhost_net_stop() will go on to call vhost_dev_stop() and vhost_disable_notifiers(), which both try to access things that have been freed/cleared/disabled by vhost_dev_cleanup(). |
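To make the double-cleanup failure mode concrete, here is a toy, self-contained reproduction of the pattern (illustrative names only, not qemu's code; it aborts on the second cleanup at exit, analogous to the failed assertions described above):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static bool dev_valid = true;

static void dev_cleanup_toy(void)
{
    assert(dev_valid); /* the second call trips this, as in the bug */
    dev_valid = false;
    puts("cleaned up vhost dev");
}

static void net_cleanup_toy(void)
{
    dev_cleanup_toy();
}

int main(void)
{
    atexit(net_cleanup_toy); /* mirrors qemu registering net_cleanup() */
    dev_cleanup_toy();       /* the chr CHR_EVENT_CLOSED handler path */
    return 0;                /* atexit hook cleans up again -> assert fires */
}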
|
2019-04-23 10:21:52 |
Launchpad Janitor |
merge proposal linked |
|
https://code.launchpad.net/~ddstreet/ubuntu/+source/qemu/+git/qemu/+merge/366392 |
|
2019-04-24 14:00:10 |
Robie Basak |
qemu (Ubuntu Xenial): status |
In Progress |
Fix Committed |
|
2019-04-24 14:00:12 |
Robie Basak |
bug |
|
|
added subscriber Ubuntu Stable Release Updates Team |
2019-04-24 14:00:14 |
Robie Basak |
bug |
|
|
added subscriber SRU Verification |
2019-04-24 14:00:19 |
Robie Basak |
tags |
|
verification-needed verification-needed-xenial |
|
2019-04-24 15:40:33 |
Corey Bryant |
bug task added |
|
cloud-archive |
|
2019-04-24 15:40:43 |
Corey Bryant |
nominated for series |
|
cloud-archive/mitaka |
|
2019-04-24 15:40:43 |
Corey Bryant |
bug task added |
|
cloud-archive/mitaka |
|
2019-04-24 15:40:51 |
Corey Bryant |
cloud-archive/mitaka: importance |
Undecided |
Medium |
|
2019-04-24 15:40:57 |
Corey Bryant |
cloud-archive/mitaka: status |
New |
Triaged |
|
2019-04-24 16:03:45 |
Corey Bryant |
nominated for series |
|
cloud-archive/ocata |
|
2019-04-24 16:03:45 |
Corey Bryant |
bug task added |
|
cloud-archive/ocata |
|
2019-04-24 16:04:17 |
Corey Bryant |
cloud-archive/ocata: importance |
Undecided |
Medium |
|
2019-04-24 16:04:17 |
Corey Bryant |
cloud-archive/ocata: status |
New |
Triaged |
|
2019-04-24 16:04:38 |
Corey Bryant |
cloud-archive: status |
New |
Fix Released |
|
2019-04-24 16:39:05 |
Dan Streetman |
attachment added |
|
lp1823458-ocata.debdiff https://bugs.launchpad.net/cloud-archive/+bug/1823458/+attachment/5258683/+files/lp1823458-ocata.debdiff |
|
2019-04-24 20:53:10 |
Corey Bryant |
cloud-archive/mitaka: status |
Triaged |
Fix Committed |
|
2019-04-24 20:53:11 |
Corey Bryant |
tags |
verification-needed verification-needed-xenial |
verification-mitaka-needed verification-needed verification-needed-xenial |
|
2019-04-24 20:54:35 |
Corey Bryant |
cloud-archive/ocata: status |
Triaged |
Fix Committed |
|
2019-04-24 20:54:38 |
Corey Bryant |
tags |
verification-mitaka-needed verification-needed verification-needed-xenial |
verification-mitaka-needed verification-needed verification-needed-xenial verification-ocata-needed |
|
2019-04-30 10:08:08 |
Dan Streetman |
tags |
verification-mitaka-needed verification-needed verification-needed-xenial verification-ocata-needed |
verification-done verification-done-xenial verification-mitaka-done verification-ocata-done |
|
2019-05-07 19:11:56 |
Brian Murray |
qemu (Ubuntu Xenial): status |
Fix Committed |
Incomplete |
|
2019-05-10 14:40:19 |
Dan Streetman |
qemu (Ubuntu Xenial): status |
Incomplete |
Fix Committed |
|
2019-05-13 10:54:48 |
Łukasz Zemczak |
removed subscriber Ubuntu Stable Release Updates Team |
|
|
|
2019-05-13 11:04:51 |
Launchpad Janitor |
qemu (Ubuntu Xenial): status |
Fix Committed |
Fix Released |
|
2019-05-13 18:37:55 |
Corey Bryant |
cloud-archive/ocata: status |
Fix Committed |
Fix Released |
|
2019-05-13 18:40:15 |
Corey Bryant |
cloud-archive/mitaka: status |
Fix Committed |
Fix Released |
|