virtio_scsi_ctx_check failed when detach virtio_scsi disk

Bug #1836855 reported by 贞贵李
This bug affects 1 person

Affects: QEMU
Status: Expired
Importance: Undecided
Assigned to: Unassigned

Bug Description

I found a problem that virtio_scsi_ctx_check failed when detaching a virtio_scsi disk. The backtrace is below:

(gdb) bt
#0 0x0000ffffb02e1bd0 in raise () from /lib64/libc.so.6
#1 0x0000ffffb02e2f7c in abort () from /lib64/libc.so.6
#2 0x0000ffffb02db124 in __assert_fail_base () from /lib64/libc.so.6
#3 0x0000ffffb02db1a4 in __assert_fail () from /lib64/libc.so.6
#4 0x00000000004eb9a8 in virtio_scsi_ctx_check (d=d@entry=0xc70d790, s=<optimized out>, s=<optimized out>)
    at /Images/lzg/code/710/qemu-2.8.1/hw/scsi/virtio-scsi.c:243
#5 0x00000000004ec87c in virtio_scsi_handle_cmd_req_prepare (s=s@entry=0xd27a7a0, req=req@entry=0xafc4b90)
    at /Images/lzg/code/710/qemu-2.8.1/hw/scsi/virtio-scsi.c:553
#6 0x00000000004ecc20 in virtio_scsi_handle_cmd_vq (s=0xd27a7a0, vq=0xd283410)
    at /Images/lzg/code/710/qemu-2.8.1/hw/scsi/virtio-scsi.c:588
#7 0x00000000004eda20 in virtio_scsi_data_plane_handle_cmd (vdev=0x0, vq=0xffffae7a6f98)
    at /Images/lzg/code/710/qemu-2.8.1/hw/scsi/virtio-scsi-dataplane.c:57
#8 0x0000000000877254 in aio_dispatch (ctx=0xac61010) at util/aio-posix.c:323
#9 0x00000000008773ec in aio_poll (ctx=0xac61010, blocking=true) at util/aio-posix.c:472
#10 0x00000000005cd7cc in iothread_run (opaque=0xac5e4b0) at iothread.c:49
#11 0x000000000087a8b8 in qemu_thread_start (args=0xac61360) at util/qemu-thread-posix.c:495
#12 0x00000000008a04e8 in thread_entry_for_hotfix (pthread_cb=0x0) at uvp/hotpatch/qemu_hotpatch_helper.c:579
#13 0x0000ffffb041c8bc in start_thread () from /lib64/libpthread.so.0
#14 0x0000ffffb0382f8c in thread_start () from /lib64/libc.so.6

assert(blk_get_aio_context(d->conf.blk) == s->ctx) failed.
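
For reference, the failing check in hw/scsi/virtio-scsi.c looks roughly like the sketch below (paraphrased from QEMU 2.8-era sources; the exact signature and guard conditions may differ). It enforces the invariant that, while the dataplane is active, the device's BlockBackend lives in the same IOThread AioContext (s->ctx) that processes the virtqueues:

    static inline void virtio_scsi_ctx_check(VirtIOSCSI *s, SCSIDevice *d)
    {
        /* Only meaningful while the IOThread dataplane is handling requests. */
        if (s->dataplane_started && d && blk_is_available(d->conf.blk)) {
            /* The device's BlockBackend must be in the dataplane's AioContext. */
            assert(blk_get_aio_context(d->conf.blk) == s->ctx);
        }
    }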

I think this patch (https://git.qemu.org/?p=qemu.git;a=commitdiff;h=a6f230c8d13a7ff3a0c7f1097412f44bfd9eff0b) introduced this problem.

Commit a6f230c8d13a7ff3a0c7f1097412f44bfd9eff0b moves the BlockBackend back to the main AioContext on unplug. It sets the AioContext of the SCSIDevice to the main AioContext, but s->ctx is still the iothread AioContext. Is this a bug?
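
Based only on that commit's subject line (not its actual diff), the unplug path conceptually does something like the sketch below; the helper name is hypothetical and the real call site sits in the SCSI/virtio unrealize code:

    /* Conceptual sketch only, not the commit's actual code. */
    static void unplug_move_blk_to_main_context(SCSIDevice *d)  /* hypothetical helper */
    {
        if (d->conf.blk) {
            /* Hand the BlockBackend back to the main loop's AioContext.
             * From now on blk_get_aio_context(d->conf.blk) returns the main
             * context, while the virtio-scsi dataplane still compares it
             * against s->ctx (the IOThread context), so any command the
             * IOThread still processes for this device trips the assertion. */
            blk_set_aio_context(d->conf.blk, qemu_get_aio_context());
        }
    }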

Revision history for this message
Stefan Hajnoczi (stefanha) wrote : Re: [Qemu-devel] [Bug 1836855] [NEW] virtio_scsi_ctx_check failed when detach virtio_scsi disk

On Wed, Jul 17, 2019 at 08:20:35AM -0000, 贞贵李 wrote:
> [...]
>
> assert(blk_get_aio_context(d->conf.blk) == s->ctx) failed.
>
> I think this patch
> (https://git.qemu.org/?p=qemu.git;a=commitdiff;h=a6f230c8d13a7ff3a0c7f1097412f44bfd9eff0b)
> introduced this problem.
>
> Commit a6f230c8d13a7ff3a0c7f1097412f44bfd9eff0b moves the BlockBackend back
> to the main AioContext on unplug. It sets the AioContext of the SCSIDevice
> to the main AioContext, but s->ctx is still the iothread AioContext. Is
> this a bug?

The backtrace shows that virtqueue processing is happening in the
IOThread. This is expected, so the question now is why the
BlockBackend's AioContext is the main AioContext.

Can you share steps for reproducing this bug?

Thanks!
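
One way to see when and by whom the BlockBackend gets moved to the main AioContext would be to add a trace where the context switch happens. A minimal sketch (illustrative only, not an upstream patch; it assumes the QEMU 2.8-era two-argument blk_set_aio_context() in block/block-backend.c, and only the added lines are shown):

    void blk_set_aio_context(BlockBackend *blk, AioContext *new_context)
    {
        /* Added trace: report each BlockBackend move and its destination. */
        fprintf(stderr, "blk %p -> AioContext %p (main loop: %s)\n",
                blk, new_context,
                new_context == qemu_get_aio_context() ? "yes" : "no");

        /* ... existing body unchanged ... */
    }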


Revision history for this message
Thomas Huth (th-huth) wrote :

The QEMU project is currently considering moving its bug tracking to
another system. For this we need to know which bugs are still valid
and which can already be closed, so we are setting older bugs to
"Incomplete" now.

If you still think this bug report is valid, please switch the state
back to "New" within the next 60 days; otherwise this report will be
marked as "Expired". Or please mark it as "Fix Released" if the
problem has already been solved in a newer version of QEMU.

Thank you and sorry for the inconvenience.

Changed in qemu:
status: New → Incomplete
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for QEMU because there has been no activity for 60 days.]

Changed in qemu:
status: Incomplete → Expired