On Fri, 04/22 18:55, Matthew Schumacher wrote:
> Running master as of this morning 4/22 and I'm not getting any more
> crashes, and I'm flat beating on it. RC3 still crashes on me, so
> whatever the fix is, came after rc3.

Matthew,

It was bcd82a9..ab27c3b from last Friday (yes, after -rc3). Thank you so
much for your reporting and testing.

Fam

>
> --
> You received this bug notification because you are a member of qemu-
> devel-ml, which is subscribed to QEMU.
> https://bugs.launchpad.net/bugs/1570134
>
> Title:
>   While committing snapshot qemu crashes with SIGABRT
>
> Status in QEMU:
>   New
>
> Bug description:
>   Information:
>
>   OS: Slackware64-Current
>   Compiled with: gcc version 5.3.0 (GCC) / glibc 2.23
>   Compiled using:
>
>   CFLAGS="-O2 -fPIC" \
>   CXXFLAGS="-O2 -fPIC" \
>   LDFLAGS="-L/usr/lib64" \
>   ./configure \
>   --prefix=/usr \
>   --sysconfdir=/etc \
>   --localstatedir=/var \
>   --libdir=/usr/lib64 \
>   --enable-spice \
>   --enable-kvm \
>   --enable-glusterfs \
>   --enable-libiscsi \
>   --enable-libusb \
>   --target-list=x86_64-softmmu,i386-softmmu \
>   --enable-debug
>
>   Source: qemu-2.5.1.tar.bz2
>
>   Running as:
>
>   /usr/bin/qemu-system-x86_64 -name test1,debug-threads=on -S \
>     -machine pc-1.1,accel=kvm,usb=off -m 4096 -realtime mlock=off \
>     -smp 2,sockets=2,cores=1,threads=1 \
>     -uuid 4b30ec13-6609-4a56-8731-d400c38189ef -no-user-config -nodefaults \
>     -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-4-test1/monitor.sock,server,nowait \
>     -mon chardev=charmonitor,id=monitor,mode=control \
>     -rtc base=localtime,clock=vm,driftfix=slew \
>     -global kvm-pit.lost_tick_policy=discard -no-shutdown -boot strict=on \
>     -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
>     -drive file=/datastore/vm/test1/test1.img,format=qcow2,if=none,id=drive-virtio-disk0 \
>     -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 \
>     -drive if=none,id=drive-ide0-1-0,readonly=on \
>     -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 \
>     -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 \
>     -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:66:2e:0f,bus=pci.0,addr=0x3 \
>     -vnc 0.0.0.0:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
>     -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
>
>   File system: zfs v0.6.5.6
>
>   While running:
>   virsh blockcommit test1 vda --active --pivot --verbose
>
>   VM running very heavy IO load
>
>   GDB reporting:
>
>   #0  0x00007fd80132c3f8 in raise () at /lib64/libc.so.6
>   #1  0x00007fd80132dffa in abort () at /lib64/libc.so.6
>   #2  0x00007fd801324c17 in __assert_fail_base () at /lib64/libc.so.6
>   #3  0x00007fd801324cc2 in () at /lib64/libc.so.6
>   #4  0x000055d9918d7572 in bdrv_replace_in_backing_chain (old=0x55d993ed9c10, new=0x55d9931ccc10) at block.c:2096
>           __PRETTY_FUNCTION__ = "bdrv_replace_in_backing_chain"
>   #5  0x000055d991911869 in mirror_exit (job=0x55d993fef830, opaque=0x55d999bbefe0) at block/mirror.c:376
>           to_replace = 0x55d993ed9c10
>           s = 0x55d993fef830
>           data = 0x55d999bbefe0
>           replace_aio_context = <optimized out>
>           src = 0x55d993ed9c10
>   #6  0x000055d9918da1dc in block_job_defer_to_main_loop_bh (opaque=0x55d9940ce850) at blockjob.c:481
>           data = 0x55d9940ce850
>           aio_context = 0x55d9931a2610
>   #7  0x000055d9918d014b in aio_bh_poll (ctx=ctx@entry=0x55d9931a2610) at async.c:92
>           bh = <optimized out>
>           bhp = <optimized out>
>           next = 0x55d99440f910
>           ret = 1
>   #8  0x000055d9918dc8c0 in aio_dispatch (ctx=0x55d9931a2610) at aio-posix.c:305
>           node = <optimized out>
>           progress = false
>   #9  0x000055d9918d000e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at async.c:231
>           ctx = <optimized out>
>   #10 0x00007fd8037cf787 in g_main_context_dispatch () at /usr/lib64/libglib-2.0.so.0
>   #11 0x000055d9918db03b in main_loop_wait () at main-loop.c:211
>           context = 0x55d9931a3200
>           pfds = <optimized out>
>           ret = 0
>           spin_counter = 1
>           ret = 0
>           timeout = 4294967295
>           timeout_ns = <optimized out>
>   #12 0x000055d9918db03b in main_loop_wait (timeout=<optimized out>) at main-loop.c:256
>           ret = 0
>           spin_counter = 1
>           ret = 0
>           timeout = 4294967295
>           timeout_ns = <optimized out>
>   #13 0x000055d9918db03b in main_loop_wait (nonblocking=<optimized out>) at main-loop.c:504
>           ret = 0
>           timeout = 4294967295
>           timeout_ns = <optimized out>
>   #14 0x000055d991679cc4 in main () at vl.c:1923
>           nonblocking = <optimized out>
>           last_io = 2
>           i = <optimized out>
>           snapshot = <optimized out>
>           linux_boot = <optimized out>
>           initrd_filename = <optimized out>
>           kernel_filename = <optimized out>
>           kernel_cmdline = <optimized out>
>           boot_order = <optimized out>
>           boot_once = <optimized out>
>           ds = <optimized out>
>           cyls = <optimized out>
>           heads = <optimized out>
>           secs = <optimized out>
>           translation = <optimized out>
>           hda_opts = <optimized out>
>           opts = <optimized out>
>           machine_opts = <optimized out>
>           icount_opts = <optimized out>
>           olist = <optimized out>
>           optind = 49
>           optarg = 0x7fffc6d27f43 "timestamp=on"
>           loadvm = <optimized out>
>           machine_class = 0x55d993194d10
>           cpu_model = <optimized out>
>           vga_model = 0x0
>           qtest_chrdev = <optimized out>
>           qtest_log = <optimized out>
>           pid_file = <optimized out>
>           incoming = <optimized out>
>           defconfig = <optimized out>
>           userconfig = false
>           log_mask = <optimized out>
>           log_file = <optimized out>
>           trace_events = <optimized out>
>           trace_file = <optimized out>
>           maxram_size = <optimized out>
>           ram_slots = <optimized out>
>           vmstate_dump_file = <optimized out>
>           main_loop_err = 0x0
>           err = 0x0
>           __func__ = "main"
>   #15 0x000055d991679cc4 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4699
>           i = <optimized out>
>           snapshot = <optimized out>
>           linux_boot = <optimized out>
>           initrd_filename = <optimized out>
>           kernel_filename = <optimized out>
>           kernel_cmdline = <optimized out>
>           boot_order = <optimized out>
>           boot_once = <optimized out>
>           ds = <optimized out>
>           cyls = <optimized out>
>           heads = <optimized out>
>           secs = <optimized out>
>           translation = <optimized out>
>           hda_opts = <optimized out>
>           opts = <optimized out>
>           machine_opts = <optimized out>
>           icount_opts = <optimized out>
>           olist = <optimized out>
>           optind = 49
>           optarg = 0x7fffc6d27f43 "timestamp=on"
>           loadvm = <optimized out>
>           machine_class = 0x55d993194d10
>           cpu_model = <optimized out>
>           vga_model = 0x0
>           qtest_chrdev = <optimized out>
>           qtest_log = <optimized out>
>           pid_file = <optimized out>
>           incoming = <optimized out>
>           defconfig = <optimized out>
>           userconfig = false
>           log_mask = <optimized out>
>           log_file = <optimized out>
>           trace_events = <optimized out>
>           trace_file = <optimized out>
>           maxram_size = <optimized out>
>           ram_slots = <optimized out>
>           vmstate_dump_file = <optimized out>
>           main_loop_err = 0x0
>           err = 0x0
>           __func__ = "main"
>
>
>   I can reproduce this at will, and can provide more information per a
>   dev's request.
>
> To manage notifications about this bug go to:
> https://bugs.launchpad.net/qemu/+bug/1570134/+subscriptions
>