Cannot complete snapshot if read-only backing store is opened by another VM

Bug #1837869 reported by Louis Bouchard on 2019-07-25
This bug affects 2 people
Affects                 Importance   Assigned to
qemu (Ubuntu)           Undecided    Unassigned
qemu (Ubuntu Bionic)    Undecided    Unassigned
qemu (Ubuntu Disco)     Undecided    Unassigned

Bug Description

[Impact]

 * Certain actions need to open/reopen files e.g. when doing a snapshot it
   will reopen the old file as backing image. Those cases can fail due to
   a bug in the flag handling.

 * The fix is a backport of an upstream fix that fixes the options that
   were lost on recursing through images.
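
   The class of bug can be illustrated with a toy model (plain Python, not
   QEMU code; all names invented): when queueing a backing chain for reopen,
   the child's own inherited options were recomputed from the parent's flags
   instead of being carried over, so a read-only backing file could end up
   queued for reopen as writable.

   ```python
   # Toy model (NOT QEMU code) of the bug class fixed by
   # "block: Fix flags in reopen queue": options lost while recursing.

   class Node:
       def __init__(self, name, options, backing=None):
           self.name = name
           self.options = options      # e.g. {"read-only": True}
           self.backing = backing

   def queue_reopen(node, queue, inherit_options):
       """Recursively queue a node and its backing chain for reopen."""
       if inherit_options:
           opts = dict(node.options)   # fixed: keep the node's own options
       else:
           opts = {"read-only": False} # buggy: recomputed from parent flags
       queue.append((node.name, opts))
       if node.backing:
           queue_reopen(node.backing, queue, inherit_options)
       return queue

   base = Node("base.qcow2", {"read-only": True})
   mid = Node("middle-vm01.img", {"read-only": True}, backing=base)
   top = Node("top-vm01.img", {"read-only": False}, backing=mid)

   buggy = queue_reopen(top, [], inherit_options=False)
   fixed = queue_reopen(top, [], inherit_options=True)
   ```

   In the buggy variant the read-only base image would be reopened writable,
   which is exactly when a second process holding the image triggers the
   lock conflict.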

[Test Case]

 * There is a great test description in the initial report, which I
   extended into a script. Running the attached script should succeed once
   the fix is applied.
   Summary: set up image files, do a snapshot via QMP.

[Regression Potential]

 * There were a lot of changes in this area, and chances are that there are
   side effects from picking only this change compared to adding up all the
   other changes. In my testing the isolated single patch worked fine for
   the case shown here and through regression tests, while the bigger set
   of changes is harder to review/understand and showed some new hangs
   (from more side effects). So there is a risk, but I'd hope we chose the
   most sane and reviewable approach.

[Other Info]

 * n/a

---

On Bionic, qemu complains that it cannot acquire the write lock when committing a snapshot if a read-only backing store is opened by another qemu process. This behavior does not happen with version 2.12 in Cosmic.

Reproducer
==========
Create two QCOW2 containers sharing the same base file as a backing store:

                base.qcow2
               /          \
    middle-vm01.img    middle-vm02.img
       /         \
  top-vm01.img  top-vm02.img

# cat mkimage
#!/bin/bash
qemu-img create -f qcow2 base.qcow2 10G
qemu-img create -f qcow2 -b base.qcow2 middle-vm01.img 10G
qemu-img create -f qcow2 -b base.qcow2 middle-vm02.img 10G
qemu-img create -f qcow2 -b middle-vm01.img top-vm01.img 10G
qemu-img create -f qcow2 -b middle-vm01.img top-vm02.img 10G

Start two VMs, each using its own top-vm{id}.img:

# cat runvm
#!/bin/bash

qemu-system-x86_64 -nographic -qmp unix:./qmp-1.sock,server,nowait -enable-kvm -device virtio-scsi-pci,id=scsi -device sga -nodefaults -monitor none -m 256M -drive file=./top-vm01.img,if=virtio,id=disk0 -smp 1 -smbios type=1,manufacturer=test&
qemu-system-x86_64 -nographic -qmp unix:./qmp-2.sock,server,nowait -enable-kvm -device virtio-scsi-pci,id=scsi -device sga -nodefaults -monitor none -m 256M -drive file=./top-vm02.img,if=virtio,id=disk0 -smp 1 -smbios type=1,manufacturer=test&

Create a snapshot

./scripts/qmp/qmp-shell ./qmp-1.sock
Welcome to the QMP low-level shell!
Connected to QEMU 2.11.1

(QEMU) blockdev-snapshot-sync device=disk0 snapshot-file=tmp.qcow2 format=qcow2
Formatting 'tmp.qcow2', fmt=qcow2 size=10737418240 backing_file=./top-vm01.img backing_fmt=qcow2 cluster_size=65536 lazy_refcounts=off refcount_bits=16
{"return": {}}

Commit the snapshot
(QEMU) block-commit device=disk0 base=top-vm01.img
{"error": {"class": "GenericError", "desc": "Failed to get \"write\" lock"}}
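
The qmp-shell one-liners above correspond to raw QMP JSON commands on the unix socket. A minimal sketch of the wire format (note that a real client connecting to ./qmp-1.sock must first read the greeting and send {"execute": "qmp_capabilities"} before any other command):

```python
import json

def qmp_cmd(name, arguments):
    """Serialize a QMP command as it would be written to the QMP socket."""
    return json.dumps({"execute": name, "arguments": arguments})

snapshot = qmp_cmd("blockdev-snapshot-sync", {
    "device": "disk0",
    "snapshot-file": "tmp.qcow2",
    "format": "qcow2",
})
commit = qmp_cmd("block-commit", {"device": "disk0", "base": "top-vm01.img"})
```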

Expected Behavior
=================
The commit should complete successfully, as the base.qcow2 backing store is opened read-only, so no write lock is required.

Current Behavior
================
The commit fails with: Failed to get "write" lock
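
The contention behind that error can be mimicked outside qemu. QEMU's file-posix driver takes locks on the image file itself; the sketch below uses flock() as an analogous mechanism (QEMU actually uses fcntl byte-range locks, but both belong to the open file description, so two independent descriptors contend the same way two VM processes do):

```python
import fcntl
import os
import tempfile

# Create a stand-in "image" file.
path = os.path.join(tempfile.mkdtemp(), "image.qcow2")
open(path, "wb").close()

holder = open(path, "rb+")          # first "VM" takes the exclusive lock
fcntl.flock(holder, fcntl.LOCK_EX | fcntl.LOCK_NB)

contender = open(path, "rb+")       # second "VM" tries to lock it too
try:
    fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
    got_lock = True
except BlockingIOError:
    got_lock = False                # analogous to the QMP GenericError
```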

Louis Bouchard (louis) on 2019-07-25
Changed in qemu (Ubuntu Bionic):
status: New → Confirmed
Changed in qemu (Ubuntu Disco):
status: New → Fix Released
Paride Legovini (paride) on 2019-07-29
tags: added: server-triage-discuss

Hi Louis o/,
glad to see you again.

This was a huge source of issues back when it was added in Bionic's qemu.
I'm not surprised that there were still issues left to be uncovered.

You flagged it as fixed in Disco, do you happen to know a particular patch already or just that the behavior doesn't appear there for you when testing?

tags: removed: server-triage-discuss

I'll try to recreate this later on, thanks for the detailed test steps

description: updated

Confirmed by testing with the steps provided.
I'll check git what changes we have had in that regard.

Changed in qemu (Ubuntu Bionic):
status: Confirmed → Triaged
Clément LAFORET (sheepkiller) wrote:

Hi Christian,

I'm a Louis' coworker and I also worked on this issue.
We first did some tests starting from 2.12 to find a workaround with qemu-img. It seemed that this version of qemu was immune to our bug. Then, we backport packages from cosmic and reproduce our production issue.
I'm not familiar enough with the qemu code and I was unable to find the commit which fixes the issue.

Thanks

Clément LAFORET (sheepkiller) wrote:

Hi again,
Here's an errata.

It's not :
Then, we backport packages from cosmic and reproduce our production issue.

But
Then, we backport packages from cosmic and try to reproduce our production issue.

And qemu worked as expected

So it is already fixed in 2.12, and not only later in 3.1 in Disco.
Thanks Clement, that reduces the candidates for commits that we look for.

P.S. seeing that nick, poor little sheep :-/ sigh

Certainly not one of the cases of the initial discussions [1][2].
You already eliminated [2] with a pure qemu testcase and snapshots off the same base image were not mentioned in [1].

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1378241
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=1378242

I saw nothing too obvious (by subject/description) in the commits between 2.11 and 2.12.
I'll throw things at gdb to identify all involved areas and then check git history on that a bit deeper (but only later today).


#0 raw_handle_perm_lock (bs=0x560aad12b550, op=RAW_PL_PREPARE, new_perm=11, new_shared=21, errp=0x7ffc6ff54090) at ./block/file-posix.c:722
#1 0x0000560aac2b9e8e in bdrv_check_perm (bs=bs@entry=0x560aad12b550, q=0x560aadd834a0, q@entry=0x90af93e91d77c000, cumulative_perms=11, cumulative_shared_perms=<optimized out>,
    ignore_children=ignore_children@entry=0x560aad0c7e70, errp=errp@entry=0x7ffc6ff54090) at ./block.c:1655
#2 0x0000560aac2b9d05 in bdrv_check_update_perm (bs=0x560aad12b550, q=0x90af93e91d77c000, q@entry=0x560aadd834a0, new_used_perm=new_used_perm@entry=11,
    new_shared_perm=new_shared_perm@entry=21, ignore_children=ignore_children@entry=0x560aad0c7e70, errp=errp@entry=0x7ffc6ff54090) at ./block.c:1841
#3 0x0000560aac2b9f54 in bdrv_child_check_perm (errp=0x7ffc6ff54090, ignore_children=0x560aad0c7e70, shared=<optimized out>, perm=11, q=0x560aadd834a0, c=0x560aad0566a0) at ./block.c:1854
#4 bdrv_check_perm (bs=0x560aad125250, bs@entry=0xb, q=0x560aadd834a0, q@entry=0x90af93e91d77c000, cumulative_perms=1, cumulative_shared_perms=21,
    ignore_children=ignore_children@entry=0x560aad03e700, errp=0x7ffc6ff54090, errp@entry=0x15) at ./block.c:1671
#5 0x0000560aac2b9d05 in bdrv_check_update_perm (bs=0xb, q=0x90af93e91d77c000, q@entry=0x560aadd834a0, new_used_perm=new_used_perm@entry=1, new_shared_perm=new_shared_perm@entry=21,
    ignore_children=ignore_children@entry=0x560aad03e700, errp=0x15, errp@entry=0x7ffc6ff54090) at ./block.c:1841
#6 0x0000560aac2b9f54 in bdrv_child_check_perm (errp=0x7ffc6ff54090, ignore_children=0x560aad03e700, shared=<optimized out>, perm=1, q=0x560aadd834a0, c=0x560aad06e800) at ./block.c:1854
#7 bdrv_check_perm (bs=0x560aad105750, bs@entry=0x1, q=0x560aadd834a0, q@entry=0x90af93e91d77c000, cumulative_perms=1, cumulative_shared_perms=21,
    ignore_children=ignore_children@entry=0x560aad03e570, errp=0x7ffc6ff54090, errp@entry=0x15) at ./block.c:1671
#8 0x0000560aac2b9d05 in bdrv_check_update_perm (bs=0x1, q=0x90af93e91d77c000, q@entry=0x560aadd834a0, new_used_perm=new_used_perm@entry=1, new_shared_perm=new_shared_perm@entry=21,
    ignore_children=ignore_children@entry=0x560aad03e570, errp=0x15, errp@entry=0x7ffc6ff54090) at ./block.c:1841
#9 0x0000560aac2b9f54 in bdrv_child_check_perm (errp=0x7ffc6ff54090, ignore_children=0x560aad03e570, shared=<optimized out>, perm=1, q=0x560aadd834a0, c=0x560aad03e200) at ./block.c:1854
#10 bdrv_check_perm (bs=0x560aad0e53d0, q=q@entry=0x560aadd834a0, cumulative_perms=1, cumulative_shared_perms=21, ignore_children=ignore_children@entry=0x0, errp=errp@entry=0x7ffc6ff54090)
    at ./block.c:1671
#11 0x0000560aac2bb7ea in bdrv_reopen_prepare (reopen_state=reopen_state@entry=0x560aadd83418, queue=queue@entry=0x560aadd834a0, errp=errp@entry=0x7ffc6ff54090) at ./block.c:3111
#12 0x0000560aac2bb94f in bdrv_reopen_multiple (ctx=<optimized out>, bs_queue=0x560aadd834a0, errp=errp@entry=0x7ffc6ff540f0) at ./block.c:2887
#13 0x0000560aac2bbacf in bdrv_reopen (bs=bs@entry=0x560aad0e53d0, bdrv_flags=<optimized out>, errp=errp@entry=0x7ffc6ff541f0) at ./block.c:2928
#14 0x0000560aac306f3e in commit_active_start (job_id=job_id@entry=0x0, bs=bs@entry=0x560aadf47890, b...


With the knowledge gathered above, the potential commits seem to be in one of the following groups.
The first two sets look more likely, but I listed all that seemed somewhat related.

e0995dc3 block: Add reopen_queue to bdrv_child_perm()
3121fb45 block: Add reopen queue to bdrv_check_perm()
148eb13c block: Base permissions on rw state after reopen
1857c97b block: reopen: Queue children after their parents

30450259 block: Fix permissions after bdrv_reopen()

6858eba0 block: Introduce BdrvChildRole.update_filename
61f09cea commit: Support multiple roots above top node

bde70715 commit: Remove overlay_bs

dafe0960 block: Fix permissions in image activation

Hmm, my commit range must have been bad, those are all already in 2.11
... again ...

Yep silly me, it was v2.10.0..v2.11.0, would be better with v2.11.0..v2.12.0 :-)

New candidates:
5fbfabd3 block: Formats don't need CONSISTENT_READ with NO_IO

cc954f01 block: Open backing image in force share mode for size probe

0152bf40 block: Don't notify parents in drain call chain
d736f119 block: Allow graph changes in subtree drained section
1a63a907 block: Keep nodes drained between reopen_queue/multiple

There also are some commits around switching to ".bdrv_co_block_status" and ".bdrv_co_create" but I'd like to avoid those as it seems a bigger overhaul.

We do implicit image creation via qmp in the testcase here:
(QEMU) blockdev-snapshot-sync device=disk0 snapshot-file=tmp.qcow2 format=qcow2
Formatting 'tmp.qcow2', fmt=qcow2 size=10737418240 backing_file=./top-vm01.img backing_fmt=qcow2 cluster_size=65536 lazy_refcounts=off refcount_bits=16
And there is a commit saying:
"This adds the .bdrv_co_create driver callback to file, which enables image creation over QMP."
But this seems to be an explicit create, the former implicit create might be ok and our issues are on the base file not the tmp.qcow2 anyway.

None of the above seem to be the perfect candidate, the one that looks somewhat closer to the issue is:
1a529736 block: Fix flags in reopen queue

Of these we already have on top of 2.11:
cc954f01 block: Open backing image in force share mode for size probe

I'm trying a build of Ubuntu's 2.11 with the smaller patches identified (no drain queue and no new ops used):
5fbfabd3 block: Formats don't need CONSISTENT_READ with NO_IO
1a529736 block: Fix flags in reopen queue

This is only a first stab in the dark via PPA [1], to eventually become more fine-grained and selective.

[1]: https://launchpad.net/~paelzer/+archive/ubuntu/bug-1837869-locking-of-snapshots

Hrm, that didn't help.
Now I can't even start the guests anymore:
  root@b:~# qemu-system-x86_64: -drive file=./top-vm02.img,if=virtio,id=disk0: Failed to get "write" lock

It seems I too blindly picked out of context for this :-/.
But I also don't have a lot of time for this right now.
Maybe I should just start bisecting it in the background ...

Let me know if you have identified anything quicker than me.

Verified to be resolved with recent git build
Verified to trigger with v2.11.0 git build
  (some upstream 2.11 still needs patch [1] to build with glibc 2.27)

Started bisect script (and waiting for its errors :-) )

[1]: https://git.qemu.org/?p=qemu.git;a=commit;h=75e5b70e6b5dcc4f2219992d7cffa462aa406af0
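
The bisect run mentioned above can be driven non-interactively, roughly like this (a sketch, not the exact script used: it assumes a qemu git checkout and a ./repro script, standing in for the attached reproducer, that exits 0 once the block-commit works):

```shell
# Bisect for the *fix* between the broken v2.11.0 and the fixed v2.12.0.
cd qemu
git bisect start --term-old=broken --term-new=fixed
git bisect broken v2.11.0
git bisect fixed  v2.12.0

# Driver script: exit 125 skips unbuildable commits; exit 0 marks the
# commit "broken" (bug still reproduces), any other non-125 code "fixed".
cat > check.sh <<'EOF'
#!/bin/sh
./configure --target-list=x86_64-softmmu >/dev/null 2>&1 || exit 125
make -j"$(nproc)" >/dev/null 2>&1 || exit 125
if ./repro; then exit 1; else exit 0; fi
EOF
chmod +x check.sh
git bisect run ./check.sh
```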

Ok, my selection wasn't so bad after all.
Bisect found: 1a5297366fe0d11e28fce694fc4377b85afca1da block: Fix flags in reopen queue

=> https://github.com/EnterpriseDB/mysql_fdw/blob/master/mysql_fdw.c#L233

Since I had that one already, this just means my backports might need some more context to work properly.

hehe, you might have realized that link is from a different problem :-)
=> https://git.qemu.org/?p=qemu.git;a=commit;h=1a5297366fe0d11e28fce694fc4377b85afca1da

much better

1a5297366f block: Fix flags in reopen queue
changes bdrv_reopen_queue_child but
 $ git log v2.11.0..v2.12.0 -L :bdrv_reopen_queue_child:block.c
wasn't very helpful to identify more.

The further context would be rather huge list around drain changes.
1a5297366f block: Fix flags in reopen queue
[...]
1a63a90750 block: Keep nodes drained between reopen_queue/multiple
44487eb973 commit: Simplify reopen of base
d736f119da block: Allow graph changes in subtree drained section
b016558590 block: Add bdrv_subtree_drained_begin/end()
0152bf400f block: Don't notify parents in drain call chain
0f11516894 block: Nested drain_end must still call callbacks
8119334918 block: Don't block_job_pause_all() in bdrv_drain_all()
 7253220de4 test-bdrv-drain: Test drain vs. block jobs
 89a6ceab46 test-bdrv-drain: Test bs->quiesce_counter
 86e1c840ec test-bdrv-drain: Test callback for bdrv_drain
 881cfd17c7 test-bdrv-drain: Test BlockDriver callbacks for drain
7b6a3d3553 block: Make bdrv_drain() driver callbacks non-recursive
9a7e86c804 block: Assert drain_all is only called from main AioContext
8e77e0bceb block: Remove unused bdrv_requests_pending
#cc954f01e3 block: Open backing image in force share mode for size probe
546a7dc40e qcow2: get rid of qcow2_backing_read1 routine
60369b86c4 block: Unify order in drain functions
5280aa32e1 block: Don't wait for requests in bdrv_drain*_end()
99c05de918 block: bdrv_drain_recurse(): Remove unused begin parameter
2da9b7d456 block: Call .drain_begin only once in bdrv_drain_all_begin()
#db0289b9b2 block: Make bdrv_drain_invoke() recursive
[...]
bd6458e410 block: avoid recursive AioContext acquire in bdrv_inactivate_all()

I hope it does not also turn out to involve the new function callbacks.

Of that context we already have from 2.11-stable branch:
db0289b9b2 block: Make bdrv_drain_invoke() recursive
cc954f01e3 block: Open backing image in force share mode for size probe

We might brute force the list of changes above to check if it would even help and then try to minimize it to something SRUable later.

A second PPA version with the above started to build now in the same PPA.

Ok this set of patches would be enough to fix it.

@Louis / Clement - would you have some time to help to break these into smaller chunks trying to identify how much smaller we could make the patch set?

FYI - you can use [1] to do so more easily.
This contains all 20 patches backported.

https://code.launchpad.net/~paelzer/ubuntu/+source/qemu/+git/qemu/+ref/bug-1837869-snapshot-base-perm-bionic-long-patch-list

Hi,
Sorry, I'm off on vacation until August 12th. Will look at this once I'm
back

...Louis

Le mer. 31 juil. 2019 à 09:05, Christian Ehrhardt  <
<email address hidden>> a écrit :

> Hrm, that didn't help.
> Now I can't even start the guests anymore:
> root@b:~# qemu-system-x86_64: -drive
> file=./top-vm02.img,if=virtio,id=disk0: Failed to get "write" lock
>
> It seems I too blindly picked out of context for this :-/.
> But I also don't have a lot of time for this right now.
> Maybe I should just start bisecting it in the background ...
>
> Let me know if you have identified anything quicker than me.
>
> --
> You received this bug notification because you are subscribed to the bug
> report.
> https://bugs.launchpad.net/bugs/1837869
>
> Title:
> Cannot complete snapshot if read-only backing store is opened by
> another VM
>
> To manage notifications about this bug go to:
> https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1837869/+subscriptions
>

Since I needed it for the bisect, this is a non-interactive one-shot repro script based on what was reported - might be useful for others as well.

I have created a second PPA [2], trying once again the minimal patch.
Not sure what was going wrong the first time trying a minimized version, but this seems to work well.

@Louis or Clément - can you give things a try with the PPA?

PPA: https://launchpad.net/~paelzer/+archive/ubuntu/bug-1837869-locking-of-snapshots-v2
Code: https://code.launchpad.net/~paelzer/ubuntu/+source/qemu/+git/qemu/+ref/bug-1837869-snapshot-base-perm-bionic-justone

It seems to work fine for the testcase that we had.
Give it a deeper check and if ok I'll throw some regression tests of mine against it.

Clément LAFORET (sheepkiller) wrote:

Hi Christian,

thank you very much. I'll give a try tomorrow.

Clément LAFORET (sheepkiller) wrote:

Hi Christian,

sorry but we still got the issue with the test provided by Louis.

qmp_shell/> blockdev-snapshot-sync device=disk0 snapshot-file=tmp.qcow2 format=qcow2
{}
qmp_shell/> block-commit device=disk0 base=top-vm01.img
GenericError: Failed to get "write" lock
qmp_shell/>

Hi Clement,
is that error still occurring with the PPA [1] for sure?
Please be aware that the version in the main archive surpassed that one in the meantime.
So you'll have to explicitly install the qemu* from the PPA to have the one you want to test.

[1]: https://launchpad.net/~paelzer/+archive/ubuntu/bug-1837869-locking-of-snapshots-v2

Clément LAFORET (sheepkiller) wrote:

Hi Christian,

here are the very first commands we launched at the first boot of the server:

    1 apt-get install -y software-properties-common
    2 add-apt-repository ppa:paelzer/bug-1837869-locking-of-snapshots-v2
    3 apt-get update -y
    4 apt-get install qemu=1:2.11+dfsg-1ubuntu7.17~ppa1

Then I don't know; everything that I could reproduce and fix is in that PPA.
Can you debug on your own and suggest what might be different?

Changed in qemu (Ubuntu):
status: New → Fix Released
Changed in qemu (Ubuntu Bionic):
status: Triaged → Incomplete
Louis Bouchard (louis) wrote:

As discussed offline, a new round of test confirms that the fix in your PPA fixes the File Lock issue.

Changed in qemu (Ubuntu Bionic):
status: Incomplete → In Progress
description: updated

Thanks for rechecking, uploaded to bionic unapproved

Hello Louis, or anyone else affected,

Accepted qemu into bionic-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/qemu/1:2.11+dfsg-1ubuntu7.19 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-needed-bionic to verification-done-bionic. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-bionic. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Changed in qemu (Ubuntu Bionic):
status: In Progress → Fix Committed
tags: added: verification-needed verification-needed-bionic

All autopkgtests for the newly accepted qemu (1:2.11+dfsg-1ubuntu7.19) for bionic have finished running.
The following regressions have been reported in tests triggered by the package:

cloud-utils/unknown (armhf)
vagrant-mutate/1.2.0-3 (armhf)

Please visit the excuses page listed below and investigate the failures, proceeding afterwards as per the StableReleaseUpdates policy regarding autopkgtest regressions [1].

https://people.canonical.com/~ubuntu-archive/proposed-migration/bionic/update_excuses.html#qemu

[1] https://wiki.ubuntu.com/StableReleaseUpdates#Autopkgtest_Regressions

Thank you!


FYI autopkgtest issues all resolved by now.

Started on 1:2.11+dfsg-1ubuntu7.18

...
+ echo 'blockdev-snapshot-sync device=disk0 snapshot-file=tmp.qcow2 format=qcow2'
+ /root/qemu/scripts/qmp/qmp-shell ./qmp-1.sock
Welcome to the QMP low-level shell!
Connected to QEMU 2.11.1

Formatting 'tmp.qcow2', fmt=qcow2 size=10737418240 backing_file=./top-vm01.img backing_fmt=qcow2 cluster_size=65536 lazy_refcounts=off refcount_bits=16
(QEMU) {"return": {}}
(QEMU)
+ echo 'block-commit device=disk0 base=top-vm01.img'
+ /root/qemu/scripts/qmp/qmp-shell ./qmp-1.sock
Welcome to the QMP low-level shell!
Connected to QEMU 2.11.1
(QEMU) {"error": {"class": "GenericError", "desc": "Failed to get \"write\" lock"}}

Upgraded to proposed and then it works as expected:
root@b:~# ./fulltest
+ killall qemu-system-x86_64
qemu-system-x86_64: terminating on signal 15 from pid 6666 ()
qemu-system-x86_64: terminating on signal 15 from pid 6666 ()
+ sleep 1
+ rm base.qcow2 tmp.qcow2
+ qemu-img create -f qcow2 base.qcow2 10G
Formatting 'base.qcow2', fmt=qcow2 size=10737418240 cluster_size=65536 lazy_refcounts=off refcount_bits=16
+ qemu-img create -f qcow2 -b base.qcow2 middle-vm01.img 10G
Formatting 'middle-vm01.img', fmt=qcow2 size=10737418240 backing_file=base.qcow2 cluster_size=65536 lazy_refcounts=off refcount_bits=16
+ qemu-img create -f qcow2 -b base.qcow2 middle-vm02.img 10G
Formatting 'middle-vm02.img', fmt=qcow2 size=10737418240 backing_file=base.qcow2 cluster_size=65536 lazy_refcounts=off refcount_bits=16
+ qemu-img create -f qcow2 -b middle-vm01.img top-vm01.img 10G
Formatting 'top-vm01.img', fmt=qcow2 size=10737418240 backing_file=middle-vm01.img cluster_size=65536 lazy_refcounts=off refcount_bits=16
+ qemu-img create -f qcow2 -b middle-vm01.img top-vm02.img 10G
Formatting 'top-vm02.img', fmt=qcow2 size=10737418240 backing_file=middle-vm01.img cluster_size=65536 lazy_refcounts=off refcount_bits=16
+ qemu-system-x86_64 -nographic -qmp unix:./qmp-1.sock,server,nowait -enable-kvm -device virtio-scsi-pci,id=scsi -device sga -nodefaults -monitor none -m 256M -drive file=./top-vm01.img,if=virtio,id=disk0 -smp 1 -smbios type=1,manufacturer=test
+ sleep 2s
+ qemu-system-x86_64 -nographic -qmp unix:./qmp-2.sock,server,nowait -enable-kvm -device virtio-scsi-pci,id=scsi -device sga -nodefaults -monitor none -m 256M -drive file=./top-vm02.img,if=virtio,id=disk0 -smp 1 -smbios type=1,manufacturer=test
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
+ echo 'blockdev-snapshot-sync device=disk0 snapshot-file=tmp.qcow2 format=qcow2'
+ /root/qemu/scripts/qmp/qmp-shell ./qmp-1.sock
Welcome to the QMP low-level shell!
Connected to QEMU 2.11.1

Formatting 'tmp.qcow2', fmt=qcow2 size=10737418240 backing_file=./top-vm01.img backing_fmt=qcow2 cluster_size=65536 lazy_refcounts=off refcount_bits=16
(QEMU) {"return": {}}
(QEMU)
+ echo 'block-commit device=disk0 base=top-vm01.img'
+ /root/qemu/scripts/qmp/qmp-shell ./qmp-1.sock
Welcome to the QMP low-level shell!
Connected to QEMU 2.11.1

(QEMU) {"return": {}}
(QEMU)

Version use...


tags: added: verification-done verification-done-bionic
removed: verification-needed verification-needed-bionic
Launchpad Janitor (janitor) wrote:

This bug was fixed in the package qemu - 1:2.11+dfsg-1ubuntu7.19

---------------
qemu (1:2.11+dfsg-1ubuntu7.19) bionic; urgency=medium

  * d/p/ubuntu/lp-1837869-block-Fix-flags-in-reopen-queue.patch: avoid
    issues on block reopen (LP: #1837869)

 -- Christian Ehrhardt <email address hidden> Wed, 18 Sep 2019 08:29:32 +0200

Changed in qemu (Ubuntu Bionic):
status: Fix Committed → Fix Released

The verification of the Stable Release Update for qemu has completed successfully and the package is now being released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.
