Activity log for bug #1847361

Date Who What changed Old value New value Message
2019-10-08 22:12:40 Billy Olsen bug added bug
2019-10-09 04:43:01 Christian Ehrhardt  tags bootstack sts bootstack server-triage-discuss sts
2019-10-09 05:40:02 Christian Ehrhardt  qemu (Ubuntu): status New Confirmed
2019-10-09 05:40:09 Christian Ehrhardt  bug added subscriber Ubuntu Server
2019-10-09 05:40:11 Christian Ehrhardt  bug added subscriber Christian Ehrhardt 
2019-10-09 06:10:25 Christian Ehrhardt  qemu (Ubuntu): importance Undecided Wishlist
2019-10-09 06:10:27 Christian Ehrhardt  qemu (Ubuntu): status Confirmed Triaged
2019-10-09 06:15:08 Christian Ehrhardt  description Upgrading qemu binaries causes the on-disk versions to change, but the in-memory running instances still attempt to dynamically load a library which matches its same version. This can cause running instances to fail actions like hotplugging devices. This can be alleviated by migrating the instance to a new host or restarting the instance, however in cloud type environments there may be instances that cannot be migrated (sriov, etc) or the cloud operator does not have permission to reboot. This may be resolvable for many situations by changing the packaging to keep older versions of qemu libraries around on disk (similar to how the kernel package keeps older kernel versions around). --- initial report --- Upgrading qemu binaries causes the on-disk versions to change, but the in-memory running instances still attempt to dynamically load a library which matches its same version. This can cause running instances to fail actions like hotplugging devices. This can be alleviated by migrating the instance to a new host or restarting the instance, however in cloud type environments there may be instances that cannot be migrated (sriov, etc) or the cloud operator does not have permission to reboot. This may be resolvable for many situations by changing the packaging to keep older versions of qemu libraries around on disk (similar to how the kernel package keeps older kernel versions around). --- testcase --- $ apt install uvtool-libvirt $ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=bionic $ uvt-kvm create --password ubuntu lateload arch=amd64 release=bionic label=daily cat > curldisk.xml << EOF <disk type='network' device='disk'> <driver name='qemu' type='raw'/> <source protocol="http" name="ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso"> <host name="archive.ubuntu.com" port="80"/> </source> <target dev='vdc' bus='virtio'/> <readonly/> </disk> EOF # Here up or downgrade the installed packages, even a minor # version or a rebuild of the same version $ virsh attach-device lateload curldisk.xml Reported issue happens on attach: root@b:~# virsh attach-device lateload cdrom-curl.xml error: Failed to attach device from cdrom-curl.xml error: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk2' In the log we can see: Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so Note: only modules from the same build can be loaded. --- solution options (WIP) --- For a packaging solution we would need: - qemu-block-extra / qemu-system-gui binary packages would need sort of a -$buildid in the name. That could be the version string (sanitized for package name) - /usr/lib/x86_64-linux-gnu/qemu/*.so would need a -$buildid - loading of modules from qemu would need to consider $buildid when creating module names. util/module.c in module_load_one / module_load_file It already searches in multiple dirs, maybe it could insert the $buildid there - We'd need a way of detecting running versions of qemu binaries and only make them uninstallable once the binaries are all gone. I have not seen something like that in apt yet (kernel is easy in comparison as only one can be loaded at a time). 
ALTERNATIVES:
- disable loadable module support
- add an option to load all modules in advance (unlikely to be liked upstream) and not desirable for many setups using qemu (especially not as default)
- add an option to load a module (e.g. via QMP/HMP) which would allow an admin to decide to do so for the few setups that benefit.
  - that could down the road even get a libvirt interface for easier consumption
Heads up - None of the above would be SRUable
--- mitigation options ---
- live migrate for upgrades
  - prohibited by SR-IOV usage
  - Tech to get SR-IOV migratable is coming (e.g. via net_failover, bonding in DPDK, ...)
- load the modules you need in advance
  - Note: lacking an explicit "load module" command makes this slightly odd for now
  - but using iscsi or ceph is never spontaneous, a deployment either has or hasn't the setup to use those
  - Create a single small read-only node and attach this to each guest; that will load the driver and render you immune to the issue. While more clunky, this isn't so much different from how it would be with an explicit "load module" command. (A sketch of this pre-load approach follows this entry.)
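A minimal sketch of the "load the modules you need in advance" mitigation above, reusing the lateload guest and curldisk.xml from the testcase (the --live flag and the early timing are assumptions, not part of the original report):
# Right after the guest starts, while the on-disk modules still match the
# running build, attach the read-only curl-backed disk once so that
# block-curl.so is pulled into the qemu process.
$ virsh attach-device lateload curldisk.xml --live
# A later package upgrade can then no longer break curl-backed hotplug for
# this guest, because the module is already mapped into the running process.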
2019-10-09 11:37:32 Dan Streetman bug added subscriber Dan Streetman
2019-10-24 15:30:40 Tori Hegarty bug added subscriber Tori Hegarty
2019-11-07 16:18:46 Christian Ehrhardt  tags bootstack server-triage-discuss sts bootstack sts
2019-11-16 10:03:14 Louis Bouchard bug added subscriber Louis Bouchard
2019-11-18 06:03:41 Christian Ehrhardt  description --- initial report --- Upgrading qemu binaries causes the on-disk versions to change, but the in-memory running instances still attempt to dynamically load a library which matches its same version. This can cause running instances to fail actions like hotplugging devices. This can be alleviated by migrating the instance to a new host or restarting the instance, however in cloud type environments there may be instances that cannot be migrated (sriov, etc) or the cloud operator does not have permission to reboot. This may be resolvable for many situations by changing the packaging to keep older versions of qemu libraries around on disk (similar to how the kernel package keeps older kernel versions around). --- testcase --- $ apt install uvtool-libvirt $ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=bionic $ uvt-kvm create --password ubuntu lateload arch=amd64 release=bionic label=daily cat > curldisk.xml << EOF <disk type='network' device='disk'> <driver name='qemu' type='raw'/> <source protocol="http" name="ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso"> <host name="archive.ubuntu.com" port="80"/> </source> <target dev='vdc' bus='virtio'/> <readonly/> </disk> EOF # Here up or downgrade the installed packages, even a minor # version or a rebuild of the same version $ virsh attach-device lateload curldisk.xml Reported issue happens on attach: root@b:~# virsh attach-device lateload cdrom-curl.xml error: Failed to attach device from cdrom-curl.xml error: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk2' In the log we can see: Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so Note: only modules from the same build can be loaded. --- solution options (WIP) --- For a packaging solution we would need: - qemu-block-extra / qemu-system-gui binary packages would need sort of a -$buildid in the name. That could be the version string (sanitized for package name) - /usr/lib/x86_64-linux-gnu/qemu/*.so would need a -$buildid - loading of modules from qemu would need to consider $buildid when creating module names. util/module.c in module_load_one / module_load_file It already searches in multiple dirs, maybe it could insert the $buildid there - We'd need a way of detecting running versions of qemu binaries and only make them uninstallable once the binaries are all gone. I have not seen something like that in apt yet (kernel is easy in comparison as only one can be loaded at a time). ALTERNATIVES: - disable loadable module support - add an option to load all modules in advance (unlikely to be liked upstream) and not desirable for many setups using qemu (especially not as default) - add an option to load a module (e.g via QMP/HMP) which would allow an admin to decide doing so for the few setups that benefit. - that could down the road then even get a libvirt interface for easier consumption Heads up - None of the above would be SRUable --- mitigation options --- - live migrate for upgrades - prohibited by SR-IOV usage - Tech to get SR-IOV migratable is coming (e.g. via net_failover, bonding in DPDK, ...) 
- load the modules you need in advance - Note: lacking an explicit "load module" command makes this slightly odd for now - but using iscsi or ceph is never spontaneous, a deployment has or hasn't the setup to use those - Create a single small read-only node and attach this to each guest, that will load the driver and render you immune to the issue. While more clunky, this isn't so much different than how it would be with an explicit "load module" command. --- initial report --- Upgrading qemu binaries causes the on-disk versions to change, but the in-memory running instances still attempt to dynamically load a library which matches its same version. This can cause running instances to fail actions like hotplugging devices. This can be alleviated by migrating the instance to a new host or restarting the instance, however in cloud type environments there may be instances that cannot be migrated (sriov, etc) or the cloud operator does not have permission to reboot. This may be resolvable for many situations by changing the packaging to keep older versions of qemu libraries around on disk (similar to how the kernel package keeps older kernel versions around). --- testcase --- $ apt install uvtool-libvirt $ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=bionic $ uvt-kvm create --password ubuntu lateload arch=amd64 release=bionic label=daily cat > curldisk.xml << EOF   <disk type='network' device='disk'>     <driver name='qemu' type='raw'/>     <source protocol="http" name="ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso">             <host name="archive.ubuntu.com" port="80"/>     </source>     <target dev='vdc' bus='virtio'/>     <readonly/>   </disk> EOF # Here up or downgrade the installed packages, even a minor # version or a rebuild of the same version $ virsh attach-device lateload curldisk.xml Reported issue happens on attach: root@b:~# virsh attach-device lateload cdrom-curl.xml error: Failed to attach device from cdrom-curl.xml error: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk2' In the log we can see: Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so Note: only modules from the same build can be loaded. --- solution options (WIP) --- For a packaging solution we would need: - qemu-block-extra / qemu-system-gui binary packages would need   sort of a -$buildid in the name. That could be the version   string (sanitized for package name) - /usr/lib/x86_64-linux-gnu/qemu/*.so would need a -$buildid - loading of modules from qemu would need to consider $buildid   when creating module names.   util/module.c in module_load_one / module_load_file   It already searches in multiple dirs, maybe it could insert   the $buildid there - We'd need a way of detecting running versions of qemu binaries   and only make them uninstallable once the binaries are all   gone. I have not seen something like that in apt yet (kernel   is easy in comparison as only one can be loaded at a time). ALTERNATIVES: - disable loadable module support - add an option to load all modules in advance (unlikely to be   liked upstream) and not desirable for many setups using qemu   (especially not as default) - add an option to load a module (e.g via QMP/HMP) which would   allow an admin   to decide doing so for the few setups that benefit.   
  - that could down the road even get a libvirt interface for easier consumption
Heads up - None of the above would be SRUable
--- mitigation options ---
- live migrate for upgrades
  - prohibited by SR-IOV usage
  - Tech to get SR-IOV migratable is coming (e.g. via net_failover, bonding in DPDK, ...)
- load the modules you need in advance
  - Note: lacking an explicit "load module" command makes this slightly odd for now
  - but using iscsi or ceph is never spontaneous, a deployment either has or hasn't the setup to use those
  - Create a single small read-only node and attach this to each guest; that will load the driver and render you immune to the issue. While more clunky, this isn't so much different from how it would be with an explicit "load module" command. Actually the target doesn't have to exist: it can fail to attach and still achieve what is needed; comment #17 has an example (a sketch of this variant follows this entry).
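A sketch of the "target doesn't have to exist" variant noted at the end of the entry above (preload.xml is a hypothetical file name; the sed call just points the testcase XML at a non-existent image):
# The attach is expected to fail, but by then qemu has already tried to open
# the curl-backed source and has therefore loaded block-curl.so.
$ sed 's,mini.iso,does-not-exist.iso,' curldisk.xml > preload.xml
$ virsh attach-device lateload preload.xml --live || true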
2020-03-02 11:51:54 Christian Ehrhardt  tags bootstack sts bootstack server-next sts
2020-03-03 12:22:23 Christian Ehrhardt  bug task added libvirt (Ubuntu)
2020-03-03 12:22:28 Christian Ehrhardt  libvirt (Ubuntu): status New Triaged
2020-03-03 12:22:30 Christian Ehrhardt  libvirt (Ubuntu): importance Undecided Wishlist
2020-03-03 13:12:55 Christian Ehrhardt  qemu (Ubuntu): importance Wishlist Low
2020-03-03 13:13:00 Christian Ehrhardt  qemu (Ubuntu): importance Low Wishlist
2020-03-03 13:13:01 Christian Ehrhardt  libvirt (Ubuntu): importance Wishlist Low
2020-03-03 16:22:45 Christian Ehrhardt  qemu (Ubuntu): status Triaged In Progress
2020-03-10 07:50:18 Launchpad Janitor merge proposal linked https://code.launchpad.net/~paelzer/ubuntu/+source/qemu/+git/qemu/+merge/380467
2020-03-10 07:59:57 Launchpad Janitor merge proposal linked https://code.launchpad.net/~paelzer/ubuntu/+source/libvirt/+git/libvirt/+merge/380469
2020-03-10 08:11:13 Christian Ehrhardt  description --- initial report --- Upgrading qemu binaries causes the on-disk versions to change, but the in-memory running instances still attempt to dynamically load a library which matches its same version. This can cause running instances to fail actions like hotplugging devices. This can be alleviated by migrating the instance to a new host or restarting the instance, however in cloud type environments there may be instances that cannot be migrated (sriov, etc) or the cloud operator does not have permission to reboot. This may be resolvable for many situations by changing the packaging to keep older versions of qemu libraries around on disk (similar to how the kernel package keeps older kernel versions around). --- testcase --- $ apt install uvtool-libvirt $ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=bionic $ uvt-kvm create --password ubuntu lateload arch=amd64 release=bionic label=daily cat > curldisk.xml << EOF   <disk type='network' device='disk'>     <driver name='qemu' type='raw'/>     <source protocol="http" name="ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso">             <host name="archive.ubuntu.com" port="80"/>     </source>     <target dev='vdc' bus='virtio'/>     <readonly/>   </disk> EOF # Here up or downgrade the installed packages, even a minor # version or a rebuild of the same version $ virsh attach-device lateload curldisk.xml Reported issue happens on attach: root@b:~# virsh attach-device lateload cdrom-curl.xml error: Failed to attach device from cdrom-curl.xml error: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk2' In the log we can see: Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so Note: only modules from the same build can be loaded. --- solution options (WIP) --- For a packaging solution we would need: - qemu-block-extra / qemu-system-gui binary packages would need   sort of a -$buildid in the name. That could be the version   string (sanitized for package name) - /usr/lib/x86_64-linux-gnu/qemu/*.so would need a -$buildid - loading of modules from qemu would need to consider $buildid   when creating module names.   util/module.c in module_load_one / module_load_file   It already searches in multiple dirs, maybe it could insert   the $buildid there - We'd need a way of detecting running versions of qemu binaries   and only make them uninstallable once the binaries are all   gone. I have not seen something like that in apt yet (kernel   is easy in comparison as only one can be loaded at a time). ALTERNATIVES: - disable loadable module support - add an option to load all modules in advance (unlikely to be   liked upstream) and not desirable for many setups using qemu   (especially not as default) - add an option to load a module (e.g via QMP/HMP) which would   allow an admin   to decide doing so for the few setups that benefit.   - that could down the road then even get a libvirt interface     for easier consumption Heads up - None of the above would be SRUable --- mitigation options --- - live migrate for upgrades   - prohibited by SR-IOV usage   - Tech to get SR-IOV migratable is coming (e.g. via net_failover,     bonding in DPDK, ...) 
- load the modules you need in advance   - Note: lacking an explicit "load module" command makes this     slightly odd for now   - but using iscsi or ceph is never spontaneous, a deployment     has or hasn't the setup to use those   - Create a single small read-only node and attach this to each guest,     that will load the driver and render you immune to the issue. While     more clunky, this isn't so much different than how it would be     with an explicit "load module" command. Actually the target doesn't have to exist it can fail to attach and still achieves what is needed comment #17 has an example. [Feature Freeze Exception] Hi, this is IMHO a just a bugfix. But since it involves some bigger changes I wanted to be on the safe side and get an ack by the release Team. Problem: - on upgrade qemu processes are left running as they represent a guest VM - later trying to add features e.g. ceph disk hot-add will need to load .so files e.g. from qemu-block-extra package - those modules can on ly be loaded from the same build, but those are gone after upgrade Solution: - If qemu fails to load from its usual paths it will now also look in /var/run/<version/ - package upgrade code will place the .so's there - things will be cleaned on reboot which is much simpler and error-proof than trying to detect which versions binaries are running - libvirt has a change to allow just reading and mapping from that path (apparmor) @Release team it would be great if you would agree to this being safe for an FFe. --- initial report --- Upgrading qemu binaries causes the on-disk versions to change, but the in-memory running instances still attempt to dynamically load a library which matches its same version. This can cause running instances to fail actions like hotplugging devices. This can be alleviated by migrating the instance to a new host or restarting the instance, however in cloud type environments there may be instances that cannot be migrated (sriov, etc) or the cloud operator does not have permission to reboot. This may be resolvable for many situations by changing the packaging to keep older versions of qemu libraries around on disk (similar to how the kernel package keeps older kernel versions around). --- testcase --- $ apt install uvtool-libvirt $ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=bionic $ uvt-kvm create --password ubuntu lateload arch=amd64 release=bionic label=daily cat > curldisk.xml << EOF   <disk type='network' device='disk'>     <driver name='qemu' type='raw'/>     <source protocol="http" name="ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso">             <host name="archive.ubuntu.com" port="80"/>     </source>     <target dev='vdc' bus='virtio'/>     <readonly/>   </disk> EOF # Here up or downgrade the installed packages, even a minor # version or a rebuild of the same version $ virsh attach-device lateload curldisk.xml Reported issue happens on attach: root@b:~# virsh attach-device lateload cdrom-curl.xml error: Failed to attach device from cdrom-curl.xml error: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk2' In the log we can see: Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so Note: only modules from the same build can be loaded. 
--- solution options (WIP) --- For a packaging solution we would need: - qemu-block-extra / qemu-system-gui binary packages would need   sort of a -$buildid in the name. That could be the version   string (sanitized for package name) - /usr/lib/x86_64-linux-gnu/qemu/*.so would need a -$buildid - loading of modules from qemu would need to consider $buildid   when creating module names.   util/module.c in module_load_one / module_load_file   It already searches in multiple dirs, maybe it could insert   the $buildid there - We'd need a way of detecting running versions of qemu binaries   and only make them uninstallable once the binaries are all   gone. I have not seen something like that in apt yet (kernel   is easy in comparison as only one can be loaded at a time). ALTERNATIVES: - disable loadable module support - add an option to load all modules in advance (unlikely to be   liked upstream) and not desirable for many setups using qemu   (especially not as default) - add an option to load a module (e.g via QMP/HMP) which would   allow an admin   to decide doing so for the few setups that benefit.   - that could down the road then even get a libvirt interface     for easier consumption Heads up - None of the above would be SRUable --- mitigation options --- - live migrate for upgrades   - prohibited by SR-IOV usage   - Tech to get SR-IOV migratable is coming (e.g. via net_failover,     bonding in DPDK, ...) - load the modules you need in advance   - Note: lacking an explicit "load module" command makes this     slightly odd for now   - but using iscsi or ceph is never spontaneous, a deployment     has or hasn't the setup to use those   - Create a single small read-only node and attach this to each guest,     that will load the driver and render you immune to the issue. While     more clunky, this isn't so much different than how it would be     with an explicit "load module" command.     Actually the target doesn't have to exist it can fail to attach     and still achieves what is needed comment #17 has an example.
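A rough sketch of what the packaging side of the solution described in this entry amounts to (an illustration, not the actual maintainer script; the exact directory layout under /var/run/qemu/ is an assumption):
# On upgrade, keep the outgoing build's modules where a still-running qemu of
# that version can find them; /var/run lives on tmpfs, so the copies are
# cleaned up automatically on reboot.
old_version="$(dpkg-query -W -f='${Version}' qemu-block-extra)"
mkdir -p "/var/run/qemu/${old_version}"
cp -a /usr/lib/x86_64-linux-gnu/qemu/*.so "/var/run/qemu/${old_version}/"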
2020-03-10 08:11:21 Christian Ehrhardt  bug added subscriber Ubuntu Release Team
2020-03-13 11:19:31 Christian Ehrhardt  qemu (Ubuntu): status In Progress Fix Committed
2020-03-13 17:48:22 Launchpad Janitor libvirt (Ubuntu): status Triaged Fix Released
2020-03-13 23:55:46 Launchpad Janitor qemu (Ubuntu): status Fix Committed Fix Released
2020-03-19 09:08:14 Launchpad Janitor merge proposal linked https://code.launchpad.net/~paelzer/ubuntu/+source/qemu/+git/qemu/+merge/380874
2020-03-23 11:41:04 Launchpad Janitor merge proposal linked https://code.launchpad.net/~paelzer/ubuntu/+source/qemu/+git/qemu/+merge/381033
2020-03-23 11:45:48 Christian Ehrhardt  merge proposal unlinked https://code.launchpad.net/~paelzer/ubuntu/+source/qemu/+git/qemu/+merge/381033
2020-03-24 09:28:19 Launchpad Janitor merge proposal linked https://code.launchpad.net/~paelzer/ubuntu/+source/qemu/+git/qemu/+merge/381033
2020-03-25 05:58:36 Launchpad Janitor merge proposal unlinked https://code.launchpad.net/~paelzer/ubuntu/+source/qemu/+git/qemu/+merge/381033
2020-04-06 15:31:14 Christian Ehrhardt  nominated for series Ubuntu Bionic
2020-04-06 15:31:14 Christian Ehrhardt  bug task added qemu (Ubuntu Bionic)
2020-04-06 15:31:14 Christian Ehrhardt  bug task added libvirt (Ubuntu Bionic)
2020-04-06 15:31:23 Christian Ehrhardt  qemu (Ubuntu Bionic): assignee Christian Ehrhardt  (paelzer)
2020-04-09 06:26:01 Christian Ehrhardt  nominated for series Ubuntu Eoan
2020-04-09 06:26:01 Christian Ehrhardt  bug task added qemu (Ubuntu Eoan)
2020-04-09 06:26:01 Christian Ehrhardt  bug task added libvirt (Ubuntu Eoan)
2020-04-09 07:29:54 Christian Ehrhardt  qemu (Ubuntu Bionic): status New Triaged
2020-04-09 07:29:56 Christian Ehrhardt  qemu (Ubuntu Eoan): status New Triaged
2020-04-09 07:29:58 Christian Ehrhardt  libvirt (Ubuntu Eoan): status New Triaged
2020-04-09 07:30:00 Christian Ehrhardt  libvirt (Ubuntu Bionic): status New Triaged
2020-04-09 07:30:02 Christian Ehrhardt  qemu (Ubuntu Eoan): assignee Christian Ehrhardt  (paelzer)
2020-04-09 07:30:05 Christian Ehrhardt  libvirt (Ubuntu Eoan): assignee Christian Ehrhardt  (paelzer)
2020-04-09 07:30:07 Christian Ehrhardt  libvirt (Ubuntu Bionic): assignee Christian Ehrhardt  (paelzer)
2020-04-09 11:32:49 Launchpad Janitor merge proposal linked https://code.launchpad.net/~paelzer/ubuntu/+source/libvirt/+git/libvirt/+merge/381997
2020-04-09 11:32:57 Launchpad Janitor merge proposal linked https://code.launchpad.net/~paelzer/ubuntu/+source/qemu/+git/qemu/+merge/381998
2020-04-09 11:33:02 Launchpad Janitor merge proposal linked https://code.launchpad.net/~paelzer/ubuntu/+source/libvirt/+git/libvirt/+merge/381999
2020-04-09 11:33:08 Launchpad Janitor merge proposal linked https://code.launchpad.net/~paelzer/ubuntu/+source/qemu/+git/qemu/+merge/382000
2020-04-16 10:35:52 Christian Ehrhardt  description [Feature Freeze Exception] Hi, this is IMHO a just a bugfix. But since it involves some bigger changes I wanted to be on the safe side and get an ack by the release Team. Problem: - on upgrade qemu processes are left running as they represent a guest VM - later trying to add features e.g. ceph disk hot-add will need to load .so files e.g. from qemu-block-extra package - those modules can on ly be loaded from the same build, but those are gone after upgrade Solution: - If qemu fails to load from its usual paths it will now also look in /var/run/<version/ - package upgrade code will place the .so's there - things will be cleaned on reboot which is much simpler and error-proof than trying to detect which versions binaries are running - libvirt has a change to allow just reading and mapping from that path (apparmor) @Release team it would be great if you would agree to this being safe for an FFe. --- initial report --- Upgrading qemu binaries causes the on-disk versions to change, but the in-memory running instances still attempt to dynamically load a library which matches its same version. This can cause running instances to fail actions like hotplugging devices. This can be alleviated by migrating the instance to a new host or restarting the instance, however in cloud type environments there may be instances that cannot be migrated (sriov, etc) or the cloud operator does not have permission to reboot. This may be resolvable for many situations by changing the packaging to keep older versions of qemu libraries around on disk (similar to how the kernel package keeps older kernel versions around). --- testcase --- $ apt install uvtool-libvirt $ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=bionic $ uvt-kvm create --password ubuntu lateload arch=amd64 release=bionic label=daily cat > curldisk.xml << EOF   <disk type='network' device='disk'>     <driver name='qemu' type='raw'/>     <source protocol="http" name="ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso">             <host name="archive.ubuntu.com" port="80"/>     </source>     <target dev='vdc' bus='virtio'/>     <readonly/>   </disk> EOF # Here up or downgrade the installed packages, even a minor # version or a rebuild of the same version $ virsh attach-device lateload curldisk.xml Reported issue happens on attach: root@b:~# virsh attach-device lateload cdrom-curl.xml error: Failed to attach device from cdrom-curl.xml error: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk2' In the log we can see: Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so Note: only modules from the same build can be loaded. --- solution options (WIP) --- For a packaging solution we would need: - qemu-block-extra / qemu-system-gui binary packages would need   sort of a -$buildid in the name. That could be the version   string (sanitized for package name) - /usr/lib/x86_64-linux-gnu/qemu/*.so would need a -$buildid - loading of modules from qemu would need to consider $buildid   when creating module names.   util/module.c in module_load_one / module_load_file   It already searches in multiple dirs, maybe it could insert   the $buildid there - We'd need a way of detecting running versions of qemu binaries   and only make them uninstallable once the binaries are all   gone. 
I have not seen something like that in apt yet (kernel   is easy in comparison as only one can be loaded at a time). ALTERNATIVES: - disable loadable module support - add an option to load all modules in advance (unlikely to be   liked upstream) and not desirable for many setups using qemu   (especially not as default) - add an option to load a module (e.g via QMP/HMP) which would   allow an admin   to decide doing so for the few setups that benefit.   - that could down the road then even get a libvirt interface     for easier consumption Heads up - None of the above would be SRUable --- mitigation options --- - live migrate for upgrades   - prohibited by SR-IOV usage   - Tech to get SR-IOV migratable is coming (e.g. via net_failover,     bonding in DPDK, ...) - load the modules you need in advance   - Note: lacking an explicit "load module" command makes this     slightly odd for now   - but using iscsi or ceph is never spontaneous, a deployment     has or hasn't the setup to use those   - Create a single small read-only node and attach this to each guest,     that will load the driver and render you immune to the issue. While     more clunky, this isn't so much different than how it would be     with an explicit "load module" command.     Actually the target doesn't have to exist it can fail to attach     and still achieves what is needed comment #17 has an example. for SRU template: QEMU_MODULE_DIR="/tmp/" qemu-system-x86_64 -cdrom localhost::/foo --- [Feature Freeze Exception] Hi, this is IMHO a just a bugfix. But since it involves some bigger changes I wanted to be on the safe side and get an ack by the release Team. Problem: - on upgrade qemu processes are left running as they   represent a guest VM - later trying to add features e.g. ceph disk hot-add will   need to load .so files e.g. from qemu-block-extra package - those modules can on ly be loaded from the same build, but those are   gone after upgrade Solution: - If qemu fails to load from its usual paths it will   now also look in /var/run/<version/ - package upgrade code will place the .so's there - things will be cleaned on reboot which is much simpler   and error-proof than trying to detect which versions   binaries are running - libvirt has a change to allow just reading and   mapping from that path (apparmor) @Release team it would be great if you would agree to this being safe for an FFe. --- initial report --- Upgrading qemu binaries causes the on-disk versions to change, but the in-memory running instances still attempt to dynamically load a library which matches its same version. This can cause running instances to fail actions like hotplugging devices. This can be alleviated by migrating the instance to a new host or restarting the instance, however in cloud type environments there may be instances that cannot be migrated (sriov, etc) or the cloud operator does not have permission to reboot. This may be resolvable for many situations by changing the packaging to keep older versions of qemu libraries around on disk (similar to how the kernel package keeps older kernel versions around). 
--- testcase --- $ apt install uvtool-libvirt $ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=bionic $ uvt-kvm create --password ubuntu lateload arch=amd64 release=bionic label=daily cat > curldisk.xml << EOF   <disk type='network' device='disk'>     <driver name='qemu' type='raw'/>     <source protocol="http" name="ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso">             <host name="archive.ubuntu.com" port="80"/>     </source>     <target dev='vdc' bus='virtio'/>     <readonly/>   </disk> EOF # Here up or downgrade the installed packages, even a minor # version or a rebuild of the same version $ virsh attach-device lateload curldisk.xml Reported issue happens on attach: root@b:~# virsh attach-device lateload cdrom-curl.xml error: Failed to attach device from cdrom-curl.xml error: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk2' In the log we can see: Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so Note: only modules from the same build can be loaded. --- solution options (WIP) --- For a packaging solution we would need: - qemu-block-extra / qemu-system-gui binary packages would need   sort of a -$buildid in the name. That could be the version   string (sanitized for package name) - /usr/lib/x86_64-linux-gnu/qemu/*.so would need a -$buildid - loading of modules from qemu would need to consider $buildid   when creating module names.   util/module.c in module_load_one / module_load_file   It already searches in multiple dirs, maybe it could insert   the $buildid there - We'd need a way of detecting running versions of qemu binaries   and only make them uninstallable once the binaries are all   gone. I have not seen something like that in apt yet (kernel   is easy in comparison as only one can be loaded at a time). ALTERNATIVES: - disable loadable module support - add an option to load all modules in advance (unlikely to be   liked upstream) and not desirable for many setups using qemu   (especially not as default) - add an option to load a module (e.g via QMP/HMP) which would   allow an admin   to decide doing so for the few setups that benefit.   - that could down the road then even get a libvirt interface     for easier consumption Heads up - None of the above would be SRUable --- mitigation options --- - live migrate for upgrades   - prohibited by SR-IOV usage   - Tech to get SR-IOV migratable is coming (e.g. via net_failover,     bonding in DPDK, ...) - load the modules you need in advance   - Note: lacking an explicit "load module" command makes this     slightly odd for now   - but using iscsi or ceph is never spontaneous, a deployment     has or hasn't the setup to use those   - Create a single small read-only node and attach this to each guest,     that will load the driver and render you immune to the issue. While     more clunky, this isn't so much different than how it would be     with an explicit "load module" command.     Actually the target doesn't have to exist it can fail to attach     and still achieves what is needed comment #17 has an example.
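The QEMU_MODULE_DIR one-liner added for the SRU template above can be exercised roughly like this (a sketch; staging block-curl.so in /tmp is an assumption to give the override something to find, and the -cdrom target is taken verbatim from the template):
# Stage a module copy outside the normal path, then point qemu at it.
$ cp /usr/lib/x86_64-linux-gnu/qemu/block-curl.so /tmp/
$ QEMU_MODULE_DIR="/tmp/" qemu-system-x86_64 -cdrom localhost::/foo
# With the fix, /tmp/ is searched as an extra module path in addition to the
# built-in locations.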
2020-04-16 11:42:20 Christian Ehrhardt  description for SRU template: QEMU_MODULE_DIR="/tmp/" qemu-system-x86_64 -cdrom localhost::/foo --- [Feature Freeze Exception] Hi, this is IMHO a just a bugfix. But since it involves some bigger changes I wanted to be on the safe side and get an ack by the release Team. Problem: - on upgrade qemu processes are left running as they   represent a guest VM - later trying to add features e.g. ceph disk hot-add will   need to load .so files e.g. from qemu-block-extra package - those modules can on ly be loaded from the same build, but those are   gone after upgrade Solution: - If qemu fails to load from its usual paths it will   now also look in /var/run/<version/ - package upgrade code will place the .so's there - things will be cleaned on reboot which is much simpler   and error-proof than trying to detect which versions   binaries are running - libvirt has a change to allow just reading and   mapping from that path (apparmor) @Release team it would be great if you would agree to this being safe for an FFe. --- initial report --- Upgrading qemu binaries causes the on-disk versions to change, but the in-memory running instances still attempt to dynamically load a library which matches its same version. This can cause running instances to fail actions like hotplugging devices. This can be alleviated by migrating the instance to a new host or restarting the instance, however in cloud type environments there may be instances that cannot be migrated (sriov, etc) or the cloud operator does not have permission to reboot. This may be resolvable for many situations by changing the packaging to keep older versions of qemu libraries around on disk (similar to how the kernel package keeps older kernel versions around). --- testcase --- $ apt install uvtool-libvirt $ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=bionic $ uvt-kvm create --password ubuntu lateload arch=amd64 release=bionic label=daily cat > curldisk.xml << EOF   <disk type='network' device='disk'>     <driver name='qemu' type='raw'/>     <source protocol="http" name="ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso">             <host name="archive.ubuntu.com" port="80"/>     </source>     <target dev='vdc' bus='virtio'/>     <readonly/>   </disk> EOF # Here up or downgrade the installed packages, even a minor # version or a rebuild of the same version $ virsh attach-device lateload curldisk.xml Reported issue happens on attach: root@b:~# virsh attach-device lateload cdrom-curl.xml error: Failed to attach device from cdrom-curl.xml error: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk2' In the log we can see: Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so Note: only modules from the same build can be loaded. --- solution options (WIP) --- For a packaging solution we would need: - qemu-block-extra / qemu-system-gui binary packages would need   sort of a -$buildid in the name. That could be the version   string (sanitized for package name) - /usr/lib/x86_64-linux-gnu/qemu/*.so would need a -$buildid - loading of modules from qemu would need to consider $buildid   when creating module names.   
util/module.c in module_load_one / module_load_file   It already searches in multiple dirs, maybe it could insert   the $buildid there - We'd need a way of detecting running versions of qemu binaries   and only make them uninstallable once the binaries are all   gone. I have not seen something like that in apt yet (kernel   is easy in comparison as only one can be loaded at a time). ALTERNATIVES: - disable loadable module support - add an option to load all modules in advance (unlikely to be   liked upstream) and not desirable for many setups using qemu   (especially not as default) - add an option to load a module (e.g via QMP/HMP) which would   allow an admin   to decide doing so for the few setups that benefit.   - that could down the road then even get a libvirt interface     for easier consumption Heads up - None of the above would be SRUable --- mitigation options --- - live migrate for upgrades   - prohibited by SR-IOV usage   - Tech to get SR-IOV migratable is coming (e.g. via net_failover,     bonding in DPDK, ...) - load the modules you need in advance   - Note: lacking an explicit "load module" command makes this     slightly odd for now   - but using iscsi or ceph is never spontaneous, a deployment     has or hasn't the setup to use those   - Create a single small read-only node and attach this to each guest,     that will load the driver and render you immune to the issue. While     more clunky, this isn't so much different than how it would be     with an explicit "load module" command.     Actually the target doesn't have to exist it can fail to attach     and still achieves what is needed comment #17 has an example. [Impact] * An infrequent but annoying issue is qemus problem to not be able to hot-add capabilities IF since starting the instance qemu has been upgraded. This is due to qemu modules only working with exactly the same build. * We brought changes upstream that allow the packaging to keep the old files around and make qemu look after them as a fallback. 
[Test Case] I: * $ apt install uvtool-libvirt $ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=bionic $ uvt-kvm create --password ubuntu lateload arch=amd64 release=bionic label=daily cat > curldisk.xml << EOF   <disk type='network' device='disk'>     <driver name='qemu' type='raw'/>     <source protocol="http" name="ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso">             <host name="archive.ubuntu.com" port="80"/>     </source>     <target dev='vdc' bus='virtio'/>     <readonly/>   </disk> EOF # Here up or downgrade the installed packages, even a minor # version or a rebuild of the same version # Instead if you prefer (easier) you can run $ apt install --reinstall qemu-* Next check if they appeared (action of the maintainer scripts) in the /var/run/qemu/<version> directory # And then rm/mv the original .so files of qemu-block-extra # Trying to load a .so now would after an upgrade fail as the old qemu can't load the build id $ virsh attach-device lateload curldisk.xml Reported issue happens on attach: root@b:~# virsh attach-device lateload cdrom-curl.xml error: Failed to attach device from cdrom-curl.xml error: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk2' In the log we can see: Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so One can also check files mapped into a process and we should see the /var/run/.. path being used now. II: * As it had issues in the first iteration of the fix worth a try is also the use of an environment var for an extra path: $ QEMU_MODULE_DIR="/tmp/" qemu-system-x86_64 -cdrom localhost::/foo [Regression Potential] I: * libvirt just allows a few more paths to be read from in the apparmor isolation that is usually safe unless these paths are considered sensitive. But /var/run/qemu is new, /var/run in general not meant for permanent or secure data and as always if people want to ramp up isolation they can always add deny rules to the local overrides. II: * the qemu change has two components. In qemu code it looks for another path if the former ones failed. I see no issues there yet, but can imagine that odd versions might make it access odd paths which would then be denied by apparmor or just don't exist. But that is no different than the former built-in paths it tries, so nothing bad should happen. The code change to the maintainer scripts has to backup the files. If that goes wrong upgrades could be broken, but so far no tests have shown issues. [Other Info] * To really use the functionality users will need the new qemu AND the new libvirt that are uploaded for this bug. But it felt wrong to add versioned dependencies from qemu->libvirt (that is the semantically correct direction) also conflicts/breaks might cause issues in many places that want to control these. OTOH while the fix is great for some installations the majority of users won't care and therefore be happy if extra dependencies are not causing any oddity on apt upgrade. Therefore no versioned dependencies were added intentionally. --- [Feature Freeze Exception] Hi, this is IMHO a just a bugfix. But since it involves some bigger changes I wanted to be on the safe side and get an ack by the release Team. Problem: - on upgrade qemu processes are left running as they   represent a guest VM - later trying to add features e.g. ceph disk hot-add will   need to load .so files e.g. 
from qemu-block-extra package - those modules can on ly be loaded from the same build, but those are   gone after upgrade Solution: - If qemu fails to load from its usual paths it will   now also look in /var/run/<version/ - package upgrade code will place the .so's there - things will be cleaned on reboot which is much simpler   and error-proof than trying to detect which versions   binaries are running - libvirt has a change to allow just reading and   mapping from that path (apparmor) @Release team it would be great if you would agree to this being safe for an FFe. --- initial report --- Upgrading qemu binaries causes the on-disk versions to change, but the in-memory running instances still attempt to dynamically load a library which matches its same version. This can cause running instances to fail actions like hotplugging devices. This can be alleviated by migrating the instance to a new host or restarting the instance, however in cloud type environments there may be instances that cannot be migrated (sriov, etc) or the cloud operator does not have permission to reboot. This may be resolvable for many situations by changing the packaging to keep older versions of qemu libraries around on disk (similar to how the kernel package keeps older kernel versions around). --- solution options (WIP) --- For a packaging solution we would need: - qemu-block-extra / qemu-system-gui binary packages would need   sort of a -$buildid in the name. That could be the version   string (sanitized for package name) - /usr/lib/x86_64-linux-gnu/qemu/*.so would need a -$buildid - loading of modules from qemu would need to consider $buildid   when creating module names.   util/module.c in module_load_one / module_load_file   It already searches in multiple dirs, maybe it could insert   the $buildid there - We'd need a way of detecting running versions of qemu binaries   and only make them uninstallable once the binaries are all   gone. I have not seen something like that in apt yet (kernel   is easy in comparison as only one can be loaded at a time). ALTERNATIVES: - disable loadable module support - add an option to load all modules in advance (unlikely to be   liked upstream) and not desirable for many setups using qemu   (especially not as default) - add an option to load a module (e.g via QMP/HMP) which would   allow an admin   to decide doing so for the few setups that benefit.   - that could down the road then even get a libvirt interface     for easier consumption Heads up - None of the above would be SRUable --- mitigation options --- - live migrate for upgrades   - prohibited by SR-IOV usage   - Tech to get SR-IOV migratable is coming (e.g. via net_failover,     bonding in DPDK, ...) - load the modules you need in advance   - Note: lacking an explicit "load module" command makes this     slightly odd for now   - but using iscsi or ceph is never spontaneous, a deployment     has or hasn't the setup to use those   - Create a single small read-only node and attach this to each guest,     that will load the driver and render you immune to the issue. While     more clunky, this isn't so much different than how it would be     with an explicit "load module" command.     Actually the target doesn't have to exist it can fail to attach     and still achieves what is needed comment #17 has an example.
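One way to do the "check files mapped into a process" step from the [Test Case] above (a sketch; the pgrep pattern assumes the libvirt-started testcase guest is still named lateload):
$ pid=$(pgrep -f 'qemu-system-x86_64.*guest=lateload' | head -n1)
$ grep '/var/run/qemu/' /proc/"$pid"/maps
# After the hot-add, the block module should be mapped from the
# /var/run/qemu/<version> fallback path rather than from /usr/lib/...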
2020-04-16 11:42:31 Christian Ehrhardt  description [Impact] * An infrequent but annoying issue is qemus problem to not be able to hot-add capabilities IF since starting the instance qemu has been upgraded. This is due to qemu modules only working with exactly the same build. * We brought changes upstream that allow the packaging to keep the old files around and make qemu look after them as a fallback. [Test Case] I: * $ apt install uvtool-libvirt $ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=bionic $ uvt-kvm create --password ubuntu lateload arch=amd64 release=bionic label=daily cat > curldisk.xml << EOF   <disk type='network' device='disk'>     <driver name='qemu' type='raw'/>     <source protocol="http" name="ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso">             <host name="archive.ubuntu.com" port="80"/>     </source>     <target dev='vdc' bus='virtio'/>     <readonly/>   </disk> EOF # Here up or downgrade the installed packages, even a minor # version or a rebuild of the same version # Instead if you prefer (easier) you can run $ apt install --reinstall qemu-* Next check if they appeared (action of the maintainer scripts) in the /var/run/qemu/<version> directory # And then rm/mv the original .so files of qemu-block-extra # Trying to load a .so now would after an upgrade fail as the old qemu can't load the build id $ virsh attach-device lateload curldisk.xml Reported issue happens on attach: root@b:~# virsh attach-device lateload cdrom-curl.xml error: Failed to attach device from cdrom-curl.xml error: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk2' In the log we can see: Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so One can also check files mapped into a process and we should see the /var/run/.. path being used now. II: * As it had issues in the first iteration of the fix worth a try is also the use of an environment var for an extra path: $ QEMU_MODULE_DIR="/tmp/" qemu-system-x86_64 -cdrom localhost::/foo [Regression Potential] I: * libvirt just allows a few more paths to be read from in the apparmor isolation that is usually safe unless these paths are considered sensitive. But /var/run/qemu is new, /var/run in general not meant for permanent or secure data and as always if people want to ramp up isolation they can always add deny rules to the local overrides. II: * the qemu change has two components. In qemu code it looks for another path if the former ones failed. I see no issues there yet, but can imagine that odd versions might make it access odd paths which would then be denied by apparmor or just don't exist. But that is no different than the former built-in paths it tries, so nothing bad should happen. The code change to the maintainer scripts has to backup the files. If that goes wrong upgrades could be broken, but so far no tests have shown issues. [Other Info] * To really use the functionality users will need the new qemu AND the new libvirt that are uploaded for this bug. But it felt wrong to add versioned dependencies from qemu->libvirt (that is the semantically correct direction) also conflicts/breaks might cause issues in many places that want to control these. OTOH while the fix is great for some installations the majority of users won't care and therefore be happy if extra dependencies are not causing any oddity on apt upgrade. 
Therefore no versioned dependencies were added intentionally. --- [Feature Freeze Exception] Hi, this is IMHO a just a bugfix. But since it involves some bigger changes I wanted to be on the safe side and get an ack by the release Team. Problem: - on upgrade qemu processes are left running as they   represent a guest VM - later trying to add features e.g. ceph disk hot-add will   need to load .so files e.g. from qemu-block-extra package - those modules can on ly be loaded from the same build, but those are   gone after upgrade Solution: - If qemu fails to load from its usual paths it will   now also look in /var/run/<version/ - package upgrade code will place the .so's there - things will be cleaned on reboot which is much simpler   and error-proof than trying to detect which versions   binaries are running - libvirt has a change to allow just reading and   mapping from that path (apparmor) @Release team it would be great if you would agree to this being safe for an FFe. --- initial report --- Upgrading qemu binaries causes the on-disk versions to change, but the in-memory running instances still attempt to dynamically load a library which matches its same version. This can cause running instances to fail actions like hotplugging devices. This can be alleviated by migrating the instance to a new host or restarting the instance, however in cloud type environments there may be instances that cannot be migrated (sriov, etc) or the cloud operator does not have permission to reboot. This may be resolvable for many situations by changing the packaging to keep older versions of qemu libraries around on disk (similar to how the kernel package keeps older kernel versions around). --- solution options (WIP) --- For a packaging solution we would need: - qemu-block-extra / qemu-system-gui binary packages would need   sort of a -$buildid in the name. That could be the version   string (sanitized for package name) - /usr/lib/x86_64-linux-gnu/qemu/*.so would need a -$buildid - loading of modules from qemu would need to consider $buildid   when creating module names.   util/module.c in module_load_one / module_load_file   It already searches in multiple dirs, maybe it could insert   the $buildid there - We'd need a way of detecting running versions of qemu binaries   and only make them uninstallable once the binaries are all   gone. I have not seen something like that in apt yet (kernel   is easy in comparison as only one can be loaded at a time). ALTERNATIVES: - disable loadable module support - add an option to load all modules in advance (unlikely to be   liked upstream) and not desirable for many setups using qemu   (especially not as default) - add an option to load a module (e.g via QMP/HMP) which would   allow an admin   to decide doing so for the few setups that benefit.   - that could down the road then even get a libvirt interface     for easier consumption Heads up - None of the above would be SRUable --- mitigation options --- - live migrate for upgrades   - prohibited by SR-IOV usage   - Tech to get SR-IOV migratable is coming (e.g. via net_failover,     bonding in DPDK, ...) - load the modules you need in advance   - Note: lacking an explicit "load module" command makes this     slightly odd for now   - but using iscsi or ceph is never spontaneous, a deployment     has or hasn't the setup to use those   - Create a single small read-only node and attach this to each guest,     that will load the driver and render you immune to the issue. 
While more clunky, this isn't so much different from how it would be with an explicit "load module" command. Actually the target doesn't have to exist: it can fail to attach and still achieve what is needed; comment #17 has an example.
[Impact]
 * An infrequent but annoying issue is that QEMU cannot hot-add capabilities if qemu has been upgraded since the instance was started. This is due to qemu modules only working with exactly the same build.
 * We brought changes upstream that allow the packaging to keep the old files around and make qemu look for them as a fallback.
[Test Case]
 I:
 * $ apt install uvtool-libvirt
   $ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=bionic
   $ uvt-kvm create --password ubuntu lateload arch=amd64 release=bionic label=daily
   cat > curldisk.xml << EOF
     <disk type='network' device='disk'>
       <driver name='qemu' type='raw'/>
       <source protocol="http" name="ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso">
         <host name="archive.ubuntu.com" port="80"/>
       </source>
       <target dev='vdc' bus='virtio'/>
       <readonly/>
     </disk>
   EOF
   # Here up- or downgrade the installed packages, even a minor
   # version or a rebuild of the same version
   # Instead, if you prefer (easier), you can run
   $ apt install --reinstall qemu-*
   # Next check that the .so files appeared (action of the maintainer scripts)
   # in the /var/run/qemu/<version> directory
   # And then rm/mv the original .so files of qemu-block-extra
   # Trying to load a .so would now fail after an upgrade, as the old qemu
   # can't load modules from a different build
   $ virsh attach-device lateload curldisk.xml
   Reported issue happens on attach:
   root@b:~# virsh attach-device lateload cdrom-curl.xml
   error: Failed to attach device from cdrom-curl.xml
   error: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk2'
   In the log we can see:
   Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so
   One can also check files mapped into a process; we should see the /var/run/.. path being used now. (A condensed runnable version of this test case follows this entry.)
 II:
 * Since the first iteration of the fix had issues here, it is also worth trying the environment variable that adds an extra module path:
   $ QEMU_MODULE_DIR="/tmp/" qemu-system-x86_64 -cdrom localhost::/foo
[Regression Potential]
 I:
 * libvirt just allows a few more paths to be read in the apparmor isolation, which is usually safe unless these paths are considered sensitive. But /var/run/qemu is new, /var/run is in general not meant for permanent or secure data, and as always, if people want to ramp up isolation they can add deny rules to the local overrides.
 II:
 * the qemu change has two components.
   In qemu code it looks for another path if the former ones failed. I see no issues there yet, but can imagine that odd versions might make it access odd paths which would then be denied by apparmor or just not exist. But that is no different from the former built-in paths it tries, so nothing bad should happen.
   The code change to the maintainer scripts has to back up the files. If that goes wrong upgrades could be broken, but so far no tests have shown issues.
[Other Info]
 * To really use the functionality users will need the new qemu AND the new libvirt that are uploaded for this bug.
[Regression Potential]

 I:
 * libvirt just allows a few more paths to be read in the apparmor isolation, which is usually safe unless those paths are considered sensitive. But /var/run/qemu is new, /var/run is in general not meant for permanent or secure data, and as always, people who want to ramp up isolation can add deny rules to the local overrides.

 II:
 * The qemu change has two components. In the qemu code it looks for another path if the former ones failed; I see no issues there yet, but I can imagine that odd version strings might make it access odd paths, which would then be denied by apparmor or simply not exist. That is no different from the former built-in paths it tries, so nothing bad should happen. The change to the maintainer scripts has to back up the files; if that goes wrong, upgrades could be broken, but so far no tests have shown issues.

[Other Info]

 * To really use the functionality users will need the new qemu AND the new libvirt that are uploaded for this bug. But it felt wrong to add versioned dependencies from qemu to libvirt (that would be the semantically correct direction); also, Conflicts/Breaks might cause issues in the many places that want to control these. OTOH, while the fix is great for some installations, the majority of users won't care and will therefore be happy if extra dependencies are not causing any oddity on apt upgrade. Therefore no versioned dependencies were added intentionally.

--- [Feature Freeze Exception]

Hi, this is IMHO just a bugfix, but since it involves some bigger changes I wanted to be on the safe side and get an ack from the release team.

Problem:
- on upgrade, qemu processes are left running as they represent a guest VM
- later trying to add features, e.g. a ceph disk hot-add, will need to load .so files, e.g. from the qemu-block-extra package
- those modules can only be loaded by the same build, but those files are gone after the upgrade

Solution:
- if qemu fails to load a module from its usual paths, it will now also look in /var/run/qemu/<version>/
- the package upgrade code will place the .so's there (a rough sketch of that copy step follows after this entry)
- everything is cleaned up on reboot, which is much simpler and less error-prone than trying to detect which versions of the binaries are still running
- libvirt has a change to allow reading and mapping from that path (apparmor)

@Release team: it would be great if you would agree that this is safe for an FFe.

--- initial report ---

Upgrading qemu binaries causes the on-disk versions to change, but the in-memory running instances still attempt to dynamically load a library which matches their own version. This can cause running instances to fail actions like hotplugging devices. This can be alleviated by migrating the instance to a new host or restarting the instance; however, in cloud-type environments there may be instances that cannot be migrated (SR-IOV, etc.) or the cloud operator does not have permission to reboot. This may be resolvable for many situations by changing the packaging to keep older versions of the qemu libraries around on disk (similar to how the kernel packaging keeps older kernel versions around).

--- solution options (WIP) ---

For a packaging solution we would need:
- the qemu-block-extra / qemu-system-gui binary packages would need some sort of -$buildid in the name; that could be the version string (sanitized for the package name)
- /usr/lib/x86_64-linux-gnu/qemu/*.so would need a -$buildid
- the loading of modules in qemu would need to consider $buildid when creating module names (util/module.c in module_load_one / module_load_file); it already searches multiple dirs, maybe it could insert the $buildid there
- we'd need a way of detecting running versions of the qemu binaries and only make them uninstallable once those binaries are all gone; I have not seen something like that in apt yet (the kernel is easy in comparison, as only one can be loaded at a time)

ALTERNATIVES:
- disable loadable module support
- add an option to load all modules in advance; unlikely to be liked upstream and not desirable for many setups using qemu (especially not as a default)
- add an option to load a module on demand (e.g. via QMP/HMP), which would allow an admin to decide to do so for the few setups that benefit
  - that could, down the road, even get a libvirt interface for easier consumption

Heads up: none of the above would be SRUable.

--- mitigation options ---

- live migrate for upgrades
  - prohibited by SR-IOV usage
  - tech to get SR-IOV migratable is coming (e.g. via net_failover, bonding in DPDK, ...)
- load the modules you need in advance
  - note: lacking an explicit "load module" command makes this slightly odd for now
  - but using iscsi or ceph is never spontaneous; a deployment either has or hasn't the setup to use those
  - create a single small read-only node and attach it to each guest; that will load the driver and render you immune to the issue. While more clunky, this isn't so much different from how it would be with an explicit "load module" command. Actually the target doesn't even have to exist; it can fail to attach and still achieve what is needed. Comment #17 has an example.
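For reference, the copy step described in the Solution bullets above boils down to something like the following. This is only a rough sketch of the idea, not the actual maintainer-script code shipped in the packages, and the version-to-directory mapping is simplified (the real packages derive the directory name from qemu's own build version string).

  # Illustrative only: preserve the current modules so an already-running
  # qemu of this build can still load them after the package is upgraded.
  ver=$(dpkg-query -W -f '${Version}' qemu-block-extra)
  dst="/var/run/qemu/$ver"
  mkdir -p "$dst"
  cp -a /usr/lib/x86_64-linux-gnu/qemu/*.so "$dst/"
  # /var/run is a tmpfs, so these copies vanish on reboot, which matches the
  # clean-up-on-reboot behaviour described in the Solution section above.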
2020-05-05 13:51:33 Łukasz Zemczak qemu (Ubuntu Eoan): status Triaged Fix Committed
2020-05-05 13:51:35 Łukasz Zemczak bug added subscriber Ubuntu Stable Release Updates Team
2020-05-05 13:51:38 Łukasz Zemczak bug added subscriber SRU Verification
2020-05-05 13:51:43 Łukasz Zemczak tags bootstack server-next sts bootstack server-next sts verification-needed verification-needed-eoan
2020-05-05 13:53:47 Łukasz Zemczak libvirt (Ubuntu Eoan): status Triaged Fix Committed
2020-05-05 14:09:39 Łukasz Zemczak qemu (Ubuntu Bionic): status Triaged Fix Committed
2020-05-05 14:09:46 Łukasz Zemczak tags bootstack server-next sts verification-needed verification-needed-eoan bootstack server-next sts verification-needed verification-needed-bionic verification-needed-eoan
2020-05-05 14:13:46 Łukasz Zemczak libvirt (Ubuntu Bionic): status Triaged Fix Committed
2020-05-06 11:21:25 Christian Ehrhardt  attachment added intermediate gcc file - bad case https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5367307/+files/file-posix.i.bad
2020-05-06 11:29:20 Christian Ehrhardt  attachment added intermediate gcc file - good case https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5367309/+files/file-posix.i.good
2020-05-06 11:40:40 Christian Ehrhardt  attachment added intermediate gcc file - good case (improved) https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5367317/+files/file-posix.i.good
2020-05-06 11:47:25 Christian Ehrhardt  attachment removed intermediate gcc file - good case https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5367309/+files/file-posix.i.good
2020-05-06 11:47:54 Christian Ehrhardt  attachment added intermediate gcc file - good case (improved) https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5367319/+files/file-posix.i.good
2020-05-06 11:48:38 Christian Ehrhardt  attachment removed intermediate gcc file - good case (improved) https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5367317/+files/file-posix.i.good
2020-05-06 11:48:50 Christian Ehrhardt  attachment removed intermediate gcc file - bad case https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5367307/+files/file-posix.i.bad
2020-05-06 11:49:08 Christian Ehrhardt  attachment added intermediate gcc file - bad case (improved) https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5367320/+files/file-posix.i.bad
2020-05-14 07:13:47 Christian Ehrhardt  tags bootstack server-next sts verification-needed verification-needed-bionic verification-needed-eoan bootstack server-next sts verification-done-eoan verification-needed verification-needed-bionic
2020-05-14 08:17:30 Launchpad Janitor libvirt (Ubuntu Eoan): status Fix Committed Fix Released
2020-05-14 08:17:34 Launchpad Janitor qemu (Ubuntu Eoan): status Fix Committed Fix Released
2020-05-14 08:17:40 Łukasz Zemczak removed subscriber Ubuntu Stable Release Updates Team
2020-05-14 11:01:26 Łukasz Zemczak bug added subscriber Ubuntu Stable Release Updates Team
2020-05-14 14:03:44 Christian Ehrhardt  tags bootstack server-next sts verification-done-eoan verification-needed verification-needed-bionic bootstack server-next sts verification-done verification-done-bionic verification-done-eoan
2020-05-19 23:19:53 Launchpad Janitor libvirt (Ubuntu Bionic): status Fix Committed Fix Released
2020-05-21 08:25:52 Launchpad Janitor qemu (Ubuntu Bionic): status Fix Committed Fix Released
2020-05-25 13:22:19 Launchpad Janitor merge proposal linked https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/qemu/+git/qemu/+merge/383566
2020-05-26 17:42:37 Launchpad Janitor merge proposal unlinked https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/qemu/+git/qemu/+merge/383566
2020-09-16 12:35:09 Ante Karamatić bug task added cloud-archive
2020-09-16 12:54:45 Chris MacNaughton cloud-archive: status New Incomplete
2020-09-16 13:10:08 Chris MacNaughton cloud-archive: status Incomplete Confirmed
2020-09-16 13:26:27 Corey Bryant nominated for series cloud-archive/stein
2020-09-16 13:26:27 Corey Bryant bug task added cloud-archive/stein
2020-09-16 13:26:38 Corey Bryant cloud-archive/stein: status New Triaged
2020-09-16 13:30:10 Corey Bryant cloud-archive/stein: importance Undecided Medium
2020-09-16 13:30:55 Corey Bryant cloud-archive: status Confirmed Fix Released
2020-09-16 14:34:56 Victor Tapia bug added subscriber Victor Tapia
2020-09-24 08:51:02 Dominique Poulain bug added subscriber Dominique Poulain
2020-09-29 15:20:21 Victor Tapia attachment added qemu-stein.debdiff https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5415315/+files/qemu-stein.debdiff
2020-09-29 15:20:46 Victor Tapia attachment added libvirt-stein.debdiff https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5415316/+files/libvirt-stein.debdiff
2020-10-07 09:33:58 Victor Tapia attachment removed qemu-stein.debdiff https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5415315/+files/qemu-stein.debdiff
2020-10-07 09:34:20 Victor Tapia attachment added qemu-stein.debdiff https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5418856/+files/qemu-stein.debdiff
2020-10-20 17:39:32 Corey Bryant cloud-archive/stein: status Triaged Fix Committed
2020-10-20 17:39:34 Corey Bryant tags bootstack server-next sts verification-done verification-done-bionic verification-done-eoan bootstack server-next sts verification-done verification-done-bionic verification-done-eoan verification-stein-needed
2020-10-27 15:24:58 Victor Tapia tags bootstack server-next sts verification-done verification-done-bionic verification-done-eoan verification-stein-needed bootstack server-next sts verification-done verification-done-bionic verification-done-eoan verification-stein-done
2020-10-28 15:26:30 Corey Bryant cloud-archive/stein: status Fix Committed Fix Released