If a VM is in paused state and it is live-migrated twice, it is lost.

Bug #1947725 reported by Olaf Seibert
Affects: OpenStack Compute (nova)
Status: Incomplete
Importance: Undecided
Assigned to: Unassigned

Bug Description

While working on a patch for https://bugs.launchpad.net/nova/+bug/1946752, I ran into the following bug.

Description
===========

If a VM is in paused state and it is live-migrated twice, it is lost.

Steps to reproduce
==================

$ openstack server pause <UUID>
(wait until done)
$ openstack server migrate --live-migration <UUID>
(wait until done)
$ openstack server migrate --live-migration <UUID>
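
The same sequence can also be scripted. Below is a rough sketch using openstacksdk; the cloud name, the polling helper, and the exact keyword arguments are assumptions for illustration only, and the calls simply mirror the CLI commands above.

import time
import openstack

conn = openstack.connect(cloud="mycloud")   # placeholder cloud name
server_id = "<UUID>"

def wait_until_done(expected_status, timeout=600):
    # Poll until the server reaches the expected status with no task running.
    deadline = time.time() + timeout
    while time.time() < deadline:
        server = conn.compute.get_server(server_id)
        if server.status == expected_status and server.task_state is None:
            return server
        time.sleep(5)
    raise RuntimeError("timed out waiting for %s" % expected_status)

conn.compute.pause_server(server_id)
wait_until_done("PAUSED")

conn.compute.live_migrate_server(server_id, host=None, block_migration=False)
wait_until_done("PAUSED")

# The second live migration is the one that ends with the instance in ERROR.
conn.compute.live_migrate_server(server_id, host=None, block_migration=False)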

Expected result
===============

The migration succeeds and the VM is usable afterwards.

Actual result
=============

$ openstack server list
+--------------------------------------+------------------+-----------+------------------------------------------------+---------------------------------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+------------------+-----------+------------------------------------------------+---------------------------------+----------+
| de2b27d2-345c-45fc-8f37-2fa0ed1a1151 | large2-kickstart | MIGRATING | large2-kickstart-net=10.0.0.25, 185.46.136.254 | Ubuntu Focal 20.04 (2021-09-23) | m1.large |
+--------------------------------------+------------------+-----------+------------------------------------------------+---------------------------------+----------+
$ openstack server list
+--------------------------------------+------------------+--------+------------------------------------------------+---------------------------------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+------------------+--------+------------------------------------------------+---------------------------------+----------+
| de2b27d2-345c-45fc-8f37-2fa0ed1a1151 | large2-kickstart | ERROR | large2-kickstart-net=10.0.0.25, 185.46.136.254 | Ubuntu Focal 20.04 (2021-09-23) | m1.large |
+--------------------------------------+------------------+--------+------------------------------------------------+---------------------------------+----------+

The VM is now in ERROR state because it has disappeared: libvirt.libvirtError: Domain not found: no domain with matching uuid 'de2b27d2-345c-45fc-8f37-2fa0ed1a1151'
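
The lookup that fails inside nova can be reproduced by hand on the destination compute node with python-libvirt; the connection URI and UUID below are taken from this run, and the snippet is only a quick standalone check, not part of nova itself.

import libvirt

# Run on the destination host after the failed second migration.
conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByUUIDString("de2b27d2-345c-45fc-8f37-2fa0ed1a1151")
    print("domain found:", dom.name())
except libvirt.libvirtError as err:
    # Same "Domain not found: no domain with matching uuid" error that
    # nova's _get_domain() hits in the traceback below.
    print("lookup failed:", err)
finally:
    conn.close()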

From nova-compute.log on the target host:

2021-10-18 14:32:34.166 32686 INFO nova.compute.manager [req-f2ce6f2e-bc3e-443a-aaca-ed3dbf536019 8708914a29ce4c92929a9af91e5a33d1 1142d4b9561746bd9e279c43803f50ed - default default] [instance: de2b27d2-345c-45fc-8f37-2fa0ed1a1151] Post operation of migration started
2021-10-18 14:32:35.913 32686 ERROR nova.compute.manager [req-f2ce6f2e-bc3e-443a-aaca-ed3dbf536019 8708914a29ce4c92929a9af91e5a33d1 1142d4b9561746bd9e279c43803f50ed - default default] [instance: de2b27d2-345c-45fc-8f37-2fa0ed1a1151] Unexpected error during post live migration at destination host.: nova.exception.InstanceNotFound: Instance de2b27d2-345c-45fc-8f37-2fa0ed1a1151 could not be found.
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server [req-f2ce6f2e-bc3e-443a-aaca-ed3dbf536019 8708914a29ce4c92929a9af91e5a33d1 1142d4b9561746bd9e279c43803f50ed - default default] Exception during message handling: nova.exception.InstanceNotFound: Instance de2b27d2-345c-45fc-8f37-2fa0ed1a1151 could not be found.
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/libvirt/host.py", line 605, in _get_domain
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server return conn.lookupByUUIDString(instance.uuid)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 193, in doit
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server result = proxy_call(self._autowrap, f, *args, **kwargs)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 151, in proxy_call
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server rv = execute(f, *args, **kwargs)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 132, in execute
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server six.reraise(c, e, tb)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server raise value
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 86, in tworker
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server rv = meth(*args, **kwargs)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/libvirt.py", line 4508, in lookupByUUIDString
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server if ret is None:raise libvirtError('virDomainLookupByUUIDString() failed', conn=self)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server libvirt.libvirtError: Domain not found: no domain with matching uuid 'de2b27d2-345c-45fc-8f37-2fa0ed1a1151'
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred:
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 276, in dispatch
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 196, in _do_dispatch
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/exception_wrapper.py", line 79, in wrapped
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server function_name, call_dict, binary, tb)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server self.force_reraise()
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server raise value
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/exception_wrapper.py", line 69, in wrapped
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/utils.py", line 1456, in decorated_function
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 206, in decorated_function
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 8752, in post_live_migration_at_destination
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server 'destination host.', instance=instance)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server self.force_reraise()
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server raise value
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 8747, in post_live_migration_at_destination
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server block_device_info)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 9941, in post_live_migration_at_destination
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server self._reattach_instance_vifs(context, instance, network_info)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 9541, in _reattach_instance_vifs
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server guest = self._host.get_guest(instance)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/libvirt/host.py", line 589, in get_guest
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server return libvirt_guest.Guest(self._get_domain(instance))
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/libvirt/host.py", line 609, in _get_domain
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server raise exception.InstanceNotFound(instance_id=instance.uuid)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server nova.exception.InstanceNotFound: Instance de2b27d2-345c-45fc-8f37-2fa0ed1a1151 could not be found.
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server
2021-10-18 14:32:47.754 32686 INFO nova.compute.manager [-] [instance: de2b27d2-345c-45fc-8f37-2fa0ed1a1151] VM Stopped (Lifecycle Event)
2021-10-18 14:32:49.596 32686 ERROR nova.scheduler.client.report [req-6ee0e137-28ab-40a7-bf29-2da1260ca7b1 - - - - -] [req-bf431b52-6667-4d1b-8f7a-48e19211fbb3] Failed to update inventory to [{'MEMORY_MB': {'total': 128586, 'min_unit': 1, 'max_unit': 128586, 'step_size': 1, 'allocation_ratio': 1.5, 'reserved': 0}, 'VCPU': {'total': 40, 'min_unit': 1, 'max_unit': 40, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 13124, 'min_unit': 1, 'max_unit': 13124, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 0}}] for resource provider with UUID 7386bced-89d0-452a-8cbb-cf2187a3dbee. Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict ", "code": "placement.concurrent_update", "request_id": "req-bf431b52-6667-4d1b-8f7a-48e19211fbb3"}]}
2021-10-18 14:38:00.913 32686 WARNING nova.compute.manager [req-6ee0e137-28ab-40a7-bf29-2da1260ca7b1 - - - - -] While synchronizing instance power states, found 3 instances in the database and 2 instances on the hypervisor.
2021-10-18 14:40:29.104 32686 INFO nova.compute.manager [req-aa942b4b-e3c0-4cdb-8165-c6a5a6d3790d 58f19303ac3049688eff9d8ef041956d 9c2c9ed68ba242fdb8206bf573aed265 - default default] [instance: de2b27d2-345c-45fc-8f37-2fa0ed1a1151] Instance is already powered off in the hypervisor when stop is called.
2021-10-18 14:40:29.165 32686 INFO nova.virt.libvirt.driver [-] [instance: de2b27d2-345c-45fc-8f37-2fa0ed1a1151] Instance destroyed successfully.
2021-10-18 14:40:35.365 32686 INFO nova.virt.libvirt.driver [-] [instance: de2b27d2-345c-45fc-8f37-2fa0ed1a1151] Instance destroyed successfully.
2021-10-18 14:40:35.427 32686 INFO os_vif [req-9d755643-a02b-4464-b549-35c6dffd512e 58f19303ac3049688eff9d8ef041956d 9c2c9ed68ba242fdb8206bf573aed265 - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:03:56,bridge_name='br-int',has_traffic_filtering=True,id=3a71aa63-6a39-41d8-9602-04b84834db9e,network=Network(96b506b1-5b9-4ab6-9ef6-8f25c49e9123),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3a71aa63-6a')

Environment
===========

I tested this and I see this problem in Ussuri.

# dpkg -l | grep nova

ii nova-common 2:21.2.2-0ubuntu1+syseleven3~bionic~202110151329.gbp05b1cd all OpenStack Compute - common files
ii nova-compute 2:21.2.2-0ubuntu1+syseleven3~bionic~202110151329.gbp05b1cd all OpenStack Compute - compute node base
ii nova-compute-kvm 2:21.2.2-0ubuntu1+syseleven3~bionic~202110151329.gbp05b1cd all OpenStack Compute - compute node (KVM)
ii nova-compute-libvirt 2:21.2.2-0ubuntu1+syseleven3~bionic~202110151329.gbp05b1cd all OpenStack Compute - compute node libvirt support
ii python3-nova 2:21.2.2-0ubuntu1+syseleven3~bionic~202110151329.gbp05b1cd all OpenStack Compute Python 3 libraries
ii python3-novaclient 2:17.0.0-0ubuntu1~cloud0 all client library for OpenStack Compute API - 3.x

We are using QEMU + KVM + libvirt:

ii libvirt-clients 6.0.0-0ubuntu8.12~cloud0 amd64 Programs for the libvirt library
ii libvirt-daemon 6.0.0-0ubuntu8.12~cloud0 amd64 Virtualization daemon
ii libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.12~cloud0 amd64 Virtualization daemon QEMU connection driver
ii libvirt-daemon-system 6.0.0-0ubuntu8.12~cloud0 amd64 Libvirt daemon configuration files
ii libvirt-daemon-system-systemd 6.0.0-0ubuntu8.12~cloud0 amd64 Libvirt daemon configuration files (systemd)
ii libvirt0:amd64 6.0.0-0ubuntu8.12~cloud0 amd64 library for interfacing with different virtualization systems
ii python3-libvirt 6.1.0-1~cloud0 amd64 libvirt Python 3 bindings

ii qemu-kvm 1:4.2-3ubuntu6.18+syseleven0 amd64 QEMU Full virtualization on x86 hardware

We use shared storage (Quobyte).
We use Queens with Midonet and Ussuri with OVN.

description: updated
Olaf Seibert (oseibert-sys11) wrote:

I tried it on Queens (also running on Ubuntu Bionic), and there it worked (a lot better at least)!

One thing that didn't work well was disk I/O (ssh-ing into the migrated and unpaused VM showed network I/O, but the ssh login never completed), but migrating it again while not paused fixed that.

So somehow Ussuri has a rather big regression compared to Queens.

Versions for Queens:

ii libvirt-clients 4.0.0-1ubuntu8.19 amd64 Programs for the libvirt library
ii libvirt-daemon 4.0.0-1ubuntu8.19 amd64 Virtualization daemon
ii libvirt-daemon-system 4.0.0-1ubuntu8.19 amd64 Libvirt daemon configuration files
ii libvirt0:amd64 4.0.0-1ubuntu8.19 amd64 library for interfacing with different virtualization systems
ii nova-common 2:17.0.13-0ubuntu1+syseleven4~bionic~202110151347.gbp8e8fc9 all OpenStack Compute - common files
ii nova-compute 2:17.0.13-0ubuntu1+syseleven4~bionic~202110151347.gbp8e8fc9 all OpenStack Compute - compute node base
ii nova-compute-kvm 2:17.0.13-0ubuntu1+syseleven4~bionic~202110151347.gbp8e8fc9 all OpenStack Compute - compute node (KVM)
ii nova-compute-libvirt 2:17.0.13-0ubuntu1+syseleven4~bionic~202110151347.gbp8e8fc9 all OpenStack Compute - compute node libvirt support
ii python-libvirt 4.0.0-1 amd64 libvirt Python bindings
ii python-nova 2:17.0.13-0ubuntu1+syseleven4~bionic~202110151347.gbp8e8fc9 all OpenStack Compute Python libraries
ii python-novaclient 2:9.1.1-0ubuntu1 all client library for OpenStack Compute API - Python 2.7
ii python3-libvirt 4.0.0-1 amd64 libvirt Python 3 bindings
ii python3-novaclient 2:9.1.1-0ubuntu1 all client library for OpenStack Compute API - 3.x
ii qemu 1:2.11+dfsg-1ubuntu7.38 amd64 fast processor emulator
ii qemu-block-extra:amd64 1:2.11+dfsg-1ubuntu7.38 amd64 extra block backend modules for qemu-system and qemu-utils
ii qemu-kvm 1:2.11+dfsg-1ubuntu7.38 amd64 QEMU Full virtualization on x86 hardware
ii qemu-system 1:2.11+dfsg-1ubuntu7.38 amd64 QEMU full system emulation binaries
ii qemu-system-common 1:2.11+dfsg-1ubuntu7.38 amd64 QEMU full system emulation binaries (common files)
ii qemu-system-misc 1:2.11+dfsg-1ubuntu7.38 ...

Olaf Seibert (oseibert-sys11) wrote:

During / after the second migration, libvirtd logs this:

Oct 22 15:31:31 ybk140931 ovs-vsctl[21412]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --if-exists del-port tap3a71aa63-6a -- add-port br-int tap3a71aa63-6a -- set Interface tap3a71aa63-6a "external-ids:attached-mac=\"fa:16:3e:da:03:56\"" -- set Interface tap3a71aa63-6a "external-ids:iface-id=\"3a71aa63-6a39-41d8-9602-04b84834db9e\"" -- set Interface tap3a71aa63-6a "external-ids:vm-id=\"de2b27d2-345c-45fc-8f37-2fa0ed1a1151\"" -- set Interface tap3a71aa63-6a external-ids:iface-status=active

Oct 22 15:32:58 ybk140931 libvirtd[3237]: Unable to read from monitor: Connection reset by peer
Oct 22 15:32:59 ybk140931 ovs-vsctl[22001]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --if-exists del-port tap3a71aa63-6a
Oct 22 15:32:59 ybk140931 libvirtd[3237]: operation failed: domain is not running
Oct 22 15:32:59 ybk140931 libvirtd[3237]: internal error: qemu unexpectedly closed the monitor: 2021-10-22T15:32:58.845667Z qemu-system-x86_64: Failed to load virtio_pci/modern_queue_state:used
                                          2021-10-22T15:32:58.845687Z qemu-system-x86_64: Failed to load virtio_pci/modern_state:vqs
                                          2021-10-22T15:32:58.845690Z qemu-system-x86_64: Failed to load virtio/extra_state:extra_state
                                          2021-10-22T15:32:58.845692Z qemu-system-x86_64: Failed to load virtio-rng:virtio
                                          2021-10-22T15:32:58.845695Z qemu-system-x86_64: error while loading state for instance 0x0 of device '0000:00:06.0/virtio-rng'
                                          2021-10-22T15:32:58.847860Z qemu-system-x86_64: load of migration failed: Input/output error
Oct 22 15:32:59 ybk140931 libvirtd[3237]: operation failed: domain 'instance-00000ba2' is not running
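
For what it's worth, the domain state on the source right before kicking off the second migration can be checked with python-libvirt; the domain name is taken from the log line above, everything else is just an illustrative sketch.

import libvirt

# Connect to the local libvirt daemon on the source compute node.
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-00000ba2")

# state() returns (state, reason); for a paused guest the state should be
# libvirt.VIR_DOMAIN_PAUSED.
state, reason = dom.state()
print("state:", state, "reason:", reason)

# While a migration job is running, jobInfo() exposes the progress counters
# libvirt reports for that job.
print("job info:", dom.jobInfo())

conn.close()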

Sylvain Bauza (sylvain-bauza) wrote:

Are you sure the first live migration has fully completed by the time you issue the second one?

What is the exact migration status of the instance before you call it the second time?

Changed in nova:
status: New → Incomplete
Olaf Seibert (oseibert-sys11) wrote:

Yes, I am sure it is fully migrated.

The status after the first live migration looks like this (this is from another run, not the same one as above):

+-------------------------------------+---------------------------------------------+
| Field | Value |
+-------------------------------------+---------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | ybk1 |
| OS-EXT-SRV-ATTR:host | ybk140913.ybk.sys11cloud.net |
| OS-EXT-SRV-ATTR:hypervisor_hostname | ybk140913.ybk.sys11cloud.net |
| OS-EXT-SRV-ATTR:instance_name | instance-00000032 |
| OS-EXT-STS:power_state | Paused |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | paused |
| OS-SRV-USG:launched_at | 2021-10-28T12:59:48.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | net_red=192.168.1.8, 185.46.136.233 |
| config_drive | |
| created | 2021-10-28T12:59:23Z |
| flavor | dennis-test |
| | (399be5a7-45a8-42a5-9bab-07652a440cb9) |
| hostId | b970c76a388accafd6f061058b9e9f37ff0a8851847 |
| | ee520a742017b |
| id | 39a725de-8e1b-4e63-912f-4b322addff47 |
| image | NetBSD-9.99.86-amd64-live |
| | (bc15e7c9-916e-4f1b-a589-bdf07b7b4565) |
| key_name | kp_adminuser |
| name | RED Instance |
| project_id | 7e29ac61259c4f43926c247f00b9824d |
| properties | |
| security_groups | name='simple_sg' |
| status | PAUSED |
| updated | 2021-10-29T14:33:03Z |
| user_id | 4b4e92dc65874d45ba898fb9cfd0c705 |
| volumes_attached | |
+-------------------------------------+---------------------------------------------+
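
The migration records themselves can also be pulled from the API to double-check that the first migration really finished. A sketch with openstacksdk follows; the cloud name and admin credentials are assumptions, and the fields simply come from the os-migrations listing.

import openstack

conn = openstack.connect(cloud="mycloud")   # placeholder cloud name
server_id = "39a725de-8e1b-4e63-912f-4b322addff47"

# List all migrations known to nova and keep the ones for this instance.
for migration in conn.compute.migrations():
    info = migration.to_dict()
    if server_id in (info.get("server_id"), info.get("instance_uuid")):
        print(info.get("migration_type"), info.get("status"),
              info.get("source_compute"), "->", info.get("dest_compute"))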

I did some further digging and it seems the...

Balazs Gibizer (balazs-gibizer) wrote (last edit):

I was able to reproduce this issue recently (https://paste.opendev.org/show/bswQVIMJddVkrUtC1Pk0/) with QEMU 8.1.0, which is not a surprise, as the related QEMU bug is still open: https://gitlab.com/qemu-project/qemu/-/issues/686
