Live snapshot of a running instance with a Cinder volume (NFS) attached fails

Bug #1993555 reported by Rafael Madrid
Affects: OpenStack Compute (nova)
Status: Confirmed
Importance: Undecided
Assigned to: Unassigned

Bug Description

Description
===========
When creating a live snapshot of a running instance that has a Cinder volume (Quobyte or NFS backend) attached, the snapshot creation fails.

Our images have the qemu-agent installed and the properties hw_qemu_guest_agent=yes and os_require_quiesce=yes. We have confirmed that the freeze and thaw commands work as expected.

Steps to reproduce (from Horizon)
=================================
1. Create a volume-backed (on Quobyte or NFS-based storage) instance.
2. Try to create a snapshot of the running instance.

Expected result
===============
The snapshot is created successfully with status Available. Users should be able to create new instances from that snapshot.

Actual result
=============
The snapshot is created but has an Error status. It cannot be used to launch new instances.

Environment
===========
1. OpenStack Release: Xena
2. Hypervisor: Libvirt + KVM
3. Cinder Storage: Quobyte, NFS

Logs
====
Nova Compute Log

2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [req-43572110-44ac-4ec0-8e00-9b713cf7c71e 899761a635c847e483536855cd6a9af9 3bfb8905aa84474c9e8611749f5f5329 - default default] [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] Error occurred during volume_snapshot_create, sending error status to Cinder.: libvirt.libvirtError: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] Traceback (most recent call last):
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 3417, in volume_snapshot_create
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] volume_id, create_info['new_file'])
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 3348, in _volume_snapshot_create
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] reuse_ext=True, quiesce=True)
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/guest.py", line 550, in snapshot
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] self._domain.snapshotCreateXML(device_xml, flags=flags)
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 193, in doit
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] result = proxy_call(self._autowrap, f, *args, **kwargs)
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 151, in proxy_call
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] rv = execute(f, *args, **kwargs)
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 132, in execute
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] six.reraise(c, e, tb)
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] File "/usr/local/lib/python3.6/site-packages/six.py", line 719, in reraise
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] raise value
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 86, in tworker
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] rv = meth(*args, **kwargs)
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] File "/usr/lib64/python3.6/site-packages/libvirt.py", line 3070, in snapshotCreateXML
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] raise libvirtError('virDomainSnapshotCreateXML() failed')
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] libvirt.libvirtError: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98]
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server [req-43572110-44ac-4ec0-8e00-9b713cf7c71e 899761a635c847e483536855cd6a9af9 3bfb8905aa84474c9e8611749f5f5329 - default default] Exception during message handling: libvirt.libvirtError: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 241, in inner
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server return func(*args, **kwargs)
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/exception_wrapper.py", line 72, in wrapped
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server context, exc, binary)
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server self.force_reraise()
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server raise self.value
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/exception_wrapper.py", line 63, in wrapped
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py", line 4013, in volume_snapshot_create
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server create_info)
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 3424, in volume_snapshot_create
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server context, snapshot_id, 'error')
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server self.force_reraise()
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server raise self.value
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 3417, in volume_snapshot_create
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server volume_id, create_info['new_file'])
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 3348, in _volume_snapshot_create
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server reuse_ext=True, quiesce=True)
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/guest.py", line 550, in snapshot
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server self._domain.snapshotCreateXML(device_xml, flags=flags)
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 193, in doit
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server result = proxy_call(self._autowrap, f, *args, **kwargs)
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 151, in proxy_call
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server rv = execute(f, *args, **kwargs)
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 132, in execute
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server six.reraise(c, e, tb)
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.6/site-packages/six.py", line 719, in reraise
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server raise value
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 86, in tworker
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server rv = meth(*args, **kwargs)
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server File "/usr/lib64/python3.6/site-packages/libvirt.py", line 3070, in snapshotCreateXML
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server raise libvirtError('virDomainSnapshotCreateXML() failed')
2022-10-18 21:03:20.967 7 ERROR oslo_messaging.rpc.server libvirt.libvirtError: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance

---------------------------

Cinder Logs
2022-10-18 21:03:20.284 110 WARNING cinder.volume.drivers.remotefs [req-1399bacd-19d0-4627-897b-e2661f1072d0 c83b20bf05a74781aed3f71d5754016e 09db9c0e63c14e8284d2bd0c25808adb - - -] /var/lib/cinder/mnt/e4ffe64b9de2f7551ebb601d06e7891b/volume-07df969b-b237-498b-8bbb-0b064e273f07.2fad5e25-92ff-4bb0-b6c7-8a63b4631806 is being set with open permissions: ugo+rw
2022-10-18 21:03:21.942 110 INFO cinder.message.api [req-1399bacd-19d0-4627-897b-e2661f1072d0 c83b20bf05a74781aed3f71d5754016e 09db9c0e63c14e8284d2bd0c25808adb - - -] Creating message record for request_id = req-1399bacd-19d0-4627-897b-e2661f1072d0
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server [req-1399bacd-19d0-4627-897b-e2661f1072d0 c83b20bf05a74781aed3f71d5754016e 09db9c0e63c14e8284d2bd0c25808adb - - -] Exception during message handling: cinder.exception.RemoteFSException: Nova returned "error" status while creating snapshot.
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server File "<decorator-gen-755>", line 2, in create_snapshot
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/cinder/objects/cleanable.py", line 208, in wrapper
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server result = f(*args, **kwargs)
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/cinder/volume/manager.py", line 1233, in create_snapshot
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server detail=message_field.Detail.SNAPSHOT_CREATE_ERROR)
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server self.force_reraise()
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server raise self.value
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/cinder/volume/manager.py", line 1218, in create_snapshot
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server model_update = self.driver.create_snapshot(snapshot)
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server File "<decorator-gen-783>", line 2, in create_snapshot
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/cinder/coordination.py", line 186, in _synchronized
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server return f(*a, **k)
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/cinder/volume/drivers/nfs.py", line 591, in create_snapshot
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server return self._create_snapshot(snapshot)
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/cinder/volume/drivers/remotefs.py", line 1659, in _create_snapshot
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server new_snap_path)
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.6/site-packages/cinder/volume/drivers/remotefs.py", line 1722, in _create_snapshot_online
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server raise exception.RemoteFSException(msg)
2022-10-18 21:03:21.949 110 ERROR oslo_messaging.rpc.server cinder.exception.RemoteFSException: Nova returned "error" status while creating snapshot.

Revision history for this message
Balazs Gibizer (balazs-gibizer) wrote :

Could you try to reproduce the error with virsh commands?

1. Create a VM as in your original reproduction.
2. Instead of creating a snapshot from Horizon, go to the compute node where the instance is running and try to quiesce the domain with virsh:
  virsh domfsfreeze <name of the domain>

3. If #2 is successful, then try to unquiesce it with:
  virsh domfsthaw <name of the domain>

4. If both #2 and #3 are successful, then try to create a snapshot the same way as you did in your original reproduction.
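The freeze/thaw round trip in steps #2 and #3 can also be scripted with the libvirt Python bindings. A minimal sketch, assuming a reachable libvirt daemon; the `freeze_thaw_roundtrip`/`demo` helper names and the domain name are ours, not nova's:

```python
def freeze_thaw_roundtrip(dom):
    """Freeze the guest filesystems through the QEMU guest agent, then thaw.

    `dom` is any object exposing libvirt virDomain's fsFreeze()/fsThaw()
    methods. Both return the number of filesystems affected; either call
    raises libvirt.libvirtError if the agent rejects it (e.g. already frozen).
    """
    frozen = dom.fsFreeze()  # same as: virsh domfsfreeze <domain>
    thawed = dom.fsThaw()    # same as: virsh domfsthaw <domain>
    return frozen, thawed


def demo():  # run this on the compute node itself; not called here
    import libvirt  # pip install libvirt-python
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")  # hypothetical domain name
    print(freeze_thaw_roundtrip(dom))
```

If `fsFreeze()` itself raises the "has been disabled" error, the agent already considers the guest frozen before the snapshot is even attempted.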

Changed in nova:
status: New → Incomplete
Balazs Gibizer (balazs-gibizer) wrote :

The error

Error occurred during volume_snapshot_create, sending error status to Cinder.: libvirt.libvirtError: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance

can mean that the guest agent or qemu thinks that the filesystem is already frozen.
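One way to check that hypothesis is to query the agent's freeze state directly, e.g. with `virsh qemu-agent-command <domain> '{"execute": "guest-fsfreeze-status"}'`. A Python sketch of the same query; the `fsfreeze_status`/`demo` helper names and the domain name are ours (`qemuAgentCommand` lives in the separate `libvirt_qemu` module):

```python
import json


def fsfreeze_status(raw_reply):
    """Parse the guest agent's reply to guest-fsfreeze-status.

    The agent answers '{"return": "frozen"}' or '{"return": "thawed"}'.
    """
    return json.loads(raw_reply)["return"]


def demo():  # run on the compute node; not called here
    import libvirt
    import libvirt_qemu  # ships with the libvirt-python bindings
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")  # hypothetical domain name
    raw = libvirt_qemu.qemuAgentCommand(
        dom, '{"execute": "guest-fsfreeze-status"}', 5, 0)
    print(fsfreeze_status(raw))  # "frozen" while quiesced, else "thawed"
```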

Rafael Madrid (rmadridr) wrote :

Balazs, I tried the steps you suggested, and although steps #1 and #2 were successful, the snapshot still failed with the same error.

Rafael Madrid (rmadridr) wrote :

I was able to confirm that nova (not sure if nova-api) quiesces the domain before attempting to call volume_snapshot_create. Then, in the volume_snapshot_create function, nova calls libvirt's virDomainSnapshotCreateXML() with the flag VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE set, but this call fails because the domain was already frozen. I believe that's why the snapshot ends up failing.

Here is an example log (I manually added the first line to debug the issue):
2022-10-28 14:22:49.968 7 INFO nova.virt.libvirt.driver [req-b642987d-b4ce-4a84-a080-18dd6e3a3c62 c83b20bf05a74781aed3f71d5754016e 09db9c0e63c14e8284d2bd0c25808adb - default default] [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] Quiesce called from quiesce_instance
2022-10-28 14:22:51.481 7 DEBUG nova.virt.libvirt.driver [req-98e61afa-5ef6-4717-aea4-c80193880df2 899761a635c847e483536855cd6a9af9 3bfb8905aa84474c9e8611749f5f5329 - default default] [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] volume_snapshot_create: create_info: {'type': 'qcow2', 'new_file': 'volume-07df969b-b237-498b-8bbb-0b064e273f07.077fe32d-f93c-4fd0-8a1d-c48e13b71dfd', 'snapshot_id': '077fe32d-f93c-4fd0-8a1d-c48e13b71dfd'} volume_snapshot_create /var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:3401
2022-10-28 14:22:51.483 7 DEBUG nova.virt.libvirt.driver [req-98e61afa-5ef6-4717-aea4-c80193880df2 899761a635c847e483536855cd6a9af9 3bfb8905aa84474c9e8611749f5f5329 - default default] [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] snap xml: <domainsnapshot>
  <disks>
    <disk name="/var/lib/nova/mnt/e4ffe64b9de2f7551ebb601d06e7891b/volume-07df969b-b237-498b-8bbb-0b064e273f07" snapshot="external" type="file">
      <source file="/var/lib/nova/mnt/e4ffe64b9de2f7551ebb601d06e7891b/volume-07df969b-b237-498b-8bbb-0b064e273f07.077fe32d-f93c-4fd0-8a1d-c48e13b71dfd"/>
    </disk>
  </disks>
</domainsnapshot>
 _volume_snapshot_create /var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:3341
2022-10-28 14:22:51.483 7 DEBUG nova.objects.instance [req-98e61afa-5ef6-4717-aea4-c80193880df2 899761a635c847e483536855cd6a9af9 3bfb8905aa84474c9e8611749f5f5329 - default default] Lazy-loading 'system_metadata' on Instance uuid b2d46a43-c6c7-4dd7-b524-0b018b689d98 obj_load_attr /var/lib/kolla/venv/lib/python3.6/site-packages/nova/objects/instance.py:1101
2022-10-28 14:22:51.508 7 ERROR nova.virt.libvirt.driver [req-98e61afa-5ef6-4717-aea4-c80193880df2 899761a635c847e483536855cd6a9af9 3bfb8905aa84474c9e8611749f5f5329 - default default] [instance: b2d46a43-c6c7-4dd7-b524-0b018b689d98] Error occurred during volume_snapshot_create, sending error status to Cinder.: libvirt.libvirtError: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance

The snapshot succeeds if I temporarily set VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE=false, but I don't know if this is safe. What would be the best approach to avoid this double quiesce attempt?
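One possible pattern for avoiding the double quiesce, as an illustration of the idea only, not nova's actual fix: attempt the snapshot with the QUIESCE flag, and if libvirt reports that guest-fsfreeze-freeze is disabled (i.e. the domain is already frozen), retry without the flag, since the filesystems are quiesced anyway. The flag values below are copied from libvirt's virDomainSnapshotCreateFlags enum, and `snapshot_with_quiesce_fallback` and `SnapshotError` are hypothetical names:

```python
# Values mirror libvirt's virDomainSnapshotCreateFlags enum; real code
# should use libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_* constants instead.
VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY = 16
VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT = 32
VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE = 64


class SnapshotError(Exception):
    """Stand-in for libvirt.libvirtError in this sketch."""


def snapshot_with_quiesce_fallback(dom, xml):
    """Try a quiesced external snapshot; if the guest is already frozen
    (freeze command disabled), retry without QUIESCE rather than fail."""
    flags = (VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY |
             VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT |
             VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE)
    try:
        return dom.snapshotCreateXML(xml, flags=flags)
    except SnapshotError as exc:
        if "guest-fsfreeze-freeze has been disabled" not in str(exc):
            raise
        # The domain is already frozen (e.g. nova quiesced it earlier), so
        # the data is consistent; take the snapshot without re-freezing.
        return dom.snapshotCreateXML(
            xml, flags=flags & ~VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE)
```

Whether silently dropping QUIESCE (or thawing and re-freezing instead) is the right call for nova is exactly the open question above.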

Rafael Madrid (rmadridr)
Changed in nova:
status: Incomplete → New
Changed in nova:
status: New → Confirmed