@chiluk I can confirm that the flag provided doesn't work on Ubuntu 16.04 with the Mitaka packages. The first live migration to a compute node works, but if I immediately try to live-migrate the instance back to the same compute node, or to another compute node, the migration sometimes fails, leaving the instance on the new node as "Shut Off", with this in the nova-compute log:

    2017-02-06 17:11:20.457 12456 ERROR nova.virt.libvirt.driver [req-231bec41-7937-4f3c-ab02-7b9a985cad22 66f68888afb1424a874a0fae3c5c5e52 3d7374bb9d4b4ad6a7db51a5187483a2 - - -] [instance: a3c0256b-290a-4345-865d-d31b7c79894d] Live Migration failure: operation failed: job: unexpectedly failed

And most of the time, when you then try to put the instance back on the previous compute node, the migration finishes as successful but the instance does not actually change compute node, with these errors in the nova-compute log:

    2017-02-06 17:41:42.275 10142 ERROR nova.compute.manager [instance: a3c0256b-290a-4345-865d-d31b7c79894d] RemoteError: Remote error: libvirtError Requested operation is not valid: transient domains do not have any persistent config
    2017-02-06 17:41:42.275 10142 ERROR nova.compute.manager [instance: a3c0256b-290a-4345-865d-d31b7c79894d] Traceback (most recent call last):
      File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 138, in _dispatch_and_reply
        incoming.message))
      File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 185, in _dispatch
        return self._do_dispatch(endpoint, method, ctxt, args)
      File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 127, in _do_dispatch
        result = func(ctxt, **new_args)
      File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 110, in wrapped
        payload)
      File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
        self.force_reraise()
      File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
        six.reraise(self.type_, self.value, self.tb)
      File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 89, in wrapped
        return f(self, context, *args, **kw)
      File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5052, in remove_volume_connection
        self._driver_detach_volume(context, instance, bdm, connection_info)
      File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4802, in _driver_detach_volume
        self.volume_api.roll_detaching(context, volume_id)
      File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
        self.force_reraise()
      File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
        six.reraise(self.type_, self.value, self.tb)
      File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4790, in _driver_detach_volume
        encryption=encryption)
      File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1459, in detach_volume
        live=live)
      File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 327, in detach_device_with_retry
        self.detach_device(conf, persistent, live)
      File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 357, in detach_device
        self._domain.detachDeviceFlags(conf.to_xml(), flags=flags)
      File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
        result = proxy_call(self._autowrap, f, *args, **kwargs)
      File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
        rv = execute(f, *args, **kwargs)
      File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
        six.reraise(c, e, tb)
      File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
        rv = meth(*args, **kwargs)
      File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1190, in detachDeviceFlags
        if ret == -1: raise libvirtError ('virDomainDetachDeviceFlags() failed', dom=self)
    libvirtError: Requested operation is not valid: transient domains do not have any persistent config
    2017-02-06 17:41:42.275 10142 ERROR nova.compute.manager [instance: a3c0256b-290a-4345-865d-d31b7c79894d]
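For anyone hitting the same libvirtError: detachDeviceFlags() fails this way when it is passed VIR_DOMAIN_AFFECT_CONFIG against a transient domain, and the domain running on a host mid-migration is transient. A minimal sketch of the flag selection involved (the helper name is mine, not Nova's, and the constant values are copied from libvirt's virDomainModificationImpact enum so this runs without libvirt-python installed):

```python
# Flag values as defined by libvirt (virDomainModificationImpact);
# duplicated here only so the sketch is self-contained.
VIR_DOMAIN_AFFECT_CURRENT = 0
VIR_DOMAIN_AFFECT_LIVE = 1
VIR_DOMAIN_AFFECT_CONFIG = 2


def detach_flags(persistent, live):
    """Hypothetical helper mirroring how the flags passed to
    detachDeviceFlags() are built: ask for a persistent-config
    change only when the domain is believed to be persistent."""
    flags = VIR_DOMAIN_AFFECT_CONFIG if persistent else 0
    flags |= VIR_DOMAIN_AFFECT_LIVE if live else 0
    return flags


# If a transient domain is treated as persistent, the CONFIG bit is
# set and libvirt rejects the call with "transient domains do not
# have any persistent config".
print(detach_flags(persistent=True, live=True))   # 3: CONFIG | LIVE
print(detach_flags(persistent=False, live=True))  # 1: LIVE only
```

So the failure mode above is consistent with Nova treating the domain as persistent while libvirt on that host still sees it as transient.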