"nova host-servers-migrate <host>" Migrate instances on free compute in error status

Bug #1274183 reported by Egor Kotko
This bug affects 2 people
Affects: OpenStack Compute (nova)
Status: Fix Released
Importance: High
Assigned to: Unassigned
Milestone: 2014.2

Bug Description

{"build_id": "2014-01-29_13-30-05", "ostf_sha": "338ddf840c229918d1df8c6597588b853d02de4c", "build_number": "67", "nailgun_sha": "3463912a986465133058a24c615c3548cef53cac", "fuelmain_sha": "7d8768f2ac7e1e54d16c135e4ebd64722e49179e", "astute_sha": "200f68381327d955428c371582c03a97bfec3154", "release": "4.1", "fuellib_sha": "73e74f0c449ad86b3da922c8bd5eb333eac94489"}

"nova host-servers-migrate <host>" Migrate instances on free compute in error status

Steps to reproduce:
1. Get ISO #67.
2. Cluster configuration: Ubuntu, simple mode, 1 controller, 2 computes (with Ceph), Ceph for images, Neutron GRE.
3. Create an instance and check which compute node it is running on.
4. On the controller, execute "nova --debug host-servers-migrate <host>", where <host> is the name of the compute node hosting the created instance; see the console sketch below.
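A hedged console sketch of steps 3 and 4 (the image, flavor, instance, and host names are placeholders, not taken from this report):

# On the controller: boot an instance and note which compute host it lands on.
nova boot --image cirros --flavor m1.tiny test-vm
nova show test-vm | grep OS-EXT-SRV-ATTR:host
# Migrate all instances off that host (substitute the host found above).
nova --debug host-servers-migrate node-2
# Check the resulting instance status.
nova list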

Expected result:
The instance migrates and ends up in Active status.

Actual result:
After the migration the instance is in Error state.

The compute node to which the instance should be migrated contains the following log entries:

<180>Jan 29 16:12:16 node-3 nova-nova.compute.manager AUDIT: Migrating
<182>Jan 29 16:12:18 node-3 nova-nova.virt.libvirt.driver INFO: Creating image
<179>Jan 29 16:12:19 node-3 nova-nova.virt.libvirt.imagebackend ERROR: error opening rbd image /var/lib/nova/instances/_base/4e3fb6726c8ee72072724a16179d5e400c712864
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 467, in __init__
    read_only=read_only)
  File "/usr/lib/python2.7/dist-packages/rbd.py", line 351, in __init__
    raise make_ex(ret, 'error opening image %s at snapshot %s' % (name, snapshot))
ImageNotFound: error opening image /var/lib/nova/instances/_base/4e3fb6726c8ee72072724a16179d5e400c712864 at snapshot None
<179>Jan 29 16:12:19 node-3 nova-nova.compute.manager ERROR: Setting instance vm_state to ERROR
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3153, in finish_resize
    disk_info, image)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3121, in _finish_resize
    block_device_info, power_on)
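The ImageNotFound above is raised by the Ceph python-rbd bindings; note that the traceback shows the local _base path being passed to rbd as the image name. A minimal sketch of the failing call, assuming a reachable Ceph cluster and using 'compute' as a placeholder pool name:

import rados
import rbd

# Connect to the Ceph cluster using the standard config file.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('compute')  # placeholder pool name
try:
    # Nova's Rbd imagebackend opens the backing image roughly like this;
    # when no RBD image with the given name exists in the pool, the
    # bindings raise rbd.ImageNotFound, as in the traceback above.
    rbd.Image(ioctx,
              '/var/lib/nova/instances/_base/4e3fb6726c8ee72072724a16179d5e400c712864',
              snapshot=None, read_only=True)
except rbd.ImageNotFound:
    print('no RBD image with that name exists in the pool')
finally:
    ioctx.close()
    cluster.shutdown()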

Tags: compute rbd
Egor Kotko (ykotko)
tags: added: compute
removed: nova-manage
melanie witt (melwitt)
Changed in nova:
importance: Undecided → High
status: New → Confirmed
Matt Riedemann (mriedem)
tags: added: rbd
Revision history for this message
Xav Paice (xavpaice) wrote :

This can also be reproduced by:

* Havana 2013.2.2, 3 ceph/compute nodes, 1 controller. Glance uses RBD for the image store, as does Cinder.
* no shared storage under /var/lib/nova/instances
* create an instance from an image
* nova migrate <instance>
* the instance ends up in 'error' state, with the same exception as noted in the original report.

Note that nova live-migration works fine, but we noted the problem when attempting to resize an instance.

The image named in the exception under /var/lib/nova/instances/_base/ does not appear to be the UUID of any image listed by nova image-list.
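A hedged console sketch of this alternate reproduction (the instance and image names are placeholders):

nova boot --image <image> --flavor m1.small test-vm
nova migrate test-vm
nova show test-vm | grep status    # instance ends up in ERROR
nova image-list                    # nothing matches the _base filename

As for the last point, a direct match is not expected even apart from the bug: the filename under _base is Nova's image-cache key (the SHA-1 hash of the Glance image UUID), not the UUID itself.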

Revision history for this message
Dmitry Borodaenko (angdraug) wrote :

I can't reproduce this bug on the Icehouse version of Nova patched with this patch series:
https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse

I suspect that it's yet another bug fixed by:
https://review.openstack.org/91722

Revision history for this message
Davanum Srinivas (DIMS) (dims-v) wrote :

Per comment #3, this no longer seems to be a problem. Please reopen if necessary.

Changed in nova:
status: Confirmed → Fix Committed
Thierry Carrez (ttx)
Changed in nova:
milestone: none → juno-rc1
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in nova:
milestone: juno-rc1 → 2014.2