xenapi: migration error - original exception not logged

Bug #1234604 reported by Mate Lakat
Affects: OpenStack Compute (nova)
Status: Fix Released
Importance: Low
Assigned to: John Garbutt
Milestone: 2014.1

Bug Description

If a host is not set up properly (for example, the /images directory was not created, or SSH was not configured), the following tempest test fails:

nosetests tempest.api.compute.servers.test_server_actions:ServerActionsTestXML.test_resize_server_confirm

This is the log entry from n-cpu:
[req-ee3f693d-2d80-4f7a-927f-19ae6264fd7b ServerActionsTestXML1666436677-user ServerActionsTestXML1666436677-tenant] Exception during message handling
Traceback (most recent call last):
  File "/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
    **args)
  File "/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
    result = getattr(proxyobj, method)(ctxt, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 353, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/exception.py", line 90, in wrapped
    payload)
  File "/opt/stack/nova/nova/exception.py", line 73, in wrapped
    return f(self, context, *args, **kw)
  File "/opt/stack/nova/nova/compute/manager.py", line 243, in decorated_function
    pass
  File "/opt/stack/nova/nova/compute/manager.py", line 229, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 294, in decorated_function
    function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 271, in decorated_function
    e, sys.exc_info())
  File "/opt/stack/nova/nova/compute/manager.py", line 258, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 3021, in resize_instance
    block_device_info)
  File "/opt/stack/nova/nova/virt/xenapi/driver.py", line 284, in migrate_disk_and_power_off
    dest, instance_type, block_device_info)
  File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 939, in migrate_disk_and_power_off
    context, instance, dest, vm_ref, sr_path)
  File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 882, in _migrate_disk_resizing_up
    self._migrate_vhd(instance, vdi_uuid, dest, sr_path, seq_num)
  File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 770, in _migrate_vhd
    raise exception.MigrationError(reason=msg)
MigrationError: Migration error: Failed to transfer vhd to new host

It would be good to see what the actual error message was.
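
A minimal sketch of the kind of fix this asks for: log the underlying exception (with its traceback) before re-raising the generic MigrationError. The function and argument names below are illustrative stand-ins, not nova's actual `_migrate_vhd` signature:

```python
import logging

LOG = logging.getLogger(__name__)


class MigrationError(Exception):
    """Stand-in for nova.exception.MigrationError."""


def migrate_vhd(transfer_fn, vdi_uuid, dest):
    """Transfer a VHD, preserving the root cause in the log.

    transfer_fn is a stand-in for the actual rsync/plugin call; any
    exception it raises is logged with its traceback via
    LOG.exception(), so the operator sees *why* the transfer failed
    rather than only "Failed to transfer vhd to new host".
    """
    try:
        transfer_fn(vdi_uuid, dest)
    except Exception:
        # logging.exception records the active traceback at ERROR level
        LOG.exception("Failed to transfer vhd %s to %s", vdi_uuid, dest)
        raise MigrationError("Failed to transfer vhd to new host")
```

The caller still sees the same MigrationError as before, so existing error handling is unaffected; only the log gains the original cause.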

Tags: xenserver
Revision history for this message
Mate Lakat (mate-lakat) wrote :

This is what I see in compute log:

/usr/bin/rsync -av --progress -e ssh -o StrictHostKeyChecking=no /var/run/sr-mount/48c0ce72-d125-403a-40ae-dbfda62ca9fa/tmp44w5G5/ root@169.254.0.1:/images/instance0194ea30-1bb8-4376-861e-782782f9bba5/
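
Nova runs this rsync via its own execute helper; as an illustration (not nova's actual code), capturing the child process's stderr is what would surface the real failure reason, e.g. a missing /images directory or an SSH permission error:

```python
import subprocess


def run_with_captured_stderr(cmd):
    """Run a command, raising with its stderr on failure.

    Capturing stderr lets the caller log the real reason the transfer
    failed (e.g. "Permission denied") instead of raising only a
    generic migration error.
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(
            "%s failed (rc=%d): %s"
            % (cmd[0], result.returncode, result.stderr.strip()))
    return result.stdout
```

The raised message can then be fed into the MigrationError reason, so the compute log shows rsync's own diagnostic.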

Revision history for this message
John Garbutt (johngarbutt) wrote :

Might be good to say what log level you are using?

Changed in nova:
importance: Undecided → Low
status: New → Triaged
Revision history for this message
John Garbutt (johngarbutt) wrote :
Changed in nova:
status: Triaged → Fix Committed
assignee: nobody → John Garbutt (johngarbutt)
Changed in nova:
milestone: none → icehouse-rc1
Thierry Carrez (ttx)
Changed in nova:
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in nova:
milestone: icehouse-rc1 → 2014.1