Tempest resize test fails with kvm/libvirt

Bug #940619 reported by David Kranz
This bug affects 2 people
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

The following test fails because the VM gets stuck in the RESIZE state. Possibly related, an error shows up in the nova-compute log. This is with current devstack running on real hardware. I know this did not work in Diablo, but the Essex blueprint says the code has landed.

    def test_resize_server_confirm(self):
        """
        The server's RAM and disk space should be modified to that of
        the provided flavor
        """

        resp, server = self.client.resize(self.server_id, self.flavor_ref_alt)
        self.assertEqual(202, resp.status)
        self.client.wait_for_server_status(self.server_id, 'VERIFY_RESIZE')

        self.client.confirm_resize(self.server_id)
        self.client.wait_for_server_status(self.server_id, 'ACTIVE')

        resp, server = self.client.get_server(self.server_id)
        self.assertEqual(self.flavor_ref_alt, server['flavor']['id'])
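
For context, wait_for_server_status is essentially a poll-until-timeout loop, so an instance wedged in RESIZE surfaces here as a timeout rather than an assertion failure. A minimal sketch of that pattern (the timeout and interval values are illustrative, not Tempest's actual defaults):

    import time

    def wait_for_server_status(client, server_id, status,
                               timeout=400, interval=3):
        # Poll the server until it reaches the desired status; a VM
        # stuck in RESIZE will simply exhaust the timeout.
        start = time.time()
        while time.time() - start < timeout:
            resp, server = client.get_server(server_id)
            if server['status'] == status:
                return
            time.sleep(interval)
        raise AssertionError('Server %s never reached status %s' %
                             (server_id, status))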

2012-02-24 15:49:41 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rescued_instances from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:49:41 DEBUG nova.manager [-] Skipping ComputeManager._sync_power_states, 1 ticks left until next run from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:147
2012-02-24 15:49:41 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:49:41 DEBUG nova.manager [-] Running periodic task ComputeManager.update_available_resource from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:49:51 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000003/disk from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:49:52 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000003/disk from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:49:52 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000001/disk from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:49:52 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000001/disk from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:49:52 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000001/disk.local from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:49:52 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000001/disk.local from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:49:53 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000002/disk from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:49:53 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000002/disk from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:49:53 INFO nova.virt.libvirt.connection [-] Compute_service record updated for xg08
2012-02-24 15:49:53 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rebooting_instances from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:49:53 DEBUG nova.manager [-] Skipping ComputeManager._cleanup_running_deleted_instances, 30 ticks left until next run from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:147
2012-02-24 15:49:53 DEBUG nova.manager [-] Running periodic task ComputeManager._heal_instance_info_cache from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:49:53 DEBUG nova.rpc.common [-] Making asynchronous call on network ... from (pid=3967) multicall /opt/stack/nova/nova/rpc/amqp.py:319
2012-02-24 15:49:53 DEBUG nova.rpc.common [-] MSG_ID is a8f54c24f0ab4cfe8073f9cb69d4065e from (pid=3967) multicall /opt/stack/nova/nova/rpc/amqp.py:322
2012-02-24 15:49:53 DEBUG nova.compute.manager [-] Updated the info_cache for instance 541b1bec-7bc3-4507-a436-15b361fde8e5 from (pid=3967) _heal_instance_info_cache /opt/stack/nova/nova/compute/manager.py:2158
2012-02-24 15:49:53 DEBUG nova.manager [-] Skipping ComputeManager._run_image_cache_manager_pass, 3569 ticks left until next run from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:147
2012-02-24 15:49:53 DEBUG nova.manager [-] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:49:53 DEBUG nova.compute.manager [-] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=3967) _reclaim_queued_deletes /opt/stack/nova/nova/compute/manager.py:2275
2012-02-24 15:49:53 DEBUG nova.manager [-] Running periodic task ComputeManager._report_driver_status from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:49:53 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:50:53 DEBUG nova.manager [-] Running periodic task ComputeManager._publish_service_capabilities from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:50:53 DEBUG nova.manager [-] Notifying Schedulers of capabilities ... from (pid=3967) _publish_service_capabilities /opt/stack/nova/nova/manager.py:203
2012-02-24 15:50:53 DEBUG nova.rpc.common [-] Making asynchronous fanout cast... from (pid=3967) fanout_cast /opt/stack/nova/nova/rpc/amqp.py:352
2012-02-24 15:50:53 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rescued_instances from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:50:53 DEBUG nova.manager [-] Running periodic task ComputeManager._sync_power_states from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:50:55 ERROR nova.manager [-] Error during ComputeManager._sync_power_states: 'dict' object has no attribute 'state'
(nova.manager): TRACE: Traceback (most recent call last):
(nova.manager): TRACE: File "/opt/stack/nova/nova/manager.py", line 155, in periodic_tasks
(nova.manager): TRACE: task(self, context)
(nova.manager): TRACE: File "/opt/stack/nova/nova/compute/manager.py", line 2250, in _sync_power_states
(nova.manager): TRACE: vm_power_state = vm_instance.state
(nova.manager): TRACE: AttributeError: 'dict' object has no attribute 'state'
(nova.manager): TRACE:
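
The traceback shows _sync_power_states expecting an object with a .state attribute while the libvirt driver apparently handed back a plain dict. A hedged sketch of the mismatch (get_power_state is a hypothetical helper for illustration, not the actual fix that landed in nova):

    def get_power_state(vm_instance):
        # The periodic task did the equivalent of `vm_instance.state`;
        # when the driver returns a plain dict, that raises the
        # AttributeError seen above. Accepting both shapes avoids it.
        if isinstance(vm_instance, dict):
            return vm_instance['state']
        return vm_instance.state
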
2012-02-24 15:50:55 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:50:55 DEBUG nova.manager [-] Running periodic task ComputeManager.update_available_resource from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:51:05 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000003/disk from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:51:05 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000003/disk from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:51:06 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000001/disk from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:51:06 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000001/disk from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:51:06 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000001/disk.local from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:51:06 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000001/disk.local from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:51:07 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000002/disk from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:51:07 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /opt/stack/nova/nova/..//instances/instance-00000002/disk from (pid=3967) execute /opt/stack/nova/nova/utils.py:209
2012-02-24 15:51:07 INFO nova.virt.libvirt.connection [-] Compute_service record updated for xg08
2012-02-24 15:51:07 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rebooting_instances from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:51:07 DEBUG nova.manager [-] Skipping ComputeManager._cleanup_running_deleted_instances, 29 ticks left until next run from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:147
2012-02-24 15:51:07 DEBUG nova.manager [-] Running periodic task ComputeManager._heal_instance_info_cache from (pid=3967) periodic_tasks /opt/stack/nova/nova/manager.py:152
2012-02-24 15:51:07 DEBUG nova.rpc.common [-] Making asynchronous call on network ... from (pid=3967) multicall /opt/stack/nova/nova/rpc/amqp.py:319
2012-02-24 15:51:07 DEBUG nova.rpc.common [-] MSG_ID is 032f86856ced42b1af189a5ed8a04e1c from (pid=3967) multicall /opt/stack/nova/nova/rpc/amqp.py:322
2012-02-24 15:51:07 DEBUG nova.compute.manager [-] Updated the info_cache for instance 81633553-b1b3-4366-b30b-4e833f15a4a7 from (pid=3967) _heal_i

David Kranz (david-kranz) wrote:

It turns out that this is a problem with the test. According to Vish, libvirt resize was fixed in Essex, but it requires setting allow_resize_to_same_host=True when running on a single node, or, in a multi-node setup, the ability for the nova user to scp between machines.
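
On a single-node devstack, that translates to setting the flag in nova.conf and restarting the nova services (a sketch of the relevant fragment):

    [DEFAULT]
    # Let the scheduler place the resized instance back on the same
    # compute host -- required when there is only one compute node.
    allow_resize_to_same_host = True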

Changed in nova:
status: New → Invalid