Activity log for bug #1357599

Date Who What changed Old value New value Message
2014-08-16 00:34:03 Aaron Rosen bug added bug
2014-08-16 00:34:41 Aaron Rosen nova: assignee Aaron Rosen (arosen)
2014-08-16 00:34:50 Aaron Rosen nova: importance Undecided High
2014-08-22 18:07:17 OpenStack Infra nova: status New In Progress
2014-09-04 22:18:43 Joe Gordon nova: milestone juno-3
2014-09-05 09:31:31 Thierry Carrez nova: milestone juno-3 juno-rc1
2014-09-05 19:41:42 OpenStack Infra nova: status In Progress Fix Committed
2014-09-06 17:47:53 OpenStack Infra tags in-stable-icehouse
2014-09-11 21:38:19 Sean Dague nova: status Fix Committed Confirmed
2014-09-11 21:38:27 Sean Dague nova: status Confirmed Fix Committed
2014-09-17 10:48:21 luozhipeng description (old value and new value shown once below; the edit only reflowed the whitespace around the two code calls)

    The tempest test that does a resize on the instance from time to time fails with a neutron virtual interface timeout error.

    The reason this occurs is that resize_instance() calls:

        disk_info = self.driver.migrate_disk_and_power_off(
                context, instance, migration.dest_host,
                instance_type, network_info,
                block_device_info)

    which calls destroy(), which unplugs the vifs. Then

        self.driver.finish_migration(context, migration, instance,
                                     disk_info,
                                     network_info,
                                     image, resize_instance,
                                     block_device_info, power_on)

    is called, which expects a vif_plugged event. Since this happens on the same host, the neutron agent is unable to detect that the vif was unplugged and then plugged, because it happens so fast. To fix this, we should check whether we are migrating to the same host; if we are, we should not expect to get an event.

    8d1] Setting instance vm_state to ERROR
    2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: dca468e4-d26f-4ae2-a522-7d02ef7c98d1] Traceback (most recent call last):
    2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File "/opt/stack/new/nova/nova/compute/manager.py", line 3714, in finish_resize
    2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: dca468e4-d26f-4ae2-a522-7d02ef7c98d1]     disk_info, image)
    2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File "/opt/stack/new/nova/nova/compute/manager.py", line 3682, in _finish_resize
    2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: dca468e4-d26f-4ae2-a522-7d02ef7c98d1]     old_instance_type, sys_meta)
    2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
    2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: dca468e4-d26f-4ae2-a522-7d02ef7c98d1]     six.reraise(self.type_, self.value, self.tb)
    2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File "/opt/stack/new/nova/nova/compute/manager.py", line 3677, in _finish_resize
    2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: dca468e4-d26f-4ae2-a522-7d02ef7c98d1]     block_device_info, power_on)
    2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5302, in finish_migration
    2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: dca468e4-d26f-4ae2-a522-7d02ef7c98d1]     block_device_info, power_on)
    2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3792, in _create_domain_and_network
    2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: dca468e4-d26f-4ae2-a522-7d02ef7c98d1]     raise exception.VirtualInterfaceCreateException()
    2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: dca468e4-d26f-4ae2-a522-7d02ef7c98d1] VirtualInterfaceCreateException: Virtual Interface creation failed
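    The shape of the proposed check is easy to illustrate. The sketch below is hypothetical (the helper name events_to_wait_for and the dict-based VIF objects are assumptions for illustration, not Nova's actual API or the committed patch): skip waiting for network-vif-plugged events when the migration source and destination host are the same.

        # Hypothetical sketch of the same-host check proposed in the bug
        # description; names and data shapes are illustrative only.

        def events_to_wait_for(source_host, dest_host, network_info):
            """Return the (event_name, vif_id) pairs to wait for after a resize.

            network_info is an iterable of dicts with an 'id' key, standing in
            for Nova's VIF objects.
            """
            if source_host == dest_host:
                # Same-host resize: destroy() unplugs the vifs and
                # finish_migration re-plugs them almost immediately, so the
                # neutron agent never notices the unplug and never sends
                # network-vif-plugged. Waiting here is what ends in
                # VirtualInterfaceCreateException.
                return []
            return [('network-vif-plugged', vif['id']) for vif in network_info]

        if __name__ == '__main__':
            vifs = [{'id': 'port-1'}, {'id': 'port-2'}]
            # Cross-host resize: wait for the agent to wire both ports on the
            # destination host.
            print(events_to_wait_for('compute-1', 'compute-2', vifs))  # two events
            # Same-host resize: skip waiting and avoid the vif-plug timeout.
            print(events_to_wait_for('compute-1', 'compute-1', vifs))  # []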
2014-09-29 19:59:10 Adam Gandelman nominated for series nova/icehouse
2014-09-29 19:59:11 Adam Gandelman bug task added nova/icehouse
2014-09-29 20:24:48 Adam Gandelman nova/icehouse: importance Undecided High
2014-09-29 20:24:48 Adam Gandelman nova/icehouse: status New Fix Committed
2014-09-29 20:24:48 Adam Gandelman nova/icehouse: milestone 2014.1.3
2014-09-30 15:55:44 Nobuto Murata bug added subscriber Nobuto MURATA
2014-10-01 07:37:21 Thierry Carrez nova: status Fix Committed Fix Released
2014-10-01 21:28:21 Adam Gandelman nova/icehouse: assignee Aaron Rosen (arosen)
2014-10-02 23:47:14 Adam Gandelman nova/icehouse: status Fix Committed Fix Released
2014-10-16 08:54:58 Thierry Carrez nova: milestone juno-rc1 2014.2