Activity log for bug #1964940

Date Who What changed Old value New value Message
2022-03-15 12:28:36 chandan kumar bug added bug
2022-03-15 13:30:50 chandan kumar description

Old value:

On Fs001 CentOS Stream 9 wallaby, multiple compute server tempest tests are failing with the following error [1]:

```
{1} tempest.api.compute.images.test_images.ImagesTestJSON.test_create_image_from_paused_server [335.060967s] ... FAILED

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "/usr/lib/python3.9/site-packages/tempest/api/compute/images/test_images.py", line 99, in test_create_image_from_paused_server
        server = self.create_test_server(wait_until='ACTIVE')
      File "/usr/lib/python3.9/site-packages/tempest/api/compute/base.py", line 270, in create_test_server
        body, servers = compute.create_test_server(
      File "/usr/lib/python3.9/site-packages/tempest/common/compute.py", line 267, in create_test_server
        LOG.exception('Server %s failed to delete in time',
      File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
        self.force_reraise()
      File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
        raise self.value
      File "/usr/lib/python3.9/site-packages/tempest/common/compute.py", line 237, in create_test_server
        waiters.wait_for_server_status(
      File "/usr/lib/python3.9/site-packages/tempest/common/waiters.py", line 100, in wait_for_server_status
        raise lib_exc.TimeoutException(message)
    tempest.lib.exceptions.TimeoutException: Request timed out
    Details: (ImagesTestJSON:test_create_image_from_paused_server) Server 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1 failed to reach ACTIVE status and task state "None" within the required time (300 s). Server boot request ID: req-4930f047-7f5f-4d08-9ebb-8ac99b29ad7b. Current status: BUILD. Current task state: spawning.
```

Below is the list of other tempest tests failing on the same job [2]:

```
tempest.api.compute.images.test_images.ImagesTestJSON.test_create_image_from_paused_server[id-71bcb732-0261-11e7-9086-fa163e4fa634]
tempest.api.compute.admin.test_volume.AttachSCSIVolumeTestJSON.test_attach_scsi_disk_with_config_drive[id-777e468f-17ca-4da4-b93d-b7dbf56c0494]
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_attached_volume[id-d0f3f0d6-d9b6-4a32-8da4-23015dcab23c,volume]
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesV270Test.test_create_get_list_interfaces[id-2853f095-8277-4067-92bd-9f10bd4f8e0c,network]
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_shelved_state[id-bb0cb402-09dd-4947-b6e5-5e7e1cfa61ad]
setUpClass (tempest.api.compute.images.test_images_oneserver_negative.ImagesOneServerNegativeTestJSON)
tempest.api.compute.servers.test_device_tagging.TaggedBootDevicesTest_v242.test_tagged_boot_devices[id-a2e65a6c-66f1-4442-aaa8-498c31778d96,image,network,slow,volume]
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_suspended_state[id-1f82ebd3-8253-4f4e-b93f-de9b7df56d8b]
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesTestJSON.test_create_list_show_delete_interfaces_by_network_port[id-73fe8f02-590d-4bf1-b184-e9ca81065051,network]
setUpClass (tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSONUnderV235)
```

Here is the traceback from the nova-compute logs [3]:

```
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [req-4930f047-7f5f-4d08-9ebb-8ac99b29ad7b d5ea6c724785473b8ea1104d70fb0d14 64c7d31d84284a28bc9aaa4eaad2b9fb - default default] [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] Instance failed to spawn: nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] Traceback (most recent call last):
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7231, in _create_guest_with_network
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     guest = self._create_guest(
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib64/python3.9/contextlib.py", line 126, in __exit__
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     next(self.gen)
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 479, in wait_for_instance_event
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     actual_event = event.wait()
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/eventlet/event.py", line 125, in wait
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     result = hub.switch()
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/eventlet/hubs/hub.py", line 313, in switch
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     return self.greenlet.switch()
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] eventlet.timeout.Timeout: 300 seconds
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] During handling of the above exception, another exception occurred:
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] Traceback (most recent call last):
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2640, in _build_resources
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     yield resources
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2409, in _build_and_run_instance
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     self.driver.spawn(context, instance, image_meta,
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 4193, in spawn
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     self._create_guest_with_network(
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7257, in _create_guest_with_network
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     raise exception.VirtualInterfaceCreateException()
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]
```

This job, https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-centos-9-ovb-3ctlr_1comp-featureset001-wallaby, has been broken since 13th Mar, 2022, and the earlier https://bugs.launchpad.net/tripleo/+bug/1960310 is also seen on it. The tempest test failures differ in each run, so this bug is filed from the last run for debugging.

Logs:
[1]. https://logserver.rdoproject.org/17/40517/1/check/periodic-tripleo-ci-centos-9-ovb-3ctlr_1comp-featureset001-wallaby/94e16ac/logs/undercloud/var/log/tempest/tempest_run.log.txt.gz
[2]. https://logserver.rdoproject.org/17/40517/1/check/periodic-tripleo-ci-centos-9-ovb-3ctlr_1comp-featureset001-wallaby/94e16ac/logs/undercloud/var/log/tempest/failing_tests.log.txt.gz
[3]. https://logserver.rdoproject.org/17/40517/1/check/periodic-tripleo-ci-centos-9-ovb-3ctlr_1comp-featureset001-wallaby/94e16ac/logs/overcloud-novacompute-0/var/log/containers/nova/nova-compute.log.1.gz

New value:

On Fs001 CentOS Stream 9 wallaby, multiple compute server tempest tests are failing with the following error [1][2]:

```
{1} tempest.api.compute.images.test_images.ImagesTestJSON.test_create_image_from_paused_server [335.060967s] ... FAILED

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "/usr/lib/python3.9/site-packages/tempest/api/compute/images/test_images.py", line 99, in test_create_image_from_paused_server
        server = self.create_test_server(wait_until='ACTIVE')
      File "/usr/lib/python3.9/site-packages/tempest/api/compute/base.py", line 270, in create_test_server
        body, servers = compute.create_test_server(
      File "/usr/lib/python3.9/site-packages/tempest/common/compute.py", line 267, in create_test_server
        LOG.exception('Server %s failed to delete in time',
      File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
        self.force_reraise()
      File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
        raise self.value
      File "/usr/lib/python3.9/site-packages/tempest/common/compute.py", line 237, in create_test_server
        waiters.wait_for_server_status(
      File "/usr/lib/python3.9/site-packages/tempest/common/waiters.py", line 100, in wait_for_server_status
        raise lib_exc.TimeoutException(message)
    tempest.lib.exceptions.TimeoutException: Request timed out
    Details: (ImagesTestJSON:test_create_image_from_paused_server) Server 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1 failed to reach ACTIVE status and task state "None" within the required time (300 s). Server boot request ID: req-4930f047-7f5f-4d08-9ebb-8ac99b29ad7b. Current status: BUILD. Current task state: spawning.
```

Below is the list of other tempest tests failing on the same job [2]:

```
tempest.api.compute.images.test_images.ImagesTestJSON.test_create_image_from_paused_server[id-71bcb732-0261-11e7-9086-fa163e4fa634]
tempest.api.compute.admin.test_volume.AttachSCSIVolumeTestJSON.test_attach_scsi_disk_with_config_drive[id-777e468f-17ca-4da4-b93d-b7dbf56c0494]
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_attached_volume[id-d0f3f0d6-d9b6-4a32-8da4-23015dcab23c,volume]
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesV270Test.test_create_get_list_interfaces[id-2853f095-8277-4067-92bd-9f10bd4f8e0c,network]
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_shelved_state[id-bb0cb402-09dd-4947-b6e5-5e7e1cfa61ad]
setUpClass (tempest.api.compute.images.test_images_oneserver_negative.ImagesOneServerNegativeTestJSON)
tempest.api.compute.servers.test_device_tagging.TaggedBootDevicesTest_v242.test_tagged_boot_devices[id-a2e65a6c-66f1-4442-aaa8-498c31778d96,image,network,slow,volume]
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_suspended_state[id-1f82ebd3-8253-4f4e-b93f-de9b7df56d8b]
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesTestJSON.test_create_list_show_delete_interfaces_by_network_port[id-73fe8f02-590d-4bf1-b184-e9ca81065051,network]
setUpClass (tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSONUnderV235)
```

Here is the traceback from the nova-compute logs [4]:

```
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [req-4930f047-7f5f-4d08-9ebb-8ac99b29ad7b d5ea6c724785473b8ea1104d70fb0d14 64c7d31d84284a28bc9aaa4eaad2b9fb - default default] [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] Instance failed to spawn: nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] Traceback (most recent call last):
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7231, in _create_guest_with_network
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     guest = self._create_guest(
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib64/python3.9/contextlib.py", line 126, in __exit__
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     next(self.gen)
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 479, in wait_for_instance_event
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     actual_event = event.wait()
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/eventlet/event.py", line 125, in wait
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     result = hub.switch()
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/eventlet/hubs/hub.py", line 313, in switch
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     return self.greenlet.switch()
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] eventlet.timeout.Timeout: 300 seconds
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] During handling of the above exception, another exception occurred:
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] Traceback (most recent call last):
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2640, in _build_resources
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     yield resources
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2409, in _build_and_run_instance
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     self.driver.spawn(context, instance, image_meta,
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 4193, in spawn
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     self._create_guest_with_network(
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7257, in _create_guest_with_network
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     raise exception.VirtualInterfaceCreateException()
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]
```

This job, https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-centos-9-ovb-3ctlr_1comp-featureset001-wallaby, has been broken since 13th Mar, 2022, and the earlier https://bugs.launchpad.net/tripleo/+bug/1960310 is also seen on it. Since two runs now show the same test failures, this bug is logged for further investigation.

Logs:
[1]. https://logserver.rdoproject.org/17/40517/1/check/periodic-tripleo-ci-centos-9-ovb-3ctlr_1comp-featureset001-wallaby/94e16ac/logs/undercloud/var/log/tempest/tempest_run.log.txt.gz
[2]. https://logserver.rdoproject.org/40/40440/1/check/periodic-tripleo-ci-centos-9-ovb-3ctlr_1comp-featureset001-wallaby/6ce8796/logs/undercloud/var/log/tempest/failing_tests.log.txt.gz
[3]. https://logserver.rdoproject.org/17/40517/1/check/periodic-tripleo-ci-centos-9-ovb-3ctlr_1comp-featureset001-wallaby/94e16ac/logs/undercloud/var/log/tempest/failing_tests.log.txt.gz
[4]. https://logserver.rdoproject.org/17/40517/1/check/periodic-tripleo-ci-centos-9-ovb-3ctlr_1comp-featureset001-wallaby/94e16ac/logs/overcloud-novacompute-0/var/log/containers/nova/nova-compute.log.1.gz
2022-03-15 17:50:50 Ronelle Landy tripleo: importance High Critical
2022-03-15 17:51:00 Ronelle Landy tripleo: milestone yoga-2 yoga-3
2022-03-17 14:39:02 yatin bug added subscriber yatin
2022-04-11 13:43:08 OpenStack Infra tripleo: status Triaged In Progress
2022-05-09 14:57:47 yatin bug watch added https://bugzilla.redhat.com/show_bug.cgi?id=2081631
2022-05-27 09:00:52 Lajos Katona bug task added neutron
2022-05-27 09:01:13 Lajos Katona neutron: assignee yatin (yatinkarel)
2022-05-27 09:01:22 Lajos Katona neutron: status New In Progress
2022-05-27 09:01:26 Lajos Katona neutron: importance Undecided Critical
2022-05-27 10:53:59 yatin bug watch added https://bugzilla.redhat.com/show_bug.cgi?id=1974898
2022-05-29 12:22:04 OpenStack Infra neutron: status In Progress Fix Released
2022-05-29 12:22:07 OpenStack Infra bug watch added https://bugzilla.redhat.com/show_bug.cgi?id=2090604
2022-05-29 12:22:07 OpenStack Infra bug watch added https://bugzilla.redhat.com/show_bug.cgi?id=2037433
2022-05-30 13:46:12 OpenStack Infra tags alert promotion-blocker alert in-stable-yoga promotion-blocker
2022-05-30 13:47:13 OpenStack Infra tags alert in-stable-yoga promotion-blocker alert in-stable-xena in-stable-yoga promotion-blocker
2022-05-30 16:54:54 OpenStack Infra tags alert in-stable-xena in-stable-yoga promotion-blocker alert in-stable-wallaby in-stable-xena in-stable-yoga promotion-blocker
2022-06-02 18:25:22 OpenStack Infra tags alert in-stable-wallaby in-stable-xena in-stable-yoga promotion-blocker alert in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga promotion-blocker
2022-06-02 18:38:25 OpenStack Infra tags alert in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga promotion-blocker alert in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga promotion-blocker
2023-02-14 15:21:04 Alan Pevec tripleo: status In Progress Invalid