2022-05-09 10:08:16 |
Soniya Murlidhar Vyas |
description |
periodic-tripleo-ci-centos-9-standalone-full-tempest-api-compute-master is failing on volume tempest tests with the error 'Volume failed to detach from server'.
The failing tests are as follows:
1. tearDownClass (tempest.api.compute.admin.test_volumes_negative.VolumesAdminNegativeTest)
2. tempest.api.compute.admin.test_volumes_negative.VolumesAdminNegativeTest.test_update_attached_volume_with_nonexistent_volume_in_body
3. tearDownClass (tempest.api.volume.test_volumes_extend.VolumesExtendTest)
The tracebacks observed are as follows:
i. ft1.1: tearDownClass (tempest.api.compute.admin.test_volumes_negative.VolumesAdminNegativeTest)testtools.testresult.real._StringException: Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/tempest/test.py", line 220, in tearDownClass
raise value.with_traceback(trace)
File "/usr/lib/python3.9/site-packages/tempest/test.py", line 192, in tearDownClass
teardown()
File "/usr/lib/python3.9/site-packages/tempest/test.py", line 602, in resource_cleanup
raise testtools.MultipleExceptions(*cleanup_errors)
testtools.runtest.MultipleExceptions: ((<class 'tempest.lib.exceptions.BadRequest'>, Bad request
Details: {'code': 400, 'message': 'Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer.'}, <traceback object at 0x7f2de2dd8a80>), (<class 'tempest.lib.exceptions.TimeoutException'>, Request timed out
Details: (VolumesAdminNegativeTest:tearDownClass) Failed to delete volume 35bf5103-4f05-4a4d-b135-3a09729f87f0 within the required time (300 s). Timer started at 1652084904. Timer ended at 1652085204. Waited for 300 s., <traceback object at 0x7f2de2d75980>))
ii. traceback-1: {{{
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/tempest/common/waiters.py", line 317, in wait_for_volume_resource_status
raise lib_exc.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: volume 35bf5103-4f05-4a4d-b135-3a09729f87f0 failed to reach available status (current in-use) within the required time (300 s).
}}}
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/tempest/common/waiters.py", line 385, in wait_for_volume_attachment_remove_from_server
raise lib_exc.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: Volume 35bf5103-4f05-4a4d-b135-3a09729f87f0 failed to detach from server 9fbe0e00-2410-454e-afcf-7985eb82d925 within the required time (300 s) from the compute API perspective
iii. ft1.1: tearDownClass (tempest.api.volume.test_volumes_extend.VolumesExtendTest)testtools.testresult.real._StringException: Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/tempest/test.py", line 220, in tearDownClass
raise value.with_traceback(trace)
File "/usr/lib/python3.9/site-packages/tempest/test.py", line 192, in tearDownClass
teardown()
File "/usr/lib/python3.9/site-packages/tempest/test.py", line 602, in resource_cleanup
raise testtools.MultipleExceptions(*cleanup_errors)
testtools.runtest.MultipleExceptions: ((<class 'tempest.lib.exceptions.TimeoutException'>, Request timed out
Details: (VolumesExtendTest:tearDownClass) Failed to delete volume-snapshot 38317d36-16af-461b-bc11-169f434e372a within the required time (300 s). Timer started at 1652001084. Timer ended at 1652001384. Waited for 300 s., <traceback object at 0x7f892b25b6c0>), (<class 'tempest.lib.exceptions.BadRequest'>, Bad request
Details: {'code': 400, 'message': 'Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer.'}, <traceback object at 0x7f892b254540>))
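The TimeoutException entries above are produced by polling waiters: the test repeatedly fetches the volume's status and gives up after a fixed deadline (300 s here). The following is a minimal illustrative sketch of that pattern, not tempest's actual waiters.py code; the `get_volume` callable and function name are hypothetical stand-ins for a wrapper around the Cinder volume-show API.

```python
import time


class TimeoutException(Exception):
    """Raised when a resource does not reach the desired state in time."""


def wait_for_volume_status(get_volume, volume_id, expected_status,
                           timeout=300, interval=1):
    """Poll a volume until it reaches expected_status or the deadline passes.

    get_volume is assumed to be a callable returning a dict with a
    'status' key (e.g. a thin wrapper around the Cinder volume-show API).
    """
    start = time.time()
    while True:
        status = get_volume(volume_id)['status']
        if status == expected_status:
            return
        if time.time() - start >= timeout:
            # Same shape of message as the failures in the CI logs above.
            raise TimeoutException(
                'volume %s failed to reach %s status (current %s) within '
                'the required time (%s s).'
                % (volume_id, expected_status, status, timeout))
        time.sleep(interval)
```

A volume stuck in 'in-use' (as here, because the guest never released the device) makes every such waiter, and the subsequent volume delete in resource_cleanup, exhaust its full 300 s budget.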
For more details, please refer to the following links:
- https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-api-compute-master/dd92c65/logs/undercloud/var/log/tempest/stestr_results.html.gz
- https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-api-compute-master/6be7788/logs/undercloud/var/log/tempest/stestr_results.html.gz
- https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-api-compute-master/e61bb38/logs/undercloud/var/log/tempest/stestr_results.html.gz |
periodic-tripleo-ci-centos-9-standalone-full-tempest-api-compute-master is failing on volume tempest tests with the error 'Volume failed to detach from server'.
=========================================================================
The following traceback was observed in the nova-compute log:
2022-05-09 08:21:05.188 ERROR oslo_messaging.rpc.server (/var/log/containers/nova/nova-compute.log.1):
  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 200, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 7291, in detach_volume
    do_detach_volume(context, volume_id, instance, attachment_id)
  File "/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py", line 391, in inner
    return f(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 7288, in do_detach_volume
    self._detach_volume(context, bdm, instance,
  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 7239, in _detach_volume
    driver_bdm.detach(context, instance, self.volume_api, self.driver,
  File "/usr/lib/python3.9/site-packages/nova/virt/block_device.py", line 473, in detach
    self._do_detach(context, instance, volume_api, virt_driver,
  File "/usr/lib/python3.9/site-packages/nova/virt/block_device.py", line 394, in _do_detach
    self.driver_detach(context, instance, volume_api, virt_driver)
  File "/usr/lib/python3.9/site-packages/nova/virt/block_device.py", line 326, in driver_detach
    volume_api.roll_detaching(context, volume_id)
  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
    self.force_reraise()
  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
    raise self.value
  File "/usr/lib/python3.9/site-packages/nova/virt/block_device.py", line 314, in driver_detach
    virt_driver.detach_volume(context, connection_info, instance, mp,
  File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2686, in detach_volume
    self._detach_with_retry(
  File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2426, in _detach_with_retry
    self._detach_from_live_with_retry(
  File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2505, in _detach_from_live_with_retry
    raise exception.DeviceDetachFailed(
nova.exception.DeviceDetachFailed: Device detach failed for vdb: Run out of retry while detaching device vdb with device alias virtio-disk1 from instance 9fbe0e00-2410-454e-afcf-7985eb82d925 from the live domain config. Device is still attached to the guest.
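The DeviceDetachFailed at the bottom comes from nova's libvirt driver giving up after repeatedly asking the guest to release vdb: a live detach is asynchronous, so the driver requests the detach, waits, re-checks, and retries until its attempt budget runs out. A simplified sketch of that retry loop follows; the function and callable names are hypothetical, and nova's real implementation additionally waits on libvirt device-removal events rather than polling alone.

```python
import time


class DeviceDetachFailed(Exception):
    """Raised when the guest never releases the device."""


def detach_with_retry(request_detach, device_exists, device,
                      max_attempts=8, interval=5):
    """Repeatedly request a live detach until the guest releases the device.

    request_detach and device_exists are assumed callables wrapping the
    hypervisor API (e.g. libvirt's detachDeviceFlags and a lookup of the
    device in the live domain XML).
    """
    for _attempt in range(max_attempts):
        request_detach(device)   # ask the guest to release the device
        time.sleep(interval)     # give the guest time to respond
        if not device_exists(device):
            return               # detach completed
    # Mirrors the wording of the nova exception in the log above.
    raise DeviceDetachFailed(
        'Run out of retry while detaching device %s from the live domain '
        'config. Device is still attached to the guest.' % device)
```

Because the detach requires guest cooperation, an unresponsive or busy guest exhausts every attempt, the volume stays 'in-use', and the tempest waiters and cleanup deletes upstream then time out in turn.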
For more details, please refer to the following links:
- https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-api-compute-master/dd92c65/logs/undercloud/var/log/tempest/stestr_results.html.gz
- https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-api-compute-master/dd92c65/logs/undercloud/var/log/containers/nova/nova-compute.log.txt.gz
- https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-api-compute-master/dd92c65/logs/undercloud/var/log/containers/nova/nova-conductor.log.txt.gz
- https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-api-compute-master/dd92c65/logs/undercloud/var/log/containers/nova/nova-api.log.1.gz |
|