cinder tempest.api.compute.admin.test_volumes_negative* tempest tests failing randomly in multiple branches.

Bug #1972163 reported by Soniya Murlidhar Vyas
This bug affects 2 people
Affects: tripleo
Status: Fix Released
Importance: Critical
Assigned to: Unassigned

Bug Description

The periodic-tripleo-ci-centos-9-standalone-full-tempest-api-compute-master job is failing on volume tempest tests with the error 'Volume failed to detach from server'.

The following tests are failing:
1. tearDownClass (tempest.api.compute.admin.test_volumes_negative.VolumesAdminNegativeTest)
2. tempest.api.compute.admin.test_volumes_negative.VolumesAdminNegativeTest.test_update_attached_volume_with_nonexistent_volume_in_body
3. tearDownClass (tempest.api.volume.test_volumes_extend.VolumesExtendTest)

The observed traceback is as follows:

i. ft1.1: tearDownClass (tempest.api.compute.admin.test_volumes_negative.VolumesAdminNegativeTest)
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/tempest/test.py", line 220, in tearDownClass
    raise value.with_traceback(trace)
  File "/usr/lib/python3.9/site-packages/tempest/test.py", line 192, in tearDownClass
    teardown()
  File "/usr/lib/python3.9/site-packages/tempest/test.py", line 602, in resource_cleanup
    raise testtools.MultipleExceptions(*cleanup_errors)
testtools.runtest.MultipleExceptions: ((<class 'tempest.lib.exceptions.BadRequest'>, Bad request
Details: {'code': 400, 'message': 'Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer.'}, <traceback object at 0x7f2de2dd8a80>), (<class 'tempest.lib.exceptions.TimeoutException'>, Request timed out
Details: (VolumesAdminNegativeTest:tearDownClass) Failed to delete volume 35bf5103-4f05-4a4d-b135-3a09729f87f0 within the required time (300 s). Timer started at 1652084904. Timer ended at 1652085204. Waited for 300 s., <traceback object at 0x7f2de2d75980>))

=========================================================================
The following traceback was observed in the nova-compute log:

2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 200, in decorated_function
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 7291, in detach_volume
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server do_detach_volume(context, volume_id, instance, attachment_id)
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py", line 391, in inner
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server return f(*args, **kwargs)
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 7288, in do_detach_volume
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server self._detach_volume(context, bdm, instance,
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 7239, in _detach_volume
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server driver_bdm.detach(context, instance, self.volume_api, self.driver,
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/virt/block_device.py", line 473, in detach
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server self._do_detach(context, instance, volume_api, virt_driver,
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/virt/block_device.py", line 394, in _do_detach
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server self.driver_detach(context, instance, volume_api, virt_driver)
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/virt/block_device.py", line 326, in driver_detach
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server volume_api.roll_detaching(context, volume_id)
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server self.force_reraise()
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server raise self.value
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/virt/block_device.py", line 314, in driver_detach
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server virt_driver.detach_volume(context, connection_info, instance, mp,
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2686, in detach_volume
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server self._detach_with_retry(
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2426, in _detach_with_retry
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server self._detach_from_live_with_retry(
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2505, in _detach_from_live_with_retry
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server raise exception.DeviceDetachFailed(
2022-05-09 08:21:05.188 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR oslo_messaging.rpc.server nova.exception.DeviceDetachFailed: Device detach failed for vdb: Run out of retry while detaching device vdb with device alias virtio-disk1 from instance 9fbe0e00-2410-454e-afcf-7985eb82d925 from the live domain config. Device is still attached to the guest.
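
For reference, DeviceDetachFailed is raised when libvirt keeps reporting the device as attached after Nova has exhausted its detach retries. Conceptually the retry loop looks roughly like the sketch below (a simplified illustration, not Nova's actual implementation; the guest object and its detach_device/list_devices methods are assumed here):

~~~
# Simplified, hypothetical sketch of a "detach with retry" loop; the guest
# object and its methods are assumed for illustration and are not Nova's API.
import time


class DeviceDetachFailed(Exception):
    pass


def detach_from_live_with_retry(guest, dev_alias, attempts=8, wait=5):
    """Ask the guest to detach a device and poll until it disappears."""
    for _ in range(attempts):
        guest.detach_device(dev_alias)   # send the detach request to libvirt
        time.sleep(wait)                 # give the guest OS time to release it
        if dev_alias not in guest.list_devices():
            return                       # the device is really gone
    # This is the condition behind the error in the traceback above: the
    # device is still attached to the live domain after all retries.
    raise DeviceDetachFailed(
        'Run out of retry while detaching device %s from the live domain '
        'config. Device is still attached to the guest.' % dev_alias)
~~~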

For more details, please refer to the following links:

- https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-api-compute-master/dd92c65/logs/undercloud/var/log/tempest/stestr_results.html.gz
- https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-api-compute-master/dd92c65/logs/undercloud/var/log/containers/nova/nova-compute.log.txt.gz
- https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-api-compute-master/dd92c65/logs/undercloud/var/log/containers/nova/nova-conductor.log.txt.gz
- https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-api-compute-master/dd92c65/logs/undercloud/var/log/containers/nova/nova-api.log.1.gz

Changed in tripleo:
status: New → Triaged
Changed in tripleo:
milestone: none → yoga-3
tags: added: promotion-blocker
summary: - cs9 standalone full tempest api fails different failures
+ cs9 standalone full tempest api failing for volume tempest tests and
+ with error - 'Volume failed to detach from server'
description: updated
Revision history for this message
Soniya Murlidhar Vyas (svyas) wrote : Re: cs9 standalone full tempest api failing for volume tempest tests and with error - 'Volume failed to detach from server'
Ronelle Landy (rlandy)
Changed in tripleo:
milestone: yoga-3 → zed-1
importance: Undecided → Critical
summary: - cs9 standalone full tempest api failing for volume tempest tests and
- with error - 'Volume failed to detach from server'
+ Master cs9 standalone full tempest api failing for volume tempest tests
+ and with error - 'Volume failed to detach from server'
Revision history for this message
Soniya Murlidhar Vyas (svyas) wrote : Re: Master cs9 standalone full tempest api failing for volume tempest tests and with error - 'Volume failed to detach from server'
Revision history for this message
Soniya Murlidhar Vyas (svyas) wrote :
Revision history for this message
Soniya Murlidhar Vyas (svyas) wrote :

This seems to be a deadlock-related issue. Here are the relevant neutron log lines:

2022-05-11 08:51:16.452 28 INFO neutron.wsgi [req-318ca5f8-cd3a-42f3-8a88-5eef148c3ff1 14ce8738d15c4ae1ab813854ce69012b 64c70cee9d4e427c84d5c95fdcb9b813 - default default] 192.168.24.1 "GET /v2.0/ports?tenant_id=64c70cee9d4e427c84d5c95fdcb9b813&fields=id HTTP/1.1" status: 200 len: 416 time: 0.0916076
2022-05-11 08:51:16.545 28 DEBUG neutron_lib.db.api [req-85840a9d-363d-48ee-b0a3-dba246f303b8 - - - - -] Retry wrapper got retriable exception: (pymysql.err.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')

https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-api-compute-victoria/0e5448c/logs/undercloud/var/log/containers/neutron/server.log.1.gz
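
For context, the "Retry wrapper got retriable exception" line is emitted by the retry machinery that wraps Neutron's database operations: a MySQL deadlock (error 1213) normally just causes the transaction to be retried rather than failing the API request. A minimal sketch of that pattern, assuming oslo.db's wrap_db_retry decorator (the run_in_transaction helper is purely illustrative):

~~~
# Minimal sketch (not actual Neutron code) of the retry pattern behind the
# "Retry wrapper got retriable exception" debug line quoted above.
from oslo_db import api as oslo_db_api


@oslo_db_api.wrap_db_retry(max_retries=10, retry_on_deadlock=True)
def run_in_transaction(session, fn):
    """Run fn(session) in a transaction, retrying it on DBDeadlock."""
    # A pymysql 1213 deadlock is translated into oslo_db's DBDeadlock; the
    # decorator catches it, logs the retriable exception, and re-runs this
    # function after a short back-off instead of failing the request.
    with session.begin():
        return fn(session)
~~~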

Revision history for this message
Miguel Lavalle (minsel) wrote :

TL;DR: In comment #4, Soniya seems to suggest that the test failure is caused by Neutron, which must be the reason Slawek and I were pinged on IRC. However, upon careful analysis of the logs, it is clear that the source of the problem is Cinder.

Now in detail, this is the analysis I conducted:

1) The failure occurs with test tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_attach_detach_volume: https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-api-compute-victoria/0e5448c/logs/undercloud/var/log/tempest/stestr_results.html.gz

2) The test case fails when attaching a volume to a VM:

{3} tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_attach_detach_volume [159.919809s] ... FAILED

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "/usr/lib/python3.6/site-packages/tempest/api/compute/volumes/test_attach_volume.py", line 98, in test_attach_detach_volume
        attachment = self.attach_volume(server, volume)
      File "/usr/lib/python3.6/site-packages/tempest/api/compute/base.py", line 583, in attach_volume
        self.volumes_client, volume['id'], server['id'])
      File "/usr/lib/python3.6/site-packages/tempest/common/waiters.py", line 321, in wait_for_volume_attachment_create
        attachments = client.show_volume(volume_id)['volume']['attachments']
      File "/usr/lib/python3.6/site-packages/tempest/lib/services/volume/v3/volumes_client.py", line 87, in show_volume
        resp, body = self.get(url)
      File "/usr/lib/python3.6/site-packages/tempest/lib/common/rest_client.py", line 314, in get
        return self.request('GET', url, extra_headers, headers)
      File "/usr/lib/python3.6/site-packages/tempest/lib/services/volume/base_client.py", line 40, in request
        method, url, extra_headers, headers, body, chunked)
      File "/usr/lib/python3.6/site-packages/tempest/lib/common/rest_client.py", line 703, in request
        self._error_checker(resp, resp_body)
      File "/usr/lib/python3.6/site-packages/tempest/lib/common/rest_client.py", line 884, in _error_checker
        resp=resp)
    tempest.lib.exceptions.UnexpectedResponseCode: Unexpected response code received
    Details: 503

3) The failure arises as a result of this line's execution in the test case: https://github.com/openstack/tempest/blob/a7bedbde46ae2aec796837a7e69fbf35747f75cb/tempest/api/compute/volumes/test_attach_volume.py#L98. More specifically, the test case is waiting for the volume to be attached here: https://github.com/openstack/tempest/blob/a7bedbde46ae2aec796837a7e69fbf35747f75cb/tempest/common/waiters.py#L326 (a simplified sketch of this polling loop follows the analysis below).

4) Looking at the tempest execution log, we can see the Cinder API returned a 503 (https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-api-compute-victoria/0e5448c/logs/undercloud/var/log/tempest/tempest_run.log.txt.gz):

2022-05-11 09:01:31,841 272664 INFO [tempest.lib.common.rest_client] Request (AttachVolumeTestJSON:test_attach_detach_volume): 503 ...
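
The waiter referenced in 3) is essentially a polling loop against the Cinder volumes API. A simplified sketch of what it does (illustrative only, not the actual tempest code; the volumes_client parameter stands in for tempest's volume client):

~~~
# Simplified, illustrative version of the kind of polling that
# wait_for_volume_attachment_create performs (not the actual tempest code).
import time


def wait_for_attachment(volumes_client, volume_id, server_id,
                        timeout=300, interval=5):
    """Poll the volume until an attachment to server_id shows up."""
    start = time.time()
    while time.time() - start < timeout:
        # Each iteration issues GET /v3/{project_id}/volumes/{volume_id};
        # an unexpected response such as the 503 seen in 2) and 4) surfaces
        # here as an exception and fails the test immediately.
        volume = volumes_client.show_volume(volume_id)['volume']
        if any(a['server_id'] == server_id for a in volume['attachments']):
            return volume['attachments']
        time.sleep(interval)
    raise TimeoutError('volume %s was not attached to server %s within %ss'
                       % (volume_id, server_id, timeout))
~~~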


Revision history for this message
Rajat Dhasmana (whoami-rajat) wrote (last edit):

Hi Miguel,

I agree that the failure is on the Cinder side, but it's not trivial to see what's wrong. c-vol took longer (by about 1 second) than c-api (or oslo_messaging) expected to finish the request, but this shouldn't cause a persistent failure.
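
To illustrate the timing mismatch: the cinder-api side makes a synchronous RPC call to cinder-volume and gives up once the configured RPC timeout expires, even if the volume service completes the work a moment later. A rough sketch of that call path, assuming oslo.messaging's RPCClient (the update_attachment wrapper and the 60-second timeout are illustrative, not Cinder's actual code):

~~~
# Rough, illustrative sketch of a synchronous RPC call that can raise
# MessagingTimeout; this is not Cinder's actual code.
import oslo_messaging
from oslo_config import cfg

transport = oslo_messaging.get_rpc_transport(cfg.CONF)
target = oslo_messaging.Target(topic='cinder-volume')
client = oslo_messaging.RPCClient(transport, target, timeout=60)


def update_attachment(ctxt, attachment_id, connector):
    try:
        # call() blocks until cinder-volume replies or the timeout expires.
        return client.call(ctxt, 'attachment_update',
                           attachment_id=attachment_id, connector=connector)
    except oslo_messaging.MessagingTimeout:
        # In the logs below the reply lands about one second after this
        # timeout fires, so the API request fails even though c-vol reports
        # "attachment_update completed successfully".
        raise
~~~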

The c-api logs show the failure at 09:01:30.321:

2022-05-11 09:01:30.321 12 ERROR cinder.api.v3.attachments [req-68add028-eedf-4cd1-b535-08e92be48dd3 6c1498ede176424f807b6e58d90caa04 910ea7185d3d4bc0b6f7b85e694e1107 - default default] Unable to update the attachment.: oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID 55e1880d11834915ac6be791c7747d63

The c-vol logs show that the attachment_update call finished at 09:01:31.477:

2022-05-11 09:01:31.477 53 INFO cinder.volume.manager [req-68add028-eedf-4cd1-b535-08e92be48dd3 6c1498ede176424f807b6e58d90caa04 910ea7185d3d4bc0b6f7b85e694e1107 - - -] attachment_update completed successfully.

The tempest logs show the failure at 09:01:31,841:

    2022-05-11 09:01:31,841 272664 INFO [tempest.lib.common.rest_client] Request (AttachVolumeTestJSON:test_attach_detach_volume): 503 GET http://192.168.24.3:8776/v3/910ea7185d3d4bc0b6f7b85e694e1107/volumes/6ccb4ea6-ace6-439b-8419-8fa36f3abce7 0.039s
    2022-05-11 09:01:31,841 272664 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'Openstack-Api-Version': 'volume 3.0', 'X-Auth-Token': '<omitted>'}
            Body: None
        Response - Headers: {'cache-control': 'no-cache', 'connection': 'close', 'content-type': 'text/html', 'status': '503', 'content-location': 'http://192.168.24.3:8776/v3/910ea7185d3d4bc0b6f7b85e694e1107/volumes/6ccb4ea6-ace6-439b-8419-8fa36f3abce7'}
            Body: b'<html><body><h1>503 Service Unavailable</h1>\nNo server is available to handle this request.\n</body></html>\n'

c-api logs: https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-api-compute-victoria/0e5448c/logs/undercloud/var/log/containers/cinder/cinder-api.log.txt.gz

c-vol logs: https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-api-compute-victoria/0e5448c/logs/undercloud/var/log/containers/cinder/cinder-volume.log.txt.gz

tempest logs: https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-api-compute-victoria/0e5448c/logs/undercloud/var/log/tempest/tempest_run.log.txt.gz

Revision history for this message
Sandeep Yadav (sandeepyadav93) wrote :
Revision history for this message
Ronelle Landy (rlandy) wrote :

This is showing up in train now (last two runs):

https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-centos-8-standalone-full-tempest-api-train&pipeline=openstack-periodic-integration-stable4&skip=0

https://logserver.rdoproject.org/openstack-periodic-integration-stable4/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-api-train/4d0becb/logs/undercloud/var/log/tempest/stestr_results.html.gz

https://logserver.rdoproject.org/openstack-periodic-integration-stable4/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-api-train/4d0becb/logs/undercloud/var/log/extra/podman/containers/cinder_api/log/cinder/cinder-api.log.txt.gz

https://logserver.rdoproject.org/openstack-periodic-integration-stable4/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-api-train/4d0becb/logs/undercloud/var/log/tempest/tempest_run.log.txt.gz

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "/usr/lib/python3.6/site-packages/tempest/test.py", line 236, in tearDownClass
        six.reraise(etype, value, trace)
      File "/usr/local/lib/python3.6/site-packages/six.py", line 719, in reraise
        raise value
      File "/usr/lib/python3.6/site-packages/tempest/test.py", line 208, in tearDownClass
        teardown()
      File "/usr/lib/python3.6/site-packages/tempest/test.py", line 590, in resource_cleanup
        raise testtools.MultipleExceptions(*cleanup_errors)
    testtools.runtest.MultipleExceptions: ((<class 'tempest.lib.exceptions.BadRequest'>, Bad request
    Details: {'code': 400, 'message': 'Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer.'}, <traceback object at 0x7f0d2a45bb08>), (<class 'tempest.lib.exceptions.TimeoutException'>, Request timed out
    Details: (AttachVolumeNegativeTest:tearDownClass) Failed to delete volume 1c434a12-3ff2-4bf4-9587-2d4c095b842e within the required time (300 s)., <traceback object at 0x7f0d2a5c40c8>))

Revision history for this message
Sandeep Yadav (sandeepyadav93) wrote :
Changed in tripleo:
status: Triaged → Invalid
Revision history for this message
Sandeep Yadav (sandeepyadav93) wrote :

We noticed this issue again in the train branch:

~~~
tempest.api.compute.admin.test_volumes_negative.VolumesAdminNegativeTest.test_update_attached_volume_with_nonexistent_volume_in_body[id-7dcac15a-b107-46d3-a5f6-cb863f4e454a,negative]
tearDownClass (tempest.api.compute.admin.test_volumes_negative.VolumesAdminNegativeTest)
~~~

https://logserver.rdoproject.org/60/39960/45/check/periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset035-train/3f39640/logs/undercloud/var/log/tempest/failing_tests.log.txt.gz

https://logserver.rdoproject.org/60/39960/44/check/periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset035-train/7666b3a/logs/undercloud/var/log/tempest/failing_tests.log.txt.gz

Changed in tripleo:
status: Invalid → Triaged
Revision history for this message
Sandeep Yadav (sandeepyadav93) wrote :

https://logserver.rdoproject.org/60/39960/45/check/periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset035-train/3f39640/logs/undercloud/var/log/tempest/tempest_run.log.txt.gz

~~~
==============================
Failed 2 tests - output below:
==============================

tempest.api.compute.admin.test_volumes_negative.VolumesAdminNegativeTest.test_update_attached_volume_with_nonexistent_volume_in_body[id-7dcac15a-b107-46d3-a5f6-cb863f4e454a,negative]
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "/usr/lib/python3.6/site-packages/tempest/common/waiters.py", line 288, in wait_for_volume_resource_status
        raise lib_exc.TimeoutException(message)
    tempest.lib.exceptions.TimeoutException: Request timed out
    Details: volume b3c8ca9a-175f-4eb4-9500-bc41fc741f4a failed to reach available status (current in-use) within the required time (300 s).

~~~

~~~
tearDownClass (tempest.api.compute.admin.test_volumes_negative.VolumesAdminNegativeTest)
----------------------------------------------------------------------------------------

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "/usr/lib/python3.6/site-packages/tempest/test.py", line 236, in tearDownClass
        six.reraise(etype, value, trace)
      File "/usr/local/lib/python3.6/site-packages/six.py", line 719, in reraise
        raise value
      File "/usr/lib/python3.6/site-packages/tempest/test.py", line 208, in tearDownClass
        teardown()
      File "/usr/lib/python3.6/site-packages/tempest/test.py", line 590, in resource_cleanup
        raise testtools.MultipleExceptions(*cleanup_errors)
    testtools.runtest.MultipleExceptions: ((<class 'tempest.lib.exceptions.BadRequest'>, Bad request
    Details: {'code': 400, 'message': 'Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer.'}, <traceback object at 0x7f486db9f7c8>), (<class 'tempest.lib.exceptions.TimeoutException'>, Request timed out
    Details: (VolumesAdminNegativeTest:tearDownClass) Failed to delete volume b3c8ca9a-175f-4eb4-9500-bc41fc741f4a within the required time (300 s)., <traceback object at 0x7f486f438d08>))
~~~

summary: - Master cs9 standalone full tempest api failing for volume tempest tests
- and with error - 'Volume failed to detach from server'
+ cinder tempest.api.compute.admin.test_volumes_negative* tempest tests
+ failing randomly in multiple branches.
Revision history for this message
Marios Andreou (marios-b) wrote :
Revision history for this message
Marios Andreou (marios-b) wrote :

We are wondering whether the issue discussed at [1] is related and, in particular, whether the patch series at [2] will help.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2012096
[2] https://review.opendev.org/q/topic:wait_until_sshable_pingable

Revision history for this message
Soniya Murlidhar Vyas (svyas) wrote :

We are not seeing this issue on the train branch; the job health seems pretty much green [1].

https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-centos-8-standalone-full-tempest-api-compute-train&skip=0

Ronelle Landy (rlandy)
Changed in tripleo:
status: Triaged → Fix Released