tempest test failed test_create_ebs_image_and_check_boot

Bug #1520296 reported by Maksym Shalamov
Affects                Status   Importance   Assigned to   Milestone
Cinder                 New      Undecided    Unassigned
devstack-plugin-ceph   New      Undecided    Unassigned
tempest                New      Undecided    Unassigned

Bug Description

When running tempest, tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_create_ebs_image_and_check_boot fails with the error "Invalid volume: Volume still has 1 dependent snapshots."

From the console log:
 tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_create_ebs_image_and_check_boot[compute,id-36c34c67-7b54-4b59-b188-02a2f458a63b,image,volume]
------------------------------------------------------------------------------------------------------------------------------------------------------------------

Captured traceback-2:
~~~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 791, in wait_for_resource_deletion
        raise exceptions.TimeoutException(message)
    tempest_lib.exceptions.TimeoutException: Request timed out
    Details: (TestVolumeBootPattern:_run_cleanups) Failed to delete volume 973e3dd4-148d-4969-9eee-ffe5daa40d44 within the required time (196 s).

 Captured traceback-1:
 ~~~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "tempest/scenario/manager.py", line 101, in delete_wrapper
        delete_thing(*args, **kwargs)
      File "tempest/services/volume/json/volumes_client.py", line 108, in delete_volume
        resp, body = self.delete("volumes/%s" % str(volume_id))
      File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 290, in delete
        return self.request('DELETE', url, extra_headers, headers, body)
      File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 639, in request
        resp, resp_body)
      File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 697, in _error_checker
        raise exceptions.BadRequest(resp_body, resp=resp)
    tempest_lib.exceptions.BadRequest: Bad request
    Details: {u'code': 400, u'message': u'Invalid volume: Volume still has 1 dependent snapshots.'}
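
The 400 in traceback-1 shows the cleanup deleting the boot volume while a snapshot created from it still exists; Cinder refuses to delete a volume with dependent snapshots, and the cleanup then times out waiting for the volume to disappear (traceback-2). As a rough illustration of the ordering Cinder expects, here is a minimal sketch; it is not the actual tempest cleanup code, and the client objects and method names are assumptions modeled on the clients visible in the traceback:

    def cleanup_volume_with_snapshots(volumes_client, snapshots_client,
                                      volume_id, snapshot_ids):
        # Cinder rejects volume deletion while dependent snapshots exist,
        # so delete the snapshots first and wait until they are gone.
        for snapshot_id in snapshot_ids:
            snapshots_client.delete_snapshot(snapshot_id)
        for snapshot_id in snapshot_ids:
            snapshots_client.wait_for_resource_deletion(snapshot_id)

        # Only once no dependents remain is the volume safe to delete.
        volumes_client.delete_volume(volume_id)
        volumes_client.wait_for_resource_deletion(volume_id)

If delete_volume is issued while a snapshot deletion is still in flight, Cinder returns exactly the 400 shown above.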

Link to the logs:
http://logs.openstack.org/55/245855/1/check/gate-tempest-dsvm-full-ceph/309fe31/

A copy of the log file is attached.

Revision history for this message
Maksym Shalamov (mshalamov) wrote :

Error found on devstack (branch stable/kilo).

Revision history for this message
Matt Riedemann (mriedem) wrote :
Changed in cinder:
status: New → Confirmed
importance: Undecided → High
Revision history for this message
Matt Riedemann (mriedem) wrote :

Since this started failing on all branches in the last 24 hours, there could have been a regression in tempest.

Revision history for this message
Matt Riedemann (mriedem) wrote :

Actually, this looks like the prime suspect; it's the only global ceph change I can find: https://review.openstack.org/#/c/251421/

Changed in cinder:
status: Confirmed → New
importance: High → Undecided
Revision history for this message
Deepak C Shetty (dpkshetty) wrote :

@Matt,
    https://review.openstack.org/#/c/251421/ just adds a new CI job (gate-tempest-dsvm-full-devstack-plugin-ceph) for ceph that uses the devstack plugin model. It doesn't touch or affect the existing ceph CI job (gate-tempest-dsvm-full-ceph, which uses the hooks-based approach), so failures in the gate-tempest-dsvm-full-ceph job have nothing to do with 251421.

FWIW, I saw this error for the new CI job as well

See

http://logs.openstack.org/09/253009/1/check/gate-tempest-dsvm-full-devstack-plugin-ceph-nv/6cadba8/console.html.gz#_2015-12-04_11_47_13_962

After a recheck (See patch https://review.openstack.org/#/c/253009/ ) the same CI job passed fine. See

http://logs.openstack.org/09/253009/1/check/gate-tempest-dsvm-full-devstack-plugin-ceph-nv/efe84c5/console.html.gz#_2015-12-04_13_54_24_071

So the failure in test_create_ebs_image_and_check_boot seems intermittent (and thus probably a race).

thanx,
deepak
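
If the ordering is racy rather than plainly wrong, one way to narrow it down is to poll until no snapshot still references the volume before the delete is issued. A minimal sketch, assuming a snapshots client whose list_snapshots() returns a dict with a 'snapshots' list of entries carrying a volume_id field (that client and return shape are assumptions, not the tempest API):

    import time

    def wait_for_no_dependent_snapshots(snapshots_client, volume_id,
                                         timeout=196, interval=2):
        # Poll until no snapshot references the volume, or give up after
        # `timeout` seconds (196 s mirrors the cleanup timeout in the log).
        deadline = time.time() + timeout
        while time.time() < deadline:
            snapshots = snapshots_client.list_snapshots().get('snapshots', [])
            if not any(s.get('volume_id') == volume_id for s in snapshots):
                return True
            time.sleep(interval)
        return False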

Revision history for this message
Joseph Lanoux (joseph-lanoux) wrote :

Matt,
I don't see why it should be the cause of this failure, but this change was merged a few hours before the test started to fail and it modifies the server creation in the test: https://review.openstack.org/#/c/225575/.
Joseph

Revision history for this message
Matt Riedemann (mriedem) wrote :

This is partly a duplicate of bug 1464259, but that bug also includes ec2 volume-related test failures (I'm not sure whether they share the same root cause); this one is specifically for the ebs test failures in test_volume_boot_pattern.py.

Revision history for this message
Matt Riedemann (mriedem) wrote :

This might eventually help with the races here, since snapshotting with ceph should be faster:

https://review.openstack.org/#/c/205282/
