Need to increase time-out for ostf ha test after step power on destroyed node

Bug #1544958 reported by Tatyanka
This bug affects 1 person
Affects              Status        Importance  Assigned to    Milestone
Fuel for OpenStack   Fix Released  High        Fuel QA Team
8.0.x                Fix Released  High        Fuel QA Team

Bug Description

Steps:
Check 3-in-1 rabbit failover.

Scenario:
1. SSH to a controller and find the rabbit master
2. Destroy a node that is not the rabbit master
3. Check that the rabbit master stays the same
4. Run OSTF HA tests
5. Power on the destroyed slave
6. Check that the rabbit master is unchanged
7. Run OSTF HA tests ========================== fails here
8. Destroy the rabbit master node
9. Check that a new rabbit master appears
10. Run OSTF HA tests
11. Power on the destroyed node
12. Check that no new rabbit master was elected
13. Run OSTF HA tests
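
The master check in steps 1-3 boils down to parsing `rabbitmqctl cluster_status` output on a controller. A minimal illustrative parser (not the actual fuel-qa implementation; the sample output and helper name are assumptions):

```python
import re

def parse_running_nodes(cluster_status):
    """Extract the running node names from `rabbitmqctl cluster_status`
    output. Illustrative sketch only, not the fuel-qa code."""
    match = re.search(r"\{running_nodes,\[(.*?)\]\}", cluster_status, re.S)
    if not match:
        return []
    return [n.strip().strip("'") for n in match.group(1).split(",") if n.strip()]

# Abbreviated sample of what `rabbitmqctl cluster_status` prints.
sample = """Cluster status of node 'rabbit@node-1' ...
[{nodes,[{disc,['rabbit@node-1','rabbit@node-2','rabbit@node-3']}]},
 {running_nodes,['rabbit@node-3','rabbit@node-2','rabbit@node-1']}]"""

print(parse_running_nodes(sample))
# ['rabbit@node-3', 'rabbit@node-2', 'rabbit@node-1']
```

Comparing the running-nodes set before and after each destroy/power-on step is what lets steps 3, 6, 9, and 12 assert whether the master set changed.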

Failed 1 OSTF tests; should fail 0 tests. Names of failed tests:
  - Check state of haproxy backends on controllers (failure) Some haproxy backend has down state.. Please refer to OpenStack logs for more details.

After revert I got the same situation: some services were marked as down on node-2 (the node that was destroyed and then started):
http://paste.openstack.org/show/486805/
I waited ten seconds, ran the status check again, and it passed:
http://paste.openstack.org/show/486806/
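
The failing OSTF check inspects the state of haproxy backends. A minimal sketch of such a check over haproxy's `show stat` CSV output (field names follow the haproxy stats CSV format; the sample data and function name are assumptions, not the OSTF implementation):

```python
import csv
import io

def down_backends(show_stat_csv):
    """Return (proxy, server) pairs whose status is DOWN in haproxy
    `show stat` CSV output. Illustrative sketch only."""
    # The header line starts with '# '; strip it so DictReader sees field names.
    text = show_stat_csv.lstrip("# ")
    reader = csv.DictReader(io.StringIO(text))
    return [(row["pxname"], row["svname"])
            for row in reader if row.get("status") == "DOWN"]

sample = (
    "# pxname,svname,status\n"
    "horizon,node-1,UP\n"
    "horizon,node-2,DOWN\n"
    "mysqld,BACKEND,UP\n"
)
print(down_backends(sample))  # [('horizon', 'node-2')]
```

Right after a destroyed node comes back, its backends can still briefly report DOWN, which is why re-running the check a few seconds later passed.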

Tags: area-qa
Revision history for this message
Tatyanka (tatyana-leontovich) wrote :
summary: - Need increase timaout for ostf ha test after step power on destroyed
+ Need increase timeout for ostf ha test after step power on destroyed
node
summary: - Need increase timeout for ostf ha test after step power on destroyed
+ Need to increase time-out for ostf ha test after step power on destroyed
node
Ilya Kutukov (ikutukov)
Changed in fuel:
status: New → Confirmed
OpenStack Infra (hudson-openstack) wrote : Fix proposed to fuel-qa (master)

Fix proposed to branch: master
Review: https://review.openstack.org/281157

Changed in fuel:
status: Confirmed → In Progress
OpenStack Infra (hudson-openstack) wrote : Fix proposed to fuel-qa (stable/8.0)

Fix proposed to branch: stable/8.0
Review: https://review.openstack.org/281709

Changed in fuel:
status: In Progress → Fix Committed
OpenStack Infra (hudson-openstack) wrote : Fix merged to fuel-qa (master)

Reviewed: https://review.openstack.org/281157
Committed: https://git.openstack.org/cgit/openstack/fuel-qa/commit/?id=1ff24db77580ea761b80dd84311a17e447ad4f96
Submitter: Jenkins
Branch: master

commit 1ff24db77580ea761b80dd84311a17e447ad4f96
Author: Tatyana Leontovich <email address hidden>
Date: Tue Feb 16 14:37:54 2016 +0200

    Increase timeouts to 800 sec for ha service checks

    Sometimes after a destructive action 600 sec is not enough,
    so set the timeouts to 800 sec to be sure that we are not
    affected by slow environments

    Change-Id: Ifcd59cedf2a38c40d9f73d4e509c1b2c4d2d66bc
    Closes-Bug: #1544958
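
The fix raises the wait budget for the HA service checks from 600 to 800 seconds. A generic poll-until-pass helper of the kind such checks rely on could look like this (a sketch under assumed names, not the actual fuel-qa wait helper):

```python
import time

def wait_pass(check, timeout=800, interval=10,
              clock=time.monotonic, sleep=time.sleep):
    """Re-run `check` until it returns truthy or `timeout` seconds elapse.
    `clock` and `sleep` are injectable so the loop can be tested without
    real waiting. Illustrative sketch only."""
    deadline = clock() + timeout
    while True:
        if check():
            return True
        if clock() >= deadline:
            raise TimeoutError(
                "check did not pass within %s seconds" % timeout)
        sleep(interval)
```

With a helper like this, bumping `timeout` from 600 to 800 gives slow environments more retries before the test is declared failed, without changing the check itself.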

OpenStack Infra (hudson-openstack) wrote : Fix merged to fuel-qa (stable/8.0)

Reviewed: https://review.openstack.org/281709
Committed: https://git.openstack.org/cgit/openstack/fuel-qa/commit/?id=f302523d5ef71f658bfe150f58469e17fed718e3
Submitter: Jenkins
Branch: stable/8.0

commit f302523d5ef71f658bfe150f58469e17fed718e3
Author: Tatyana Leontovich <email address hidden>
Date: Tue Feb 16 14:37:54 2016 +0200

    Increase timeouts to 800 sec for ha service checks

    Sometimes after a destructive action 600 sec is not enough,
    so set the timeouts to 800 sec to be sure that we are not
    affected by slow environments

    Change-Id: Ifcd59cedf2a38c40d9f73d4e509c1b2c4d2d66bc
    Closes-Bug: #1544958

Artem Panchenko (apanchenko-8) wrote :

Test '143_ha_neutron_destructive_on_8.0-574' passed on swarm. It failed again on the newest 8.0 ISO, but with a different error. Closing this bug as Fix Released.

Changed in fuel:
status: Fix Committed → Fix Released
tags: removed: non-release