ceph_ha_restart failed at the Ceph HEALTH check step

Bug #1452732 reported by Egor Kotko
Affects: Fuel for OpenStack
Status: Triaged
Importance: High
Assigned to: Fuel QA Team

Bug Description

{"build_id": "2015-05-06_22-31-58", "build_number": "386", "release_versions": {"2014.2.2-6.1": {"VERSION": {"build_id": "2015-05-06_22-31-58", "build_number": "386", "api": "1.0", "fuel-library_sha": "a7a794e68086014f8163f79a3dad1c9a074b1f3a", "nailgun_sha": "7437e500c3e2673b124af4c7634ac04121b1c949", "feature_groups": ["mirantis"], "openstack_version": "2014.2.2-6.1", "production": "docker", "python-fuelclient_sha": "af6c9c3799b9ec107bcdc6dbf035cafc034526ce", "astute_sha": "83b23783edbfacd25aded7a79e0aca17fc883e79", "fuel-ostf_sha": "1a09c09ee7f6b7622dfc7e37dce70c4537c0b532", "release": "6.1", "fuelmain_sha": "68e6f08e23845516e1995df59e038ebecb26e6b8"}}}, "auth_required": true, "api": "1.0", "fuel-library_sha": "a7a794e68086014f8163f79a3dad1c9a074b1f3a", "nailgun_sha": "7437e500c3e2673b124af4c7634ac04121b1c949", "feature_groups": ["mirantis"], "openstack_version": "2014.2.2-6.1", "production": "docker", "python-fuelclient_sha": "af6c9c3799b9ec107bcdc6dbf035cafc034526ce", "astute_sha": "83b23783edbfacd25aded7a79e0aca17fc883e79", "fuel-ostf_sha": "1a09c09ee7f6b7622dfc7e37dce70c4537c0b532", "release": "6.1", "fuelmain_sha": "68e6f08e23845516e1995df59e038ebecb26e6b8"}

Ceph_ha_restart failed at the Ceph HEALTH check step.
Scenario:
            1. Revert the ceph_ha snapshot
            2. Wait for Galera and Cinder to come up
            3. Check Ceph status (a minimal sketch of this check follows the scenario)
            4. Run OSTF
            5. Destroy an OSD node
            6. Check Ceph status
            7. Run OSTF
            8. Destroy one compute node
            9. Check Ceph status
            10. Run OSTF
            11. Cold restart
            12. Wait for Galera and Cinder to come up
            13. Run a single OSTF test: create a volume and attach it to an instance
            14. Run OSTF
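
For reference, a minimal sketch of what the "Check Ceph status" steps boil down to, assuming shell access on a controller node with the client keyring in place; check_ceph_health below is a hypothetical stand-in, not the actual fuel-qa helper:

# Minimal sketch of the Ceph health check (steps 3, 6, 9), assuming local
# shell access on a controller node. check_ceph_health is a hypothetical
# helper, not the actual fuel-qa function.
import subprocess
import time


def check_ceph_health(timeout=300, interval=10):
    """Poll `ceph health` until it reports HEALTH_OK or the timeout expires."""
    deadline = time.time() + timeout
    status = ""
    while time.time() < deadline:
        status = subprocess.check_output(
            ["ceph", "health"], universal_newlines=True).strip()
        if status.startswith("HEALTH_OK"):
            return status
        time.sleep(interval)
    raise AssertionError("Ceph did not reach HEALTH_OK in %ss, last status: %s"
                         % (timeout, status))

A check of this shape fails whenever a warning persists for the whole polling window, which matches the HEALTH_WARN captured below.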

Test Result:
http://jenkins-product.srt.mirantis.net:8080/view/6.1_swarm/job/6.1.system_test.ubuntu.thread_3/117/testReport/%28root%29/ceph_ha_restart/ceph_ha_restart/

The output of checking the health manually:
root@node-1:~# ceph --watch-debug health
    cluster eadbb637-a3b1-41de-8d38-2a78ad2a9506
     health HEALTH_WARN clock skew detected on mon.node-2, mon.node-3
     monmap e3: 3 mons at {node-1=10.109.4.4:6789/0,node-2=10.109.4.5:6789/0,node-3=10.109.4.6:6789/0}, election epoch 16, quorum 0,1,2 node-1,node-2,node-3
     osdmap e80: 12 osds: 12 up, 12 in
      pgmap v180: 4288 pgs, 7 pools, 13696 kB data, 5 objects
            25135 MB used, 568 GB / 592 GB avail
                4288 active+clean

2015-05-07 13:14:28.806306 mon.0 [DBG] osd.4 10.109.4.5:6803/29182 failure report canceled by osd.2 10.109.4.5:6800/28380
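
The HEALTH_WARN above is the monitors' clock-skew warning, raised when the skew between mons exceeds mon_clock_drift_allowed (0.05 s by default). One way to make the check tolerate this transient state, sketched under the assumption that clock skew is the only warning and that the mon nodes run ntpd (the NTP server below is a placeholder, not taken from this environment):

# Hedged sketch: tolerate a pure clock-skew HEALTH_WARN by resyncing NTP on
# the monitor nodes and polling again. Assumes root SSH access from the node
# running the check; ntp.example.org is a placeholder NTP server.
import subprocess

MON_NODES = ["10.109.4.4", "10.109.4.5", "10.109.4.6"]  # from the monmap above


def skew_only(detail):
    """True if every warning line in `ceph health detail` mentions clock skew."""
    warnings = [line for line in detail.splitlines()[1:] if line.strip()]
    return bool(warnings) and all("clock skew" in w for w in warnings)


detail = subprocess.check_output(
    ["ceph", "health", "detail"], universal_newlines=True)
if detail.startswith("HEALTH_WARN") and skew_only(detail):
    for node in MON_NODES:
        # One-shot NTP resync on each mon (server name is a placeholder).
        subprocess.call(["ssh", "root@" + node,
                         "service ntp stop; ntpdate -u ntp.example.org; "
                         "service ntp start"])

After the resync, re-running the health poll from the sketch above should pass once the skew drops back under the threshold.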

tags: added: non-release
Changed in fuel:
status: New → Triaged