2015-01-28 16:17:26
Dmitry Tyzhnenko
description
We have a wrong scenario in the ceph_ha_restart test:
https://github.com/stackforge/fuel-main/blob/6e1a258c3caaf1ecc8a2314b1f6623f0c28b5896/fuelweb_test/tests/tests_strength/test_restart.py#L85
We have:
Scenario:
1. Create cluster
2. Add 3 nodes with controller and ceph OSD roles
3. Add 1 node with ceph OSD role
4. Add 2 nodes with compute and ceph OSD roles
5. Deploy the cluster
6. Check ceph status
7. Cold restart
8. Check ceph status
Snapshot ceph_ha
We do:
Scenario:
1. Revert from ceph_ha
2. Wait until Galera and Cinder are up
3. Check ceph status
4. Run OSTF
5. Destroy an OSD node
6. Check ceph status
7. Run OSTF
8. Destroy one compute node
9. Check ceph status
10. Run OSTF
11. Cold restart
12. Wait until Galera and Cinder are up
13. Run single OSTF test: create a volume and attach it to an instance
14. Run OSTF
Snapshot ceph_ha
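The repeated "Check ceph status" steps above could be implemented with a small helper like the sketch below. The function names and the HEALTH_OK parsing are illustrative assumptions, not code taken from fuelweb_test:

```python
import subprocess


def ceph_is_healthy(health_output: str) -> bool:
    """Decide whether a `ceph health` report means the cluster is OK.

    `ceph health` prints "HEALTH_OK", "HEALTH_WARN ...", or
    "HEALTH_ERR ..."; only HEALTH_OK counts as healthy here.
    """
    return health_output.strip().startswith("HEALTH_OK")


def check_ceph_status() -> bool:
    # Hypothetical wrapper: run `ceph health` on a cluster node
    # (in the real test this would go over SSH) and parse the output.
    out = subprocess.check_output(["ceph", "health"]).decode()
    return ceph_is_healthy(out)
```

A check like this would be run after deployment, after destroying the OSD node, after destroying the compute node, and after the cold restart, so a degraded-but-recovering cluster (HEALTH_WARN) is still reported as a failure.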