Scenario:
1. Revert snapshot 'prepare_load_ceph_ha'
2. Wait until MySQL Galera is UP on some controller
3. Check Ceph status
4. Run ostf
5. Fill Ceph partitions on all nodes up to 30%
6. Check Ceph status
7. Disable UMM
8. Run RALLY
9. 100 times repetitive reboot:
   10. Cold restart of all nodes
   11. Wait for HA services ready
   12. Wait until MySQL Galera is UP on some controller
   13. Run ostf <<< failed here
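For context, the checks in steps 2-3 amount to something like the sketch below. This is illustrative only: the node names, SSH transport, credentials, and timeouts are placeholders, not the actual fuel-qa helpers the test uses.

# Minimal sketch of the Galera/Ceph checks in steps 2-3; hostnames,
# credentials, and timeouts are illustrative, not the real fuel-qa code.
import subprocess
import time

CONTROLLERS = ["node-1", "node-2", "node-3"]  # placeholder node names

def ssh(host, cmd):
    """Run a command on a node over SSH and return its stdout."""
    return subprocess.check_output(["ssh", host, cmd], text=True)

def wait_galera_up(timeout=600):
    """Poll controllers until one reports a Primary Galera component."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        for host in CONTROLLERS:
            try:
                out = ssh(host, "mysql -Nbe \"SHOW STATUS LIKE 'wsrep_cluster_status'\"")
            except subprocess.CalledProcessError:
                continue  # mysqld not answering on this node yet
            if "Primary" in out:
                return host
        time.sleep(10)
    raise TimeoutError("MySQL Galera did not come up on any controller")

def check_ceph_health():
    """Fail unless the Ceph cluster reports HEALTH_OK."""
    out = ssh(CONTROLLERS[0], "ceph health")
    assert out.startswith("HEALTH_OK"), "unexpected Ceph health: " + out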
Failure log excerpt from a controller:
2016-12-27T01:22:44.400903+00:00 err: ERROR: p_mysqld: check_if_galera_pc(): But I'm running a new cluster, PID:12845, this is a split-brain!
2016-12-27T01:22:44.405356+00:00 err: ERROR: p_mysqld: mysql_monitor(): I'm a master, and my GTID: c27b16c7-cbd2-11e6-97b3-03ad09eb7038:0, which was not expected
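The agent's messages mean that after the cold restart this controller bootstrapped a brand-new Galera cluster (the logged GTID ends in :0, consistent with a fresh bootstrap) instead of rejoining the existing one, which the p_mysqld resource agent reports as a split-brain. One way to confirm the divergence by hand is to compare the cluster state UUID and last committed seqno on every controller, as in this sketch (node names are placeholders):

# Sketch: compare Galera state across controllers after the restart; two
# different wsrep_cluster_state_uuid values indicate two independent
# clusters, i.e. a split-brain.
import subprocess

CONTROLLERS = ["node-1", "node-2", "node-3"]  # placeholder node names

QUERY = ("SHOW STATUS WHERE Variable_name IN "
         "('wsrep_cluster_state_uuid', 'wsrep_last_committed')")

for host in CONTROLLERS:
    out = subprocess.check_output(["ssh", host, 'mysql -Nbe "%s"' % QUERY],
                                  text=True)
    print(host, " ".join(out.split()))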
The same issue on 9.2 snapshot #684: https://product-ci.infra.mirantis.net/job/9.x.system_test.ubuntu.repetitive_restart/158/testReport/(root)/ceph_partitions_repetitive_cold_restart/ceph_partitions_repetitive_cold_restart/