srv17: http://jenkins-product.srt.mirantis.net:8080/view/0_master_swarm/job/master_fuelmain.system_test.ubuntu.thread_4/155/ The same connectivity problems between nodes.
srv18: http://jenkins-product.srt.mirantis.net:8080/view/0_master_swarm/job/master_fuelmain.system_test.ubuntu.thread_3/155/ The same problem: slave-05 and slave-06 went offline during the test.
Also, after the snapshot 'error_ceph_ha_restart' was reverted, slave-01 was lost by Fuel for a few minutes:
[root@nailgun ~]# fuel node
id | status | name                         | cluster | ip          | mac               | roles                | pending_roles | online
---|--------|------------------------------|---------|-------------|-------------------|----------------------|---------------|-------
1  | ready  | slave-01_controller_ceph-osd | 1       | 10.108.10.3 | 64:3a:93:52:e6:58 | ceph-osd, controller |               | False
3  | ready  | slave-04_compute_ceph-osd    | 1       | 10.108.10.5 | 64:65:38:57:6f:4d | ceph-osd, compute    |               | True
4  | ready  | slave-02_controller_ceph-osd | 1       | 10.108.10.6 | 64:b4:14:db:da:3b | ceph-osd, controller |               | True
2  | ready  | slave-03_controller_ceph-osd | 1       | 10.108.10.4 | 64:df:cb:b6:9a:24 | ceph-osd, controller |               | True
5  | ready  | slave-05_compute_ceph-osd    | 1       | 10.108.10.7 | 64:db:35:22:df:ce | ceph-osd, compute    |               | False
6  | ready  | slave-06_ceph-osd            | 1       | 10.108.10.8 | 64:ca:0d:fa:7f:79 | ceph-osd             |               | False
But node-01 was online at that moment:

root@node-1:~# uptime
 04:25:04 up 23 min,  1 user,  load average: 0.51, 0.63, 0.79
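The mismatch above (Fuel reports `online: False` while the node itself answers `uptime`) is easier to spot if the node list is filtered for offline entries first. A minimal sketch, assuming node records shaped like the `fuel node` output in this report (the `offline_nodes` helper and the inline sample data are hypothetical, not part of the Fuel CLI):

```python
# Sketch: given node records mirroring the `fuel node` table above,
# list the nodes that Fuel currently marks offline so they can be
# cross-checked against the node's own state (e.g. via `uptime`).

def offline_nodes(nodes):
    """Return (id, name) pairs for nodes Fuel reports as offline."""
    return [(n["id"], n["name"]) for n in nodes if not n["online"]]

# Sample records reduced from the `fuel node` output in this report.
nodes = [
    {"id": 1, "name": "slave-01_controller_ceph-osd", "online": False},
    {"id": 3, "name": "slave-04_compute_ceph-osd", "online": True},
    {"id": 4, "name": "slave-02_controller_ceph-osd", "online": True},
    {"id": 2, "name": "slave-03_controller_ceph-osd", "online": True},
    {"id": 5, "name": "slave-05_compute_ceph-osd", "online": False},
    {"id": 6, "name": "slave-06_ceph-osd", "online": False},
]

for node_id, name in offline_nodes(nodes):
    print("node-%d (%s) is offline according to Fuel" % (node_id, name))
```

With the data from this report, the filter flags slave-01, slave-05 and slave-06, matching the `online` column above.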