I have an HA configuration with 3 controllers + 1 compute node and I am running some tests. For example, when I shut down the first controller and then try to create a new volume with Cinder, it stays in status "creating"....
I only use Ceph for Cinder; for Glance I use Swift. Uploading an image to Glance works fine.
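A quick way to see the state of the stuck volume and of the cinder-volume service (standard Cinder CLI; <volume-id> is just a placeholder):

cinder service-list          # state of cinder-scheduler / cinder-volume per host
cinder show <volume-id>      # status and error details for the stuck volume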
[root@node-15 ~]# ceph health
^CError connecting to cluster: Error
[root@node-15 ~]# ceph mon stat
^CError connecting to cluster: Error
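The hang above would be consistent with the client only knowing about a single monitor (mon_host = 192.168.0.3, i.e. the controller I shut down). A way to test this without letting the CLI block forever, assuming the other controllers run monitors at 192.168.0.4 and 192.168.0.5 as in the config below, is to give it a timeout or point it at a surviving monitor explicitly:

ceph --connect-timeout 10 health     # fail fast instead of hanging on the dead mon
ceph -m 192.168.0.4 health           # ask a surviving monitor directly
ceph -m 192.168.0.4 quorum_status    # see which monitors are actually in quorum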
I changed the following lines in the /etc/ceph/ceph.conf file on each controller node:
from:
[global]
filestore_xattr_use_omap = true
mon_host = 192.168.0.3
fsid = b963be07-edcb-4f65-a661-d005844f9332
mon_initial_members = node-9
auth_supported = cephx
osd_journal_size = 2048
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 100
public_network = 192.168.0.0/24
osd_pool_default_pgp_num = 100
osd_mkfs_type = xfs
cluster_network = 192.168.1.0/24
to:
[global]
filestore_xattr_use_omap = true
mon_host = 192.168.0.3 192.168.0.4 192.168.0.5 # on this line you need to add the IPs of all your controller nodes
fsid = b963be07-edcb-4f65-a661-d005844f9332
mon_initial_members = node-9 node-10 node-11 # on this line you need to add the hostnames of all your controller nodes
auth_supported = cephx
osd_journal_size = 2048
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 100
public_network = 192.168.0.0/24
osd_pool_default_pgp_num = 100
osd_mkfs_type = xfs
cluster_network = 192.168.1.0/24
and Ceph started working again.
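A quick sanity check after the change (using the hostnames from my config above; run on any controller):

ceph mon stat    # should now list node-9, node-10 and node-11
ceph -s          # overall status; with one controller down, quorum should show 2 of 3 mons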
I'd like to overcome this bug (and the one described in my report, https://bugs.launchpad.net/bugs/1267937).
Is it therefore safe to apply the described workaround in 4.0?
mon_host = 192.168.0.3 192.168.0.4 192.168.0.5
mon_initial_members = node-9 node-10 node-11
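(For reference, ceph.conf also seems to accept the comma-separated form for mon_host, which is what the Ceph docs usually show:

mon_host = 192.168.0.3,192.168.0.4,192.168.0.5
)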
Is it necessary to restart any Ceph process after modifying ceph.conf?
Thanks