I was wrong about 2 OSDs: the Ceph cluster is healthy and operational with just 1 OSD. The RadosGW region map was created successfully (no ERROR lines about it in radosgw.log), and the swift CLI works fine:
[root@node-5 ~]# swift post test
[root@node-5 ~]# swift list
test
Whatever the reason was for RadosGW throwing error 500, it looks like it's related to the region map but has nothing to do with the number of deployed OSDs. I think this bug should stay "Incomplete" until we have more information on how to reproduce it.
I couldn't reproduce this in the following configuration: 1x controller, 1x compute + ceph-osd, CentOS, Neutron/GRE, storage settings:
storage:
  images_ceph: true
  osd_pool_size: "1"
  objects_ceph: true
  volumes_ceph: true
  ephemeral_ceph: true
  volumes_lvm: false
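For reference, the log check described above can be sketched roughly like this. The radosgw.log lines below are synthetic placeholders (not real radosgw output), used only to illustrate scanning the log for region-map errors:

```shell
# Illustrative sketch: scan a radosgw log for ERROR lines.
# The log content here is a synthetic stand-in for /var/log/ceph/radosgw.log.
log=$(mktemp)
cat > "$log" <<'EOF'
2014-06-10 12:00:00 initializing radosgw
2014-06-10 12:00:01 region map created
EOF

# A count of 0 ERROR lines means the region map was created cleanly,
# matching the observation above.
errors=$(grep -c 'ERROR' "$log" || true)
echo "region-map errors: $errors"
rm -f "$log"
```

On an affected deployment one would point the same grep at the real radosgw.log and pair it with `ceph health` and the `swift post` / `swift list` commands shown above to confirm end-to-end object storage.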