Comment 2 for bug 1604342

Alexei Sheplyakov (asheplyakov) wrote: Re: Failed to create volumes in CephRadosGW cluster configuration

The logs (in particular, node-1/commands/ceph_s.txt) indicate that the ceph cluster is OK:

[10.109.0.4] out: cluster b64e046f-653f-4e95-848e-1794b8298e98
[10.109.0.4] out: health HEALTH_WARN
[10.109.0.4] out: too many PGs per OSD (352 > max 300)
[10.109.0.4] out: monmap e3: 3 mons at {node-1=10.109.2.3:6789/0,node-4=10.109.2.2:6789/0,node-5=10.109.2.5:6789/0}
[10.109.0.4] out: election epoch 8, quorum 0,1,2 node-4,node-1,node-5
[10.109.0.4] out: osdmap e33: 6 osds: 6 up, 6 in
[10.109.0.4] out: pgmap v101: 704 pgs, 10 pools, 22052 kB data, 52 objects
[10.109.0.4] out: 12727 MB used, 283 GB / 296 GB avail
[10.109.0.4] out: 704 active+clean
[10.109.0.4] out:
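
For the record, the HEALTH_WARN above is only about the PG-per-OSD ratio, not an actual failure. Assuming the 10 pools all use the default replication size of 3 (the snapshot doesn't show the per-pool size), the pgmap numbers add up exactly to the warned value; a rough double-check with the standard ceph/rados CLI would look like this:

# 704 PGs x 3 replicas / 6 OSDs = 352 PG copies per OSD, i.e. the "352 > max 300" above
# ("ceph osd pool get <pool> size" shows the actual replica count per pool)
for pool in $(rados lspools); do
    echo -n "$pool: "
    ceph osd pool get "$pool" pg_num
done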

Also, there's nothing unusual in the OSDs' logs (node-2/var/log/ceph/ceph-osd.{1,4}.log, node-3/var/log/ceph/ceph-osd.{0,3}.log, etc.), and the same goes for the monitors.
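
In case somebody wants to re-check that, the scan I mean is roughly the following, run from the unpacked snapshot (the ceph-mon.*.log names are assumed to follow the usual Ceph layout; the OSD log names are as listed above):

# look for anything suspicious in the OSD and monitor logs of the snapshot
grep -iE 'error|fail|abort|assert' \
    node-*/var/log/ceph/ceph-osd.*.log \
    node-*/var/log/ceph/ceph-mon.*.log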

Last but not least, just because the log file is called "fail_error_ceph_radosgw-blah-blah.tar.gz" does NOT mean the problem has anything to do with ceph.