Comment 12 for bug 1656116

Ryan Beisner (1chb1n) wrote:

Thanks for the added info; I believe that will be enough for the storage charm folks to triage.

## To summarize log spelunking:
3 ceph-mons in containers
6 ceph-osds
xenial-mitaka
Juju 2.1-beta4

## ceph-mon syslog says:
Jan 11 20:28:33 juju-bd739b-5-lxd-5 systemd[1]: Started Ceph cluster monitor daemon.
Jan 11 20:28:34 juju-bd739b-5-lxd-5 ceph-mon[61101]: 2017-01-11 20:28:34.021764 7f3dc8945580 -1 did not load config file, using default settings.
Jan 11 20:28:34 juju-bd739b-5-lxd-5 ceph-mon[61101]: monitor data directory at '/var/lib/ceph/mon/ceph-juju-bd739b-5-lxd-5' does not exist: have you run 'mkfs'?
Jan 11 20:28:34 juju-bd739b-5-lxd-5 systemd[1]: ceph-mon.service: Main process exited, code=exited, status=1/FAILURE
Jan 11 20:28:34 juju-bd739b-5-lxd-5 systemd[1]: ceph-mon.service: Unit entered failed state.
Jan 11 20:28:34 juju-bd739b-5-lxd-5 systemd[1]: ceph-mon.service: Failed with result 'exit-code'.
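
## For reference, a rough read of the failure:
The "have you run 'mkfs'?" line means the mon data directory at /var/lib/ceph/mon/ceph-juju-bd739b-5-lxd-5 was never initialized before systemd tried to start the daemon. As a sketch only (the mon id is inferred from the directory name in the log, and the keyring path is a placeholder I'm assuming, not something pulled from this deployment), a manual check and re-init on the affected unit would look roughly like:

# confirm the mon data directory really is missing
ls -ld /var/lib/ceph/mon/ceph-juju-bd739b-5-lxd-5

# roughly what initializing it by hand would look like; keyring path is a placeholder
sudo ceph-mon --mkfs -i juju-bd739b-5-lxd-5 --keyring /path/to/mon.keyring
sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-juju-bd739b-5-lxd-5
sudo systemctl restart ceph-mon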

And with that, I'll turn it over to a ceph charm SME. :-)