Comment 0 for bug 1753640

Revision history for this message
Nobuto Murata (nobuto) wrote :

Just after an OpenStack deployment, the Ceph cluster stays in HEALTH_WARN. This breaks status monitoring, for example by NRPE.

xenial + cloud:xenial-pike

$ sudo ceph version
sudo: unable to resolve host juju-3ea7d7-1
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)

$ sudo ceph health detail
sudo: unable to resolve host juju-3ea7d7-1
HEALTH_WARN application not enabled on 4 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 4 pool(s)
    application not enabled on pool 'default.rgw.control'
    application not enabled on pool '.rgw.root'
    application not enabled on pool 'glance'
    application not enabled on pool 'cinder-ceph'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.

http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-app-not-enabled
http://docs.ceph.com/docs/master/rados/operations/pools/#associate-pool-to-application
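As a manual workaround, each affected pool can be associated with an application using the command suggested in the health output. A minimal sketch, assuming the RGW pools belong to 'rgw' and the glance/cinder-ceph pools to 'rbd' (the mapping is inferred from the pool names, not confirmed by the charms):

```shell
# Dry run: print the 'ceph osd pool application enable' commands for each
# affected pool. Drop the leading 'echo' to actually apply them on a monitor.
for pool in default.rgw.control .rgw.root; do
    echo sudo ceph osd pool application enable "$pool" rgw
done
for pool in glance cinder-ceph; do
    echo sudo ceph osd pool application enable "$pool" rbd
done
```

Ideally the charms would set these associations themselves at pool-creation time so a fresh deployment reaches HEALTH_OK without manual intervention.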