HEALTH_WARN with POOL_APP_NOT_ENABLED application not enabled on N pool(s)

Bug #1753640 reported by Nobuto Murata
This bug affects 6 people
Affects                        Status        Importance  Assigned to         Milestone
Ceph Monitor Charm             Fix Released  Medium      Chris MacNaughton
Ceph RADOS Gateway Charm       Triaged       Low         Unassigned
OpenStack Ceph-FS Charm        Triaged       Low         Unassigned
OpenStack Cinder-Ceph charm    Triaged       Low         Unassigned
OpenStack Glance Charm         Triaged       Low         Unassigned
OpenStack Nova Compute Charm   Triaged       Low         Unassigned
charms.ceph                    Fix Released  Medium      Chris MacNaughton

Bug Description

Just after an OpenStack deployment, the Ceph cluster stays in HEALTH_WARN. This breaks status monitoring, for example by NRPE.

xenial + cloud:xenial-pike

$ sudo ceph version
sudo: unable to resolve host juju-3ea7d7-1
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)

$ ceph health detail
HEALTH_WARN application not enabled on 2 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 2 pool(s)
    application not enabled on pool 'default.rgw.control'
    application not enabled on pool '.rgw.root'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.

http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-app-not-enabled
http://docs.ceph.com/docs/master/rados/operations/pools/#associate-pool-to-application

[WORKAROUND]

$ juju run --unit ceph-mon/0 '
    ceph osd pool application enable default.rgw.control rgw
    ceph osd pool application enable default.rgw.buckets.index rgw
    ceph osd pool application enable .rgw.root rgw
    ceph osd pool application enable glance rbd
    ceph osd pool application enable cinder-ceph rbd
    ceph osd pool application enable gnocchi rgw
'

Revision history for this message
Nobuto Murata (nobuto) wrote :

Ah, openstack-on-lxd actually uses the ceph charm instead of the ceph-mon charm, so this may not be the case with ceph-mon. I will double-check.

tags: added: cpe-onsite
Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

According to https://ceph.com/community/new-luminous-pool-tags/ , there is a new step following pool creation that we'll need to add support for: `rbd pool init <pool-name>`

Further reading of that page suggests that, despite the scary warning, the absence of app tags doesn't affect functionality at all.
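
For reference, a minimal Python sketch of how a charm could run that post-creation step (a hypothetical helper, not the actual charms.ceph code):

    import subprocess

    def init_rbd_pool(pool_name):
        # 'rbd pool init' tags the pool with the 'rbd' application and
        # initialises RBD metadata, avoiding the POOL_APP_NOT_ENABLED warning.
        subprocess.check_call(['rbd', 'pool', 'init', pool_name])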

Nobuto Murata (nobuto)
description: updated
Revision history for this message
Nobuto Murata (nobuto) wrote :

After trying the ceph-mon/next charm, the number of affected pools has decreased. The remaining ones look rgw-related, so I am adding a ceph-radosgw charm task for now.

$ ceph health detail
HEALTH_WARN application not enabled on 2 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 2 pool(s)
    application not enabled on pool 'default.rgw.control'
    application not enabled on pool '.rgw.root'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.

While I agree that it's just a cosmetic warning, monitoring components rely on the short health status such as HEALTH_WARN, so fixing this for monitoring purposes deserves somewhat higher importance.

Revision history for this message
Arné Schreuder (arneschreuder) wrote :

Can confirm, I get the same in version 12.2.1

Revision history for this message
James Page (james-page) wrote :

Ceph will guess the application once a pool starts to be written to; however, the charms should set this at deployment time to avoid early-lifecycle health warnings in Ceph.

This touches charms.ceph and ceph-mon (to add support to the Ceph broker), and requires an update to all client charms to provide this information to ceph-mon during pool creation.
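
As a rough illustration only (the exact charmhelpers signature is not quoted in this bug, so the app_name keyword below is an assumption), a client charm's broker request would carry the application name roughly like this:

    # Sketch of a client charm requesting a pool and passing the application
    # name over the Ceph broker relation; the app_name keyword is assumed.
    from charmhelpers.contrib.storage.linux.ceph import CephBrokerRq

    rq = CephBrokerRq()
    rq.add_op_create_pool(name='glance', replica_count=3, app_name='rbd')
    # The request is then sent to ceph-mon over the relation as usual.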

Changed in charm-ceph-fs:
status: New → Triaged
Changed in charm-ceph-mon:
status: New → Triaged
Changed in charm-ceph-radosgw:
status: New → Triaged
Changed in charm-cinder-ceph:
status: New → Triaged
Changed in charm-glance:
status: New → Triaged
Changed in charm-nova-compute:
status: New → Triaged
Changed in charms.ceph:
status: New → Triaged
Changed in charm-ceph-fs:
importance: Undecided → Medium
Changed in charm-ceph-mon:
importance: Undecided → Medium
Changed in charm-ceph-radosgw:
importance: Undecided → Medium
Changed in charm-cinder-ceph:
importance: Undecided → Medium
Changed in charm-glance:
importance: Undecided → Medium
Changed in charm-nova-compute:
importance: Undecided → Medium
Changed in charms.ceph:
importance: Undecided → Medium
Ryan Beisner (1chb1n)
tags: added: uosci
Revision history for this message
Alexander Litvinov (alitvinov) wrote :

Also seeing the issue

ubuntu$ sudo ceph version
ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)

ubuntu$ sudo ceph health detail
HEALTH_WARN application not enabled on 5 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 5 pool(s)
    application not enabled on pool 'gnocchi'
    application not enabled on pool 'default.rgw.control'
    application not enabled on pool '.rgw.root'
    application not enabled on pool 'glance'
    application not enabled on pool 'cinder-ceph'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.

Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

sudo ceph status
  cluster:
    id: 4ce1de56-383e-11e8-aedd-00163e99e06d
    health: HEALTH_WARN
            application not enabled on 2 pool(s)

sudo ceph health detail
HEALTH_WARN application not enabled on 2 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 2 pool(s)
    application not enabled on pool 'default.rgw.control'
    application not enabled on pool '.rgw.root'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.

ubuntu@juju-be77dc-0-lxd-0:~$ sudo ceph osd pool application enable default.rgw.control rgw
enabled application 'rgw' on pool 'default.rgw.control'
ubuntu@juju-be77dc-0-lxd-0:~$ sudo ceph osd pool application enable .rgw.root rgw
enabled application 'rgw' on pool '.rgw.root'
ubuntu@juju-be77dc-0-lxd-0:~$ sudo ceph health detail
HEALTH_OK

Nobuto Murata (nobuto)
description: updated
Revision history for this message
Ryan Beisner (1chb1n) wrote :

hi @nobuto - FYI, the openstack-on-lxd example bundles have been updated to use ceph-mon + ceph-osd instead of the deprecated legacy ceph charm.

Revision history for this message
James Hebden (ec0) wrote :

As a side-note for other intrepid travellers hitting this -
On a recent deployment, I also had to run 'ceph osd pool application enable default.rgw.buckets.index rgw' to get to HEALTH_OK.

Nobuto Murata (nobuto)
description: updated
Revision history for this message
Nobuto Murata (nobuto) wrote :

Subscribing ~field-high.

Originally I thought it was not applicable to Canonical's ~field-high since there was a workaround. However, this issue hits every single field deployment with Pike+ and has caused a lot of confusion. A broken Ceph status out of the box is hard to justify, so I believe ~field-high is now appropriate for a faster resolution.

Changed in charm-ceph-mon:
assignee: nobody → Chris MacNaughton (chris.macnaughton)
Changed in charms.ceph:
assignee: nobody → Chris MacNaughton (chris.macnaughton)
Changed in charm-ceph-mon:
status: Triaged → In Progress
tags: added: canonical-bootstack
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charms.ceph (master)

Reviewed: https://review.openstack.org/564463
Committed: https://git.openstack.org/cgit/openstack/charms.ceph/commit/?id=978e561782105db0cf47c1076434f0739dd3855d
Submitter: Zuul
Branch: master

commit 978e561782105db0cf47c1076434f0739dd3855d
Author: Chris MacNaughton <email address hidden>
Date: Thu Apr 26 13:11:43 2018 +0200

    Add broker support for passing app_name

    In Ceph >= Luminous, application name needs to be set
    on a per-pool level to avoid health warnings. This
    change adds support for sending the application name
    over the broker channel from consuming charms.

    When a name is not sent from the other side of the
    relation, the application name will be set to "unknown"
    in Luminous and greater

    Change-Id: I99ae47b6802f50ea019751ffa328f11567cca567
    Closes-Bug: #1753640
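
A minimal sketch of the mon-side behaviour described in the commit above, using a hypothetical helper around the ceph CLI rather than the actual charms.ceph code:

    import subprocess

    def enable_pool_application(pool_name, app_name=None):
        # Default to 'unknown' when the client charm did not send a name,
        # matching the behaviour described in the commit message.
        app_name = app_name or 'unknown'
        subprocess.check_call(['ceph', 'osd', 'pool', 'application',
                               'enable', pool_name, app_name])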

Changed in charms.ceph:
status: Triaged → Fix Released
Revision history for this message
Chris Gregan (cgregan) wrote :

The Field High SLA now requires that an estimated date for a fix is listed in the comments. Please provide this estimate for the open tasks.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-ceph-mon (master)

Reviewed: https://review.openstack.org/564466
Committed: https://git.openstack.org/cgit/openstack/charm-ceph-mon/commit/?id=22aa0c614601570e1d6de67936287e3ccf185432
Submitter: Zuul
Branch: master

commit 22aa0c614601570e1d6de67936287e3ccf185432
Author: Chris MacNaughton <email address hidden>
Date: Thu Apr 26 13:13:29 2018 +0200

    Add broker support for passing app_name

    In Ceph >= Luminous, application name needs to be set
    on a per-pool level to avoid health warnings. This
    change adds support for sending the application name
    over the broker channel from consuming charms.

    When a name is not sent from the other side of the
    relation, the application name will be set to "unknown"
    in Luminous and greater

    Change-Id: I1109251b08da20adaf3d677c38fc1aacfba29439
    Closes-Bug: #1753640
    Depends-On: I99ae47b6802f50ea019751ffa328f11567cca567

Changed in charm-ceph-mon:
status: In Progress → Fix Committed
James Page (james-page)
Changed in charm-ceph-mon:
milestone: none → 18.05
David Ames (thedac)
Changed in charm-ceph-mon:
status: Fix Committed → Fix Released
Revision history for this message
James Page (james-page) wrote :

The updates to ceph-mon will remove the warning messages; however, there is still a pending piece of work to actually provide the application name from each consuming charm.

I don't consider this field-high, as the application name is not actually used for anything in Ceph, so I am downgrading the remaining bug tasks to Low.

Could the field-high subscriber please be removed from this bug?

Changed in charm-ceph-fs:
importance: Medium → Low
Changed in charm-cinder-ceph:
importance: Medium → Low
Changed in charm-nova-compute:
importance: Medium → Low
Changed in charm-ceph-radosgw:
importance: Medium → Low
Changed in charm-glance:
importance: Medium → Low
Revision history for this message
David Coronel (davecore) wrote :

I just hit this issue in a deployment with ceph-proxy as a cinder-ceph backend. My workaround was:

juju ssh ceph-mon/0 "sudo ceph osd pool application enable cinder-ceph-proxy rbd"

Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

If you've got ceph in the model (as suggested by `juju ssh ceph-mon...`), why is ceph-proxy deployed?

Revision history for this message
David Coronel (davecore) wrote :

@chris: It's a second cinder backend that's connected to a ceph cluster in a different juju model on the same juju controller.

Revision history for this message
David Coronel (davecore) wrote :

I forgot to say I'm running that command in the other juju model where the ceph cluster is deployed.
