2018-03-06 04:54:09 |
Nobuto Murata |
bug |
|
|
added bug |
2018-03-06 04:54:19 |
Nobuto Murata |
tags |
|
cpe-onsite |
|
2018-03-09 03:13:08 |
Nobuto Murata |
description |
Just after an OpenStack deployment, the Ceph cluster stays in HEALTH_WARN. This breaks status monitoring, for example by NRPE.
xenial + cloud:xenial-pike
$ sudo ceph version
sudo: unable to resolve host juju-3ea7d7-1
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)
$ sudo ceph health detail
sudo: unable to resolve host juju-3ea7d7-1
HEALTH_WARN application not enabled on 4 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 4 pool(s)
application not enabled on pool 'default.rgw.control'
application not enabled on pool '.rgw.root'
application not enabled on pool 'glance'
application not enabled on pool 'cinder-ceph'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-app-not-enabled
http://docs.ceph.com/docs/master/rados/operations/pools/#associate-pool-to-application |
Just after an OpenStack deployment, the Ceph cluster stays in HEALTH_WARN. This breaks status monitoring, for example by NRPE.
xenial + cloud:xenial-pike
$ sudo ceph version
sudo: unable to resolve host juju-3ea7d7-1
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)
$ ceph health detail
HEALTH_WARN application not enabled on 2 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 2 pool(s)
application not enabled on pool 'default.rgw.control'
application not enabled on pool '.rgw.root'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-app-not-enabled
http://docs.ceph.com/docs/master/rados/operations/pools/#associate-pool-to-application
ceph osd dump | grep pool
pool 1 'default.rgw.buckets' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 4 flags hashpspool stripe_width 0
pool 2 'default.rgw' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 6 flags hashpspool stripe_width 0
pool 3 'default.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 8 flags hashpspool stripe_width 0
pool 4 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 10 flags hashpspool stripe_width 0
pool 5 'default.rgw.gc' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 12 flags hashpspool stripe_width 0
pool 6 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 14 flags hashpspool stripe_width 0
pool 7 'default.rgw.buckets.extra' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 16 flags hashpspool stripe_width 0
pool 8 'default.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 18 flags hashpspool stripe_width 0
pool 9 'default.intent-log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 20 flags hashpspool stripe_width 0
pool 10 'default.usage' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 22 flags hashpspool stripe_width 0
pool 11 'default.users' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 24 flags hashpspool stripe_width 0
pool 12 'default.users.email' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 26 flags hashpspool stripe_width 0
pool 13 'default.users.swift' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 28 flags hashpspool stripe_width 0
pool 14 'default.users.uid' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 30 flags hashpspool stripe_width 0
pool 15 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 32 flags hashpspool stripe_width 0
pool 16 'gnocchi' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 34 flags hashpspool stripe_width 0
pool 17 'glance' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 179 flags hashpspool stripe_width 0
pool 18 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 183 flags hashpspool stripe_width 0 application rgw
pool 19 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 185 flags hashpspool stripe_width 0 application rgw
pool 20 'cinder-ceph' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 190 flags hashpspool stripe_width 0 |
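(A minimal sketch, not part of the original report: in the 'ceph osd dump' output above, already-tagged pools carry an "application <name>" suffix, so filtering the pool lines for entries without one lists exactly the pools behind POOL_APP_NOT_ENABLED.)
$ # sketch only -- relies on the 'ceph osd dump' line format shown above
$ sudo ceph osd dump | grep '^pool ' | grep -v application | awk -F"'" '{print $2}'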
|
2018-03-09 03:22:49 |
Nobuto Murata |
bug task added |
|
charm-ceph-radosgw |
|
2018-03-15 11:37:38 |
James Page |
bug task added |
|
charms.ceph |
|
2018-03-15 11:37:50 |
James Page |
bug task added |
|
charm-glance |
|
2018-03-15 11:38:03 |
James Page |
bug task added |
|
charm-cinder-ceph |
|
2018-03-15 11:38:17 |
James Page |
bug task added |
|
charm-nova-compute |
|
2018-03-15 11:39:55 |
James Page |
bug task added |
|
charm-ceph-fs |
|
2018-03-15 11:41:13 |
James Page |
charm-ceph-fs: status |
New |
Triaged |
|
2018-03-15 11:41:14 |
James Page |
charm-ceph-mon: status |
New |
Triaged |
|
2018-03-15 11:41:14 |
James Page |
charm-ceph-radosgw: status |
New |
Triaged |
|
2018-03-15 11:41:15 |
James Page |
charm-cinder-ceph: status |
New |
Triaged |
|
2018-03-15 11:41:16 |
James Page |
charm-glance: status |
New |
Triaged |
|
2018-03-15 11:41:17 |
James Page |
charm-nova-compute: status |
New |
Triaged |
|
2018-03-15 11:41:18 |
James Page |
charms.ceph: status |
New |
Triaged |
|
2018-03-15 11:41:19 |
James Page |
charm-ceph-fs: importance |
Undecided |
Medium |
|
2018-03-15 11:41:20 |
James Page |
charm-ceph-mon: importance |
Undecided |
Medium |
|
2018-03-15 11:41:20 |
James Page |
charm-ceph-radosgw: importance |
Undecided |
Medium |
|
2018-03-15 11:41:21 |
James Page |
charm-cinder-ceph: importance |
Undecided |
Medium |
|
2018-03-15 11:41:22 |
James Page |
charm-glance: importance |
Undecided |
Medium |
|
2018-03-15 11:41:22 |
James Page |
charm-nova-compute: importance |
Undecided |
Medium |
|
2018-03-15 11:41:23 |
James Page |
charms.ceph: importance |
Undecided |
Medium |
|
2018-04-04 15:43:51 |
Sandor Zeestraten |
bug |
|
|
added subscriber Sandor Zeestraten |
2018-04-04 19:30:04 |
Ryan Beisner |
tags |
cpe-onsite |
cpe-onsite uosci |
|
2018-04-04 23:34:25 |
Dmitrii Shcherbakov |
bug |
|
|
added subscriber Dmitrii Shcherbakov |
2018-04-05 00:23:20 |
Nobuto Murata |
description |
Just after an OpenStack deployment, the Ceph cluster stays in HEALTH_WARN. This breaks status monitoring, for example by NRPE.
xenial + cloud:xenial-pike
$ sudo ceph version
sudo: unable to resolve host juju-3ea7d7-1
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)
$ ceph health detail
HEALTH_WARN application not enabled on 2 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 2 pool(s)
application not enabled on pool 'default.rgw.control'
application not enabled on pool '.rgw.root'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-app-not-enabled
http://docs.ceph.com/docs/master/rados/operations/pools/#associate-pool-to-application
ceph osd dump | grep pool
pool 1 'default.rgw.buckets' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 4 flags hashpspool stripe_width 0
pool 2 'default.rgw' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 6 flags hashpspool stripe_width 0
pool 3 'default.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 8 flags hashpspool stripe_width 0
pool 4 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 10 flags hashpspool stripe_width 0
pool 5 'default.rgw.gc' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 12 flags hashpspool stripe_width 0
pool 6 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 14 flags hashpspool stripe_width 0
pool 7 'default.rgw.buckets.extra' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 16 flags hashpspool stripe_width 0
pool 8 'default.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 18 flags hashpspool stripe_width 0
pool 9 'default.intent-log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 20 flags hashpspool stripe_width 0
pool 10 'default.usage' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 22 flags hashpspool stripe_width 0
pool 11 'default.users' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 24 flags hashpspool stripe_width 0
pool 12 'default.users.email' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 26 flags hashpspool stripe_width 0
pool 13 'default.users.swift' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 28 flags hashpspool stripe_width 0
pool 14 'default.users.uid' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 30 flags hashpspool stripe_width 0
pool 15 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 32 flags hashpspool stripe_width 0
pool 16 'gnocchi' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 34 flags hashpspool stripe_width 0
pool 17 'glance' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 179 flags hashpspool stripe_width 0
pool 18 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 183 flags hashpspool stripe_width 0 application rgw
pool 19 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 185 flags hashpspool stripe_width 0 application rgw
pool 20 'cinder-ceph' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 190 flags hashpspool stripe_width 0 |
Just after an OpenStack deployment, the Ceph cluster stays in HEALTH_WARN. This breaks status monitoring, for example by NRPE.
xenial + cloud:xenial-pike
$ sudo ceph version
sudo: unable to resolve host juju-3ea7d7-1
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)
$ ceph health detail
HEALTH_WARN application not enabled on 2 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 2 pool(s)
application not enabled on pool 'default.rgw.control'
application not enabled on pool '.rgw.root'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-app-not-enabled
http://docs.ceph.com/docs/master/rados/operations/pools/#associate-pool-to-application
[WORKAROUND]
$ juju run --unit ceph-mon/0 '
ceph osd pool application enable default.rgw.control rgw
ceph osd pool application enable .rgw.root rgw
ceph osd pool application enable glance rbd
ceph osd pool application enable cinder-ceph rbd
ceph osd pool application enable gnocchi rgw
' |
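(A follow-up sketch, an assumption rather than something from the report: after running the workaround, every in-use pool should report a non-empty application set and the POOL_APP_NOT_ENABLED warning should clear.)
$ juju run --unit ceph-mon/0 '
# sketch only -- "ceph osd pool application get" prints {} for untagged pools
for pool in $(ceph osd pool ls); do
    printf "%s: " "$pool"; ceph osd pool application get "$pool"
done
ceph health detail
'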
|
2018-04-06 05:35:10 |
Dominique Poulain |
bug |
|
|
added subscriber Dominique Poulain |
2018-04-26 05:50:45 |
Nobuto Murata |
description |
Just after an OpenStack deployment, the Ceph cluster stays in HEALTH_WARN. This breaks status monitoring, for example by NRPE.
xenial + cloud:xenial-pike
$ sudo ceph version
sudo: unable to resolve host juju-3ea7d7-1
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)
$ ceph health detail
HEALTH_WARN application not enabled on 2 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 2 pool(s)
application not enabled on pool 'default.rgw.control'
application not enabled on pool '.rgw.root'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-app-not-enabled
http://docs.ceph.com/docs/master/rados/operations/pools/#associate-pool-to-application
[WORKAROUND]
$ juju run --unit ceph-mon/0 '
ceph osd pool application enable default.rgw.control rgw
ceph osd pool application enable .rgw.root rgw
ceph osd pool application enable glance rbd
ceph osd pool application enable cinder-ceph rbd
ceph osd pool application enable gnocchi rgw
' |
Just after an OpenStack deployment, the Ceph cluster stays in HEALTH_WARN. This breaks status monitoring, for example by NRPE.
xenial + cloud:xenial-pike
$ sudo ceph version
sudo: unable to resolve host juju-3ea7d7-1
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)
$ ceph health detail
HEALTH_WARN application not enabled on 2 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 2 pool(s)
application not enabled on pool 'default.rgw.control'
application not enabled on pool '.rgw.root'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-app-not-enabled
http://docs.ceph.com/docs/master/rados/operations/pools/#associate-pool-to-application
[WORKAROUND]
$ juju run --unit ceph-mon/0 '
ceph osd pool application enable default.rgw.control rgw
ceph osd pool application enable default.rgw.buckets.index rgw
ceph osd pool application enable .rgw.root rgw
ceph osd pool application enable glance rbd
ceph osd pool application enable cinder-ceph rbd
ceph osd pool application enable gnocchi rgw
' |
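(For reference, the charm-side fix tracked in the later status entries is expected to leave newly created pools tagged from the start. A hypothetical sketch of the equivalent manual steps for a new RBD pool, reusing the 'cinder-ceph' name and pg count from the dump above -- not the actual charms.ceph code.)
$ sudo ceph osd pool create cinder-ceph 512
$ sudo ceph osd pool application enable cinder-ceph rbd
$ sudo ceph health detail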
|
2018-04-26 05:51:11 |
Nobuto Murata |
bug |
|
|
added subscriber Canonical Field High |
2018-04-26 11:54:52 |
Chris MacNaughton |
charm-ceph-mon: assignee |
|
Chris MacNaughton (chris.macnaughton) |
|
2018-04-26 11:54:56 |
Chris MacNaughton |
charms.ceph: assignee |
|
Chris MacNaughton (chris.macnaughton) |
|
2018-04-26 12:09:16 |
OpenStack Infra |
charm-ceph-mon: status |
Triaged |
In Progress |
|
2018-04-30 06:21:37 |
Craige McWhirter |
tags |
cpe-onsite uosci |
canonical-bootstack cpe-onsite uosci |
|
2018-04-30 14:59:11 |
OpenStack Infra |
charms.ceph: status |
Triaged |
Fix Released |
|
2018-05-02 17:01:28 |
Alvaro Uria |
bug |
|
|
added subscriber The Canonical Sysadmins |
2018-05-02 17:01:38 |
Alvaro Uria |
bug |
|
|
added subscriber Canonical IS BootStack |
2018-05-09 12:05:01 |
OpenStack Infra |
charm-ceph-mon: status |
In Progress |
Fix Committed |
|
2018-06-06 08:46:15 |
James Page |
charm-ceph-mon: milestone |
|
18.05 |
|
2018-06-11 22:12:59 |
David Ames |
charm-ceph-mon: status |
Fix Committed |
Fix Released |
|
2018-12-03 09:25:38 |
James Page |
charm-ceph-fs: importance |
Medium |
Low |
|
2018-12-03 09:25:42 |
James Page |
charm-cinder-ceph: importance |
Medium |
Low |
|
2018-12-03 09:25:45 |
James Page |
charm-nova-compute: importance |
Medium |
Low |
|
2018-12-03 09:25:49 |
James Page |
charm-ceph-radosgw: importance |
Medium |
Low |
|
2018-12-03 09:25:51 |
James Page |
charm-glance: importance |
Medium |
Low |
|
2018-12-03 10:15:54 |
Nobuto Murata |
removed subscriber Canonical Field High |
|
|
|
2020-07-07 15:18:13 |
David Coronel |
bug |
|
|
added subscriber David Coronel |