charm needs a pg_num cap when deploying to avoid creating too many pgs
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Charm Helpers | Invalid | Medium | Unassigned | |
| ceph-radosgw (Juju Charms Collection) | Fix Released | Medium | Billy Olsen | 16.07 |
| cinder-ceph (Juju Charms Collection) | Fix Released | Medium | Billy Olsen | 16.07 |
| glance (Juju Charms Collection) | Fix Released | Medium | Billy Olsen | 16.07 |
| nova-compute (Juju Charms Collection) | Fix Released | Medium | Billy Olsen | 16.07 |
Bug Description
As pg_num cannot be decreased once set, the charms should create fewer PGs up front to be fail-safe.
http://
```
$ sudo ceph status
    cluster 026593f1-
     health HEALTH_WARN
            too many PGs per OSD (337 > max 300)
     monmap e2: 3 mons at {angha=
     osdmap e26: 5 osds: 5 up, 5 in
      pgmap v835: 562 pgs, 4 pools, 12270 MB data, 1568 objects
            36779 MB used, 2286 GB / 2322 GB avail
  client io 26155 kB/s wr, 6 op/s

$ sudo ceph osd dump | grep pg_num
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 1 'nova' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 166 pgp_num 166 last_change 13 flags hashpspool stripe_width 0
pool 2 'glance' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 166 pgp_num 166 last_change 25 flags hashpspool stripe_width 0
pool 3 'cinder-ceph' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 166 pgp_num 166 last_change 20 flags hashpspool stripe_width 0

$ sudo ceph osd pool set glance pg_num 64
Error EEXIST: specified pg_num 64 <= current 166
```
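The warning figure above can be checked directly: each PG counts once per replica against the OSDs hosting it, so 562 PGs with 3 replicas spread over 5 OSDs gives roughly 337 PGs per OSD. A minimal sketch of that arithmetic (the function and pool dictionary here are ours, taken from the `ceph osd dump` output above, not from the charm code):

```python
def pgs_per_osd(total_pgs, replicas, osds):
    """Each PG is replicated, so every replica counts against an OSD."""
    return total_pgs * replicas // osds

# PG counts per pool, as reported by `ceph osd dump | grep pg_num`
pools = {'rbd': 64, 'nova': 166, 'glance': 166, 'cinder-ceph': 166}
total = sum(pools.values())
print(total)                      # 562 PGs, matching `ceph status`
print(pgs_per_osd(total, 3, 5))   # 337, tripping the > 300 warning
```

This shows why each charm sizing its pool independently overshoots: every pool is created as if it were the only consumer of the cluster's PG budget.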
Related branches
- Jorge Niedbalski (community): Approve
- Chris Holcombe (community): Approve
- Edward Hope-Morley: Pending (review requested)

Diff: 429 lines (+209/-59), 2 files modified:
- charmhelpers/contrib/storage/linux/ceph.py (+129/-41)
- tests/contrib/storage/test_linux_ceph.py (+80/-18)
tags: added: cpec
Changed in cinder-ceph (Juju Charms Collection): milestone: none → 16.01
Changed in glance (Juju Charms Collection): milestone: none → 16.01
Changed in nova-compute (Juju Charms Collection): milestone: none → 16.01
Changed in cinder-ceph (Juju Charms Collection): importance: Undecided → Medium
Changed in nova-compute (Juju Charms Collection): importance: Undecided → Medium
Changed in glance (Juju Charms Collection): importance: Undecided → Medium
Changed in charm-helpers: importance: Undecided → Medium; status: New → Invalid
Changed in cinder-ceph (Juju Charms Collection): status: New → Triaged
Changed in glance (Juju Charms Collection): status: New → Triaged
Changed in nova-compute (Juju Charms Collection): status: New → Triaged
Changed in glance (Juju Charms Collection): milestone: 16.01 → 16.04
Changed in nova-compute (Juju Charms Collection): milestone: 16.01 → 16.04
Changed in cinder-ceph (Juju Charms Collection): milestone: 16.01 → 16.04
tags: added: canonical-bootstack
Changed in glance (Juju Charms Collection): milestone: 16.04 → 16.07
Changed in nova-compute (Juju Charms Collection): milestone: 16.04 → 16.07
Changed in cinder-ceph (Juju Charms Collection): milestone: 16.04 → 16.07
tags: added: kanban-cross-team
tags: removed: kanban-cross-team
Changed in cinder-ceph (Juju Charms Collection): assignee: nobody → Billy Olsen (billy-olsen)
Changed in glance (Juju Charms Collection): assignee: nobody → Billy Olsen (billy-olsen)
Changed in nova-compute (Juju Charms Collection): assignee: nobody → Billy Olsen (billy-olsen)
Changed in ceph-radosgw (Juju Charms Collection): importance: Undecided → Medium; assignee: nobody → Billy Olsen (billy-olsen); milestone: none → 16.07
Changed in glance (Juju Charms Collection): status: Fix Committed → Fix Released
Changed in nova-compute (Juju Charms Collection): status: Fix Committed → Fix Released
Changed in cinder-ceph (Juju Charms Collection): status: Fix Committed → Fix Released
Changed in ceph-radosgw (Juju Charms Collection): status: Fix Committed → Fix Released
This issue can be reproduced easily when the nova, glance and cinder-ceph pools are created in a single Ceph cluster.
[hooks/charmhelpers/contrib/storage/linux/ceph.py]

```python
def create_pool(service, name, replicas=3):
    """Create a new RADOS pool."""
    if pool_exists(service, name):
        log("Ceph pool {} already exists, skipping creation".format(name),
            level=WARNING)
        return

    # Calculate the number of placement groups based
    # on upstream recommended best practices.
    osds = get_osds(service)
    if osds:
        pgnum = (len(osds) * 100 // replicas)
    else:
        # NOTE(james-page): Default to 200 for older ceph versions
        # which don't support OSD query from cli
        pgnum = 200

    cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
    check_call(cmd)
```
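Note that `create_pool` above sizes each pool as if it were the only pool in the cluster (`len(osds) * 100 / replicas`), so four pools created this way overshoot the per-OSD budget fourfold. A sketch of the kind of cap the fix introduces: weight each pool by its expected share of cluster data and round down to a power of two. The names (`capped_pg_num`, `MAX_PGS_PER_OSD`, `pool_weight`) are ours for illustration, not the identifiers used in the actual charm-helpers change:

```python
# Upstream guidance targets ~100 PGs per OSD across all pools;
# Ceph starts warning above mon_pg_warn_max_per_osd (300 here).
MAX_PGS_PER_OSD = 100

def capped_pg_num(osd_count, replicas, pool_weight=1.0):
    """Return a pg_num for one pool, scaled by the fraction of cluster
    data the pool is expected to hold (pool_weight in [0, 1]) and
    rounded down to a power of two, as Ceph recommends."""
    target = int(osd_count * MAX_PGS_PER_OSD * pool_weight // replicas)
    power = 1
    while power * 2 <= target:
        power *= 2
    return power

# Example: 5 OSDs, 3 replicas, a pool expected to hold ~40% of the data
print(capped_pg_num(5, 3, 0.4))  # 64
```

Because pg_num can only ever grow, starting low like this is safe: an undersized pool can be split later, while an oversized one (as the `Error EEXIST` above shows) cannot be shrunk.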