cinder charm sets insufficient pg count for rbd pool
Bug #1226823 reported by Edward Hope-Morley
This bug affects 2 people
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
ceph (Juju Charms Collection) | Invalid | Undecided | Unassigned |
cinder (Juju Charms Collection) | Fix Released | Undecided | Edward Hope-Morley |
Bug Description
When cinder is related with ceph and subsequently creates a new 'cinder' pool, it does not set a sufficient number of placement groups on that pool. The default is extremely low (~8) which results in a small number of OSDs rapidly filling up.
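As background (not part of the original report): the usual sizing guidance in the Ceph documentation is to target roughly 100 placement groups per OSD, divided by the pool's replica count, rounded up to a power of two — which is why a default of ~8 PGs concentrates data on a handful of OSDs. A minimal sketch of that heuristic, with illustrative names and defaults:

```python
def suggested_pg_count(osd_count, replica_count=3, pgs_per_osd=100):
    """Heuristic from the Ceph sizing guidance: aim for roughly
    pgs_per_osd placement groups per OSD, divided by the pool's
    replica count, rounded up to the next power of two."""
    target = (osd_count * pgs_per_osd) / float(replica_count)
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    return pg_num
```

For a 30-OSD cluster with 3x replication this yields 1024, in line with the workaround discussed in the comments below; the exact defaults are an assumption for illustration, not taken from the charm.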
Related branches
lp:~hopem/charms/precise/cinder/lp1226823
- James Page: Pending requested
- Adam Gandelman: Pending requested

Diff: 51 lines (+23/-2), 3 files modified:
- config.yaml (+10/-0)
- hooks/cinder-hooks (+12/-1)
- revision (+1/-1)
lp:~hopem/charms/precise/cinder/python-redux.lp1226823
- Adam Gandelman: Pending requested

Diff: 93 lines (+21/-8), 6 files modified:
- config.yaml (+10/-0)
- hooks/cinder_hooks.py (+3/-1)
- hooks/cinder_utils.py (+2/-2)
- revision (+1/-1)
- unit_tests/test_cinder_hooks.py (+1/-1)
- unit_tests/test_cinder_utils.py (+4/-3)
Changed in charms:
  assignee: nobody → Edward Hope-Morley (hopem)
  status: New → In Progress
  affects: charms → ceph (Juju Charms Collection)
Changed in cinder (Juju Charms Collection):
  status: New → In Progress
  assignee: nobody → Edward Hope-Morley (hopem)
Changed in ceph (Juju Charms Collection):
  assignee: Edward Hope-Morley (hopem) → nobody
  status: In Progress → New
  description: updated
Changed in ceph (Juju Charms Collection):
  status: New → Invalid
Changed in cinder (Juju Charms Collection):
  status: In Progress → Fix Released
Commented on the merge, but I think this is a general issue with the Ceph charm. The details of replication count, placement groups, etc. are internal details of the ceph charm; a remote service (cinder, glance, etc.) shouldn't care about them. I'd love it if we could place the responsibility for creating ceph pools in the ceph charm, similar to how it is the mysql charm's job to create a functioning database and hand it back to the related service. Of course, that is a much larger code change across many charms. In the meantime, perhaps setting the PG count to 1024 during cinder's pool creation is an acceptable workaround, assuming that is a sane default.
Copied from the merge:
"We need a better way of managing these details. I don't believe the remote client services should be dictating the particulars of the ceph pools they use. I'd be happier if, via the relation, a service like cinder could simply request the creation of some pools, eg ['cinder', 'images', 'otherstuff'] and it is up to the ceph charm to create those pools with parameters that are sensible for the size/capacity of the deployed ceph cluster, probably based on its charm config."
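The interface proposed above could be sketched as follows — a hypothetical helper on the ceph charm side that takes the list of pool names requested over the relation and emits `ceph osd pool create` commands sized to the cluster. All names here are illustrative, not actual charm code:

```python
def build_pool_create_commands(requested_pools, osd_count, replica_count=3):
    """For each pool name requested by a client service over the
    relation, compute a pg_num sized to the deployed cluster
    (~100 PGs per OSD / replica count, rounded up to a power of
    two) and build the corresponding pool-creation command."""
    target = (osd_count * 100) / float(replica_count)
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    return ['ceph osd pool create %s %d' % (name, pg_num)
            for name in requested_pools]
```

Under this split the client charm only ships the pool names (e.g. `['cinder', 'images']`), and the ceph charm decides the parameters from its own view of the cluster, much as the comment suggests.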