rgw uses default values when creating pools
Bug #1476749 reported by Edward Hope-Morley
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| ceph (Juju Charms Collection) | Fix Released | High | Edward Hope-Morley | |
| ceph-radosgw (Juju Charms Collection) | Fix Released | High | Edward Hope-Morley | |
Bug Description
The rados gateway creates its own pools and uses default values for settings such as the number of placement groups, which in most cases will be far too low. The correct approach would be for the radosgw charm to create pools with optimal settings derived from, e.g., the number of OSDs in the cluster. This is how it is done for the other ceph clients in the charms (e.g. cinder, glance, nova).
For example: http://
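As a sketch of what deriving pool settings from cluster size could look like: a common Ceph guideline is roughly 100 placement groups per OSD, divided by the pool's replica count and rounded up to a power of two. The names below (`calculate_pg_num`, `DEFAULT_PGS_PER_OSD`) are illustrative assumptions, not the charm's actual API:

```python
# Hypothetical sketch: derive pg_num for a new pool from the OSD count,
# following the common guideline of ~100 PGs per OSD divided by the pool's
# replica count, rounded up to a power of two.

DEFAULT_PGS_PER_OSD = 100


def calculate_pg_num(osd_count, replica_count=3):
    """Estimate a sensible pg_num for a cluster with osd_count OSDs."""
    if osd_count < 1:
        # No OSD information available yet; fall back to a small default.
        return 64
    target = (osd_count * DEFAULT_PGS_PER_OSD) // replica_count
    # Round up to the next power of two, as recommended for pg_num.
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    return pg_num
```

For a 12-OSD cluster with 3 replicas this gives (12 * 100) / 3 = 400, rounded up to 512, far higher than the small stock default the gateway would otherwise use.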
Related branches
lp:~hopem/charms/trusty/ceph-radosgw/lp1476749
- Liam Young (community): Needs Fixing
- Chris MacNaughton (community): Approve
- OpenStack Charmers: Pending (review requested)
Diff: 1253 lines (+554/-77), 20 files modified
config.yaml (+21/-0)
hooks/ceph.py (+51/-0)
hooks/charmhelpers/cli/__init__.py (+3/-3)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+102/-2)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+25/-3)
hooks/charmhelpers/contrib/openstack/context.py (+25/-9)
hooks/charmhelpers/contrib/openstack/neutron.py (+16/-2)
hooks/charmhelpers/contrib/openstack/utils.py (+22/-1)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+51/-35)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+10/-0)
hooks/charmhelpers/core/hookenv.py (+14/-0)
hooks/charmhelpers/core/host.py (+34/-3)
hooks/charmhelpers/core/hugepage.py (+2/-0)
hooks/charmhelpers/core/services/helpers.py (+5/-2)
hooks/charmhelpers/core/templating.py (+13/-6)
hooks/charmhelpers/fetch/__init__.py (+1/-1)
hooks/hooks.py (+14/-5)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+102/-2)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+25/-3)
unit_tests/test_hooks.py (+18/-0)
lp:~hopem/charms/trusty/ceph/lp1476749
- Chris MacNaughton (community): Approve
- OpenStack Charmers: Pending (review requested)
Diff: 147 lines (+78/-12), 3 files modified
hooks/ceph_broker.py (+16/-1)
hooks/ceph_hooks.py (+20/-9)
unit_tests/test_ceph_broker.py (+42/-2)
Changed in ceph-radosgw (Juju Charms Collection):
  status: New → Confirmed
  importance: Undecided → Low
  status: Confirmed → Triaged
  tags: added: cpec
Changed in ceph (Juju Charms Collection):
  status: In Progress → Fix Committed
Changed in ceph-radosgw (Juju Charms Collection):
  status: In Progress → Fix Committed
Changed in ceph-radosgw (Juju Charms Collection):
  status: Fix Committed → Fix Released
Changed in ceph (Juju Charms Collection):
  status: Fix Committed → Fix Released
While the current setting is clearly suboptimal and likely to cause performance problems, always applying the same pg_num to all RGW pools is probably not required, since some pools are likely to grow far more than others. I will carry out some tests to see what kind of distribution we get when loading RGW.

I am going to bump the priority here, though, since this is likely to cause performance problems quickly.
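To illustrate the point about uneven growth, per-pool pg_num could be weighted by the share of data each pool is expected to hold instead of applying one value everywhere. The weights in this sketch are hypothetical placeholders, not measured values from the tests mentioned above:

```python
# Hypothetical sketch: weight the cluster-wide PG budget per pool rather
# than giving every RGW pool the same pg_num.

def pg_num_for_pool(osd_count, percent_data, replica_count=3):
    """Scale the PG budget by the share of data a pool is expected to hold."""
    budget = (osd_count * 100) // replica_count  # ~100 PGs per OSD
    target = max(int(budget * percent_data / 100.0), 8)
    # Round up to the next power of two.
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    return pg_num

# Most RGW data lands in the buckets data pool, so weight it heavily:
# pg_num_for_pool(12, percent_data=80)  -> 512
# pg_num_for_pool(12, percent_data=1)   -> 8   (small metadata/control pools)
```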