rgw uses default values when creating pools

Bug #1476749 reported by Edward Hope-Morley
This bug affects 1 person
Affects                                  Status         Importance   Assigned to          Milestone
ceph (Juju Charms Collection)            Fix Released   High         Edward Hope-Morley
ceph-radosgw (Juju Charms Collection)    Fix Released   High         Edward Hope-Morley

Bug Description

The rados gateway creates its own pools and uses default values for settings such as the number of placement groups, which in most cases will be far too low. The correct approach would be for the radosgw charm to create pools with optimal settings derived from, e.g., the number of OSDs in the cluster. This is how it is done for other ceph clients in the charms (e.g. cinder, glance, nova).

For example: http://paste.ubuntu.com/13208280/ where we can see that each rgw pool has a pg_num of 8 rather than num_osds * 100 / replicas (100 in this case).
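
As a rough illustration of the calculation described above, here is a minimal sketch assuming 3 OSDs and 3 replicas (which yields the 100 quoted above); the helper name is hypothetical and not the charm's actual code:

def pg_num_for_pool(num_osds, replicas):
    """pg_num target per the formula above: num_osds * 100 / replicas,
    with a floor at the Ceph default of 8."""
    return max(num_osds * 100 // replicas, 8)

# e.g. 3 OSDs with 3 replicas -> 100 PGs, versus the pg_num of 8
# seen in the paste above.
print(pg_num_for_pool(num_osds=3, replicas=3))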

James Page (james-page)
Changed in ceph-radosgw (Juju Charms Collection):
status: New → Confirmed
importance: Undecided → Low
status: Confirmed → Triaged
Nobuto Murata (nobuto)
tags: added: cpec
Revision history for this message
Edward Hope-Morley (hopem) wrote :

While the current setting is clearly suboptimal and likely to cause performance problems, applying the same pg_num to all RGW pools is probably not required, since some pools are likely to grow much more than others. I will carry out some tests to see what kind of distribution we get when loading RGW.

I am going to bump the priority here though since this is likely to rapidly cause performance problems.

description: updated
Changed in ceph-radosgw (Juju Charms Collection):
importance: Low → High
assignee: nobody → Edward Hope-Morley (hopem)
milestone: none → 16.01
status: Triaged → In Progress
tags: added: opn sts
tags: added: openstack
removed: opn
Revision history for this message
Edward Hope-Morley (hopem) wrote :

So .rgw.buckets is the pool that will likely always have more data in it than the rest:

# rados df
pool name                 KB  objects  clones  degraded  unfound      rd     rd KB     wr     wr KB
.rgw                       9       40       0         0        0    1998      1444    860       200
.rgw.buckets         8799860    18429       0         0        0  133013  21605579  75657  10524704
.rgw.buckets.index         0       60       0         0        0   66442     66760  65664         0
.rgw.control               0        8       0         0        0       0         0      0         0
.rgw.gc                    0       32       0         0        0    5548      5516   9176         0
.rgw.root                  1        3       0         0        0     282       188      3         3
.users.uid                 1        2       0         0        0    1233      1540   1020         1
rbd                        0        0       0         0        0       0         0      0         0
  total used        26900272    18574
  total avail        1377428
  total space       28277700

So we should at least give that pool (num_osds * 100 / replicas) PGs.
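
To restate that reasoning as code, the sketch below gives .rgw.buckets the full allocation and the metadata pools a small floor; the weights and helper name are illustrative assumptions, not what the charm actually does:

# Hypothetical per-pool weights: .rgw.buckets holds nearly all the data,
# the remaining pools only hold small amounts of metadata.
POOL_WEIGHTS = {
    '.rgw.buckets': 100,
    '.rgw.buckets.index': 1,
    '.rgw': 1,
    '.rgw.control': 1,
    '.rgw.gc': 1,
    '.rgw.root': 1,
    '.users.uid': 1,
}

def pg_counts(num_osds, replicas):
    """Give the data pool num_osds * 100 / replicas PGs and the
    metadata pools a minimum of 8."""
    base = num_osds * 100 // replicas
    return {name: max(base * weight // 100, 8)
            for name, weight in POOL_WEIGHTS.items()}

print(pg_counts(num_osds=3, replicas=3))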

Revision history for this message
Edward Hope-Morley (hopem) wrote :

The ceph charm's ceph-radosgw interface also needs to handle broker requests.
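
For context, a minimal sketch of the kind of broker request the radosgw charm could send over the relation; the JSON field names follow the charm-helpers/ceph charm broker format as I understand it, and pg_num support in particular is an assumption here rather than a description of the actual fix:

import json

# The request is serialised to JSON and passed over the ceph-radosgw <->
# ceph relation; the ceph charm's broker decodes it and creates the pool
# on the client's behalf.
request = {
    'api-version': 1,
    'ops': [
        {
            'op': 'create-pool',
            'name': '.rgw.buckets',
            'replicas': 3,
            # derived from the cluster size rather than the default of 8
            'pg_num': 100,
        },
    ],
}

print(json.dumps(request))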

Changed in ceph (Juju Charms Collection):
status: New → In Progress
importance: Undecided → High
assignee: nobody → Edward Hope-Morley (hopem)
milestone: none → 16.01
Changed in ceph (Juju Charms Collection):
status: In Progress → Fix Committed
Changed in ceph-radosgw (Juju Charms Collection):
status: In Progress → Fix Committed
James Page (james-page)
Changed in ceph-radosgw (Juju Charms Collection):
status: Fix Committed → Fix Released
Changed in ceph (Juju Charms Collection):
status: Fix Committed → Fix Released