Ceph pool sizes need to be improved

Bug #1533666 reported by Stepan Rogov
Affects: Fuel for OpenStack
Status: Invalid
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Fuel 7.0
After a deployment with the following parameters:
Ceph RBD for volumes (Cinder)
Ceph RBD for images (Glance)
Ceph RBD for ephemeral volumes (Nova)
Ceph RadosGW for objects (Swift API)

I have the following pools in Ceph:
root@node-5:~# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED   %RAW USED
    89367G     89366G     982M       0
POOLS:
    NAME           ID   USED     %USED   MAX AVAIL   OBJECTS
    data           0    0        0       44683G      0
    metadata       1    0        0       44683G      0
    rbd            2    0        0       44683G      0
    images         3    12976k   0       44683G      5
    volumes        4    0        0       44683G      1
    backups        5    0        0       44683G      0
    .rgw.root      6    840      0       44683G      3
    .rgw.control   7    0        0       44683G      8
    .rgw           8    0        0       44683G      0
    .rgw.gc        9    0        0       44683G      32
    .users.uid     10   564      0       44683G      2
    .users         11   42       0       44683G      3
    compute        12   0        0       44683G      2

It seems that too much space is reported for the following pools: .rgw*, .users*, data, metadata and rbd.
It also looks like the default Ceph pools (data, metadata) could be removed.
I propose calculating a sensible size for each pool during deployment (in the general case, for most users) and removing the unused pools if that is possible; see the sketch below.
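
For reference, a minimal sketch of the kind of per-pool calculation I have in mind, following the usual Ceph pgcalc heuristic (pg_num = target PGs per OSD * OSD count * the pool's expected data share / replica size, rounded up to a power of two). The pool weights, OSD count and helper names below are illustrative assumptions only, not values taken from Fuel:

import math

def pg_count(num_osds, data_share, replica_size=3, target_pgs_per_osd=100):
    # Standard pgcalc-style heuristic: scale PGs by the pool's expected
    # share of cluster data, then round up to a power of two (minimum 32).
    raw = target_pgs_per_osd * num_osds * data_share / replica_size
    return max(32, 2 ** math.ceil(math.log2(raw))) if raw > 0 else 32

# Assumed weights: most data lands in volumes/compute/images,
# almost nothing in the .rgw*/.users* bookkeeping pools.
pool_weights = {
    "volumes": 0.40, "compute": 0.30, "images": 0.15, "backups": 0.10,
    ".rgw": 0.04, ".rgw.root": 0.002, ".rgw.control": 0.002, ".rgw.gc": 0.002,
    ".users": 0.002, ".users.uid": 0.002,
}

if __name__ == "__main__":
    osds = 50  # assumed OSD count, just for the example
    for pool, share in pool_weights.items():
        print("%s: pg_num=%d" % (pool, pg_count(osds, share)))

As far as I know, the default data and metadata pools are only used by CephFS, so when CephFS is not deployed they could be dropped with ceph osd pool delete.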

Stepan Rogov (srogov)
description: updated
description: updated
description: updated
description: updated
Stepan Rogov (srogov)
Changed in fuel:
status: New → Invalid
Maciej Relewicz (rlu)
tags: added: area-library