Ceph pool sizes need to be improved
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Invalid | Undecided | Unassigned |
Bug Description
Fuel 7.0
After a deployment with the following options enabled:
Ceph RBD for volumes (Cinder)
Ceph RBD for images (Glance)
Ceph RBD for ephemeral volumes (Nova)
Ceph RadosGW for objects (Swift API)
I see the following pools in Ceph:
```
root@node-5:~# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    89367G     89366G         982M             0
POOLS:
    NAME             ID     USED       %USED     MAX AVAIL     OBJECTS
    data             0           0         0        44683G           0
    metadata         1           0         0        44683G           0
    rbd              2           0         0        44683G           0
    images           3      12976k         0        44683G           5
    volumes          4           0         0        44683G           1
    backups          5           0         0        44683G           0
    .rgw.root        6         840         0        44683G           3
    .rgw.control     7           0         0        44683G           8
    .rgw             8           0         0        44683G           0
    .rgw.gc          9           0         0        44683G          32
    .users.uid       10        564         0        44683G           2
    .users           11         42         0        44683G           3
    compute          12          0         0        44683G           2
```
It seems that too much space is given to the following pools: .rgw*, .users*, data, metadata, rbd.
It also seems that the default Ceph pools (data and metadata) could be removed.
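If the default pools really are unused, they could in principle be dropped after deployment. A minimal sketch of the operator commands (not Fuel code; the leading `echo` makes this a dry run so nothing is deleted by accident, remove it to actually delete the pools):

```shell
# Dry run: print the delete command for each unused default pool.
# Remove the leading "echo" to actually delete them (destructive!).
for pool in data metadata; do
    echo ceph osd pool delete "$pool" "$pool" --yes-i-really-really-mean-it
done
```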
I propose calculating reasonable pool sizes during the deployment process (sane defaults for the general case, for most users) and removing unused pools (if possible).
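For the sizing part, the commonly cited Ceph guideline is roughly: total PGs ≈ (number of OSDs × 100) / replica count, split across pools by their expected share of the data and rounded up to a power of two. A minimal sketch of how deployment tooling could apply this, assuming hypothetical per-pool weights (this is not Fuel's actual logic, just an illustration):

```python
# Sketch of per-pool pg_num selection using the common Ceph guideline:
#   total PGs ~= (num_osds * 100) / replica_count
# split across pools by expected data share, rounded up to a power of two.
# The weights in the example below are hypothetical, not Fuel defaults.

def next_power_of_two(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def pg_num_for_pool(num_osds, replica_count, pool_weight, pgs_per_osd=100):
    """pool_weight is the fraction of cluster data expected in this pool."""
    total_pgs = num_osds * pgs_per_osd / replica_count
    # Floor at 32 PGs so tiny pools (.rgw*, .users*) still get a sane minimum.
    return max(next_power_of_two(int(round(total_pgs * pool_weight))), 32)

# Example: 30 OSDs, replica size 3, volumes pool expected to hold 60% of data
print(pg_num_for_pool(30, 3, 0.60))  # 1024
```

Small utility pools such as .rgw.control would then land on the 32-PG floor instead of sharing the same sizing as the data-bearing pools.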
description: updated
Changed in fuel:
status: New → Invalid
tags: added: area-library