2016-01-13 13:07:52 |
Stepan Rogov |
bug |
|
|
added bug |
2016-01-13 13:09:10 |
Stepan Rogov |
description |
Fuel 7.0
After a deployment with the following parameters:
Ceph RBD for volumes (Cinder)
Ceph RBD for images (Glance)
Ceph RBD for ephemeral volumes (Nova)
Ceph RadosGW for objects (Swift API)
I have the following pools in Ceph:
root@node-5:~# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
89367G 89366G 982M 0
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
data 0 0 0 44683G 0
metadata 1 0 0 44683G 0
rbd 2 0 0 44683G 0
images 3 12976k 0 44683G 5
volumes 4 0 0 44683G 1
backups 5 0 0 44683G 0
.rgw.root 6 840 0 44683G 3
.rgw.control 7 0 0 44683G 8
.rgw 8 0 0 44683G 0
.rgw.gc 9 0 0 44683G 32
.users.uid 10 564 0 44683G 2
.users 11 42 0 44683G 3
compute 12 0 0 44683G 2
It seems that too much space is allocated for the following pools: .rgw*, .users*, data, metadata, rbd.
It also seems that we can remove the default Ceph pools (data, metadata).
I propose to somehow calculate optimal pool sizes (in the common case) during the deployment process. |
Fuel 7.0
After a deployment with the following parameters:
Ceph RBD for volumes (Cinder)
Ceph RBD for images (Glance)
Ceph RBD for ephemeral volumes (Nova)
Ceph RadosGW for objects (Swift API)
I have the following pools in Ceph:
root@node-5:~# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
89367G 89366G 982M 0
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
data 0 0 0 44683G 0
metadata 1 0 0 44683G 0
rbd 2 0 0 44683G 0
images 3 12976k 0 44683G 5
volumes 4 0 0 44683G 1
backups 5 0 0 44683G 0
.rgw.root 6 840 0 44683G 3
.rgw.control 7 0 0 44683G 8
.rgw 8 0 0 44683G 0
.rgw.gc 9 0 0 44683G 32
.users.uid 10 564 0 44683G 2
.users 11 42 0 44683G 3
compute 12 0 0 44683G 2
It seems that too much space is allocated for the following pools: .rgw*, .users*, data, metadata, rbd.
It also seems that we can remove the default Ceph pools (data, metadata).
I propose to somehow calculate optimal pool sizes (in the general case, for most users) during the deployment process. |
|
2016-01-13 13:09:34 |
Stepan Rogov |
description |
Fuel 7.0
After a deployment with the following parameters:
Ceph RBD for volumes (Cinder)
Ceph RBD for images (Glance)
Ceph RBD for ephemeral volumes (Nova)
Ceph RadosGW for objects (Swift API)
I have the following pools in Ceph:
root@node-5:~# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
89367G 89366G 982M 0
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
data 0 0 0 44683G 0
metadata 1 0 0 44683G 0
rbd 2 0 0 44683G 0
images 3 12976k 0 44683G 5
volumes 4 0 0 44683G 1
backups 5 0 0 44683G 0
.rgw.root 6 840 0 44683G 3
.rgw.control 7 0 0 44683G 8
.rgw 8 0 0 44683G 0
.rgw.gc 9 0 0 44683G 32
.users.uid 10 564 0 44683G 2
.users 11 42 0 44683G 3
compute 12 0 0 44683G 2
It seems that too much space is allocated for the following pools: .rgw*, .users*, data, metadata, rbd.
It also seems that we can remove the default Ceph pools (data, metadata).
I propose to somehow calculate optimal pool sizes (in the general case, for most users) during the deployment process. |
Fuel 7.0
After a deployment with the following parameters:
Ceph RBD for volumes (Cinder)
Ceph RBD for images (Glance)
Ceph RBD for ephemeral volumes (Nova)
Ceph RadosGW for objects (Swift API)
I have the following pools in Ceph:
root@node-5:~# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
89367G 89366G 982M 0
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
data 0 0 0 44683G 0
metadata 1 0 0 44683G 0
rbd 2 0 0 44683G 0
images 3 12976k 0 44683G 5
volumes 4 0 0 44683G 1
backups 5 0 0 44683G 0
.rgw.root 6 840 0 44683G 3
.rgw.control 7 0 0 44683G 8
.rgw 8 0 0 44683G 0
.rgw.gc 9 0 0 44683G 32
.users.uid 10 564 0 44683G 2
.users 11 42 0 44683G 3
compute 12 0 0 44683G 2
It seems that too much space is allocated for the following pools: .rgw*, .users*, data, metadata, rbd.
It also seems that we can remove the default Ceph pools (data, metadata).
I propose to somehow calculate optimal pool sizes (in the general case, for most users) during the deployment process. |
|
2016-01-13 13:10:50 |
Stepan Rogov |
description |
Fuel 7.0
After a deployment with the following parameters:
Ceph RBD for volumes (Cinder)
Ceph RBD for images (Glance)
Ceph RBD for ephemeral volumes (Nova)
Ceph RadosGW for objects (Swift API)
I have the following pools in Ceph:
root@node-5:~# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
89367G 89366G 982M 0
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
data 0 0 0 44683G 0
metadata 1 0 0 44683G 0
rbd 2 0 0 44683G 0
images 3 12976k 0 44683G 5
volumes 4 0 0 44683G 1
backups 5 0 0 44683G 0
.rgw.root 6 840 0 44683G 3
.rgw.control 7 0 0 44683G 8
.rgw 8 0 0 44683G 0
.rgw.gc 9 0 0 44683G 32
.users.uid 10 564 0 44683G 2
.users 11 42 0 44683G 3
compute 12 0 0 44683G 2
It seems that too much space is allocated for the following pools: .rgw*, .users*, data, metadata, rbd.
It also seems that we can remove the default Ceph pools (data, metadata).
I propose to somehow calculate optimal pool sizes (in the general case, for most users) during the deployment process. |
Fuel 7.0
After a deployment with the following parameters:
Ceph RBD for volumes (Cinder)
Ceph RBD for images (Glance)
Ceph RBD for ephemeral volumes (Nova)
Ceph RadosGW for objects (Swift API)
I have the following pools in Ceph:
root@node-5:~# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
89367G 89366G 982M 0
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
data 0 0 0 44683G 0
metadata 1 0 0 44683G 0
rbd 2 0 0 44683G 0
images 3 12976k 0 44683G 5
volumes 4 0 0 44683G 1
backups 5 0 0 44683G 0
.rgw.root 6 840 0 44683G 3
.rgw.control 7 0 0 44683G 8
.rgw 8 0 0 44683G 0
.rgw.gc 9 0 0 44683G 32
.users.uid 10 564 0 44683G 2
.users 11 42 0 44683G 3
compute 12 0 0 44683G 2
It seems that too much space is allocated for the following pools: .rgw*, .users*, data, metadata, rbd.
It also seems that we can remove the default Ceph pools (data, metadata).
I propose to somehow calculate optimal pool sizes (in the general case, for most users) during the deployment process and remove unused pools (if possible). |
|
2016-01-13 13:11:24 |
Stepan Rogov |
description |
Fuel 7.0
After a deployment with the following parameters:
Ceph RBD for volumes (Cinder)
Ceph RBD for images (Glance)
Ceph RBD for ephemeral volumes (Nova)
Ceph RadosGW for objects (Swift API)
I have the following pools in Ceph:
root@node-5:~# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
89367G 89366G 982M 0
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
data 0 0 0 44683G 0
metadata 1 0 0 44683G 0
rbd 2 0 0 44683G 0
images 3 12976k 0 44683G 5
volumes 4 0 0 44683G 1
backups 5 0 0 44683G 0
.rgw.root 6 840 0 44683G 3
.rgw.control 7 0 0 44683G 8
.rgw 8 0 0 44683G 0
.rgw.gc 9 0 0 44683G 32
.users.uid 10 564 0 44683G 2
.users 11 42 0 44683G 3
compute 12 0 0 44683G 2
It seems that too much space is allocated for the following pools: .rgw*, .users*, data, metadata, rbd.
It also seems that we can remove the default Ceph pools (data, metadata).
I propose to somehow calculate optimal pool sizes (in the general case, for most users) during the deployment process and remove unused pools (if possible). |
Fuel 7.0
After a deployment with the following parameters:
Ceph RBD for volumes (Cinder)
Ceph RBD for images (Glance)
Ceph RBD for ephemeral volumes (Nova)
Ceph RadosGW for objects (Swift API)
I have the following pools in Ceph:
root@node-5:~# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
89367G 89366G 982M 0
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
data 0 0 0 44683G 0
metadata 1 0 0 44683G 0
rbd 2 0 0 44683G 0
images 3 12976k 0 44683G 5
volumes 4 0 0 44683G 1
backups 5 0 0 44683G 0
.rgw.root 6 840 0 44683G 3
.rgw.control 7 0 0 44683G 8
.rgw 8 0 0 44683G 0
.rgw.gc 9 0 0 44683G 32
.users.uid 10 564 0 44683G 2
.users 11 42 0 44683G 3
compute 12 0 0 44683G 2
It seems that too much space is allocated for the following pools: .rgw*, .users*, data, metadata, rbd.
It also seems that we can remove the default Ceph pools (data, metadata).
I propose to somehow calculate optimal pool sizes during the deployment process (in the general case, for most users) and remove unused pools (if possible). |
|
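
Editor's note on the proposal above: "calculating optimal pool sizes at deployment time" in practice usually means choosing a pg_num for each pool in proportion to the share of data it is expected to hold. Below is a minimal, illustrative sketch of the standard Ceph placement-group heuristic (roughly 100 PGs per OSD, split across pools by expected data share and rounded to a power of two). This is not Fuel code; the function names (nearest_power_of_two, pg_num_for_pools) and the pool weights are assumptions chosen for the example, not Fuel defaults.

# Sketch of the Ceph pgcalc heuristic, assuming illustrative pool weights.
def nearest_power_of_two(n):
    """Round a positive integer to the nearest power of two (minimum 1)."""
    if n < 1:
        return 1
    lower = 1 << (n.bit_length() - 1)   # largest power of two <= n
    upper = lower << 1                   # smallest power of two > n
    return lower if (n - lower) < (upper - n) else upper

def pg_num_for_pools(num_osds, replica_count, pool_weights,
                     target_pgs_per_osd=100):
    """Return a {pool_name: pg_num} map for the given expected data shares."""
    total_pgs = num_osds * target_pgs_per_osd / float(replica_count)
    return {pool: nearest_power_of_two(int(total_pgs * share))
            for pool, share in pool_weights.items()}

if __name__ == "__main__":
    # Assumed data distribution for the pools from the report above:
    # most data in volumes/compute/images, a token share for the RGW pools.
    weights = {
        "volumes": 0.40,
        "compute": 0.30,
        "images":  0.15,
        "backups": 0.10,
        ".rgw":    0.05,
    }
    for pool, pgs in sorted(pg_num_for_pools(48, 3, weights).items()):
        print("%-10s pg_num=%d" % (pool, pgs))

As for removing the unused default pools: if CephFS is not in use, the data and metadata pools created by the Ceph releases of that era can normally be dropped with a command along the lines of ceph osd pool delete data data --yes-i-really-really-mean-it (repeated pool name and the confirmation flag are required); whether Fuel should do this automatically is exactly the open question in this report.
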
2016-01-13 14:10:26 |
Stepan Rogov |
fuel: status |
New |
Invalid |
|
2016-01-18 08:56:52 |
Maciej Relewicz |
tags |
ceph puppet |
area-library ceph puppet |
|