controller-0:/opt/platform$ sudo grep "chunk_size" ./* --color -r -n
./helm/19.09/platform-integ-apps/1.0-8/kube-system-rbd-provisioner.yaml:14: chunk_size: 64
./helm/19.09/stx-openstack/1.0-19/openstack-nova.yaml:11: - rbd_chunk_size: 512
./helm/19.09/stx-openstack/1.0-19/openstack-cinder.yaml:24: chunk_size: 8
./helm/19.09/stx-openstack/1.0-19/openstack-cinder.yaml:29: chunk_size: 8
Currently, chunk_size for all Ceph pools is hardcoded:
stx/config/sysinv/sysinv/sysinv/sysinv/helm/rbd_provisioner.py: "chunk_size": 64,
stx/config/sysinv/sysinv/sysinv/sysinv/helm/nova.py: 'rbd_chunk_size': constants.CEPH_POOL_EPHEMERAL_PG_NUM
stx/config/sysinv/sysinv/sysinv/sysinv/helm/cinder.py: 'chunk_size': constants.CEPH_POOL_VOLUMES_CHUNK_SIZE,
According to the placement-group guidance in the Ceph documentation (https://docs.ceph.com/docs/master/rados/operations/placement-groups/#placement-groups-tradeoffs), this chunk size should be calculated from the application usage, the number of OSDs, and the replication size, rather than hardcoded.
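As a rough illustration of what such a calculation could look like (a sketch based on the Ceph docs' rule of thumb of targeting about 100 PGs per OSD, divided by the replication size and spread across the pools, rounded up to a power of two; the function name and parameters here are hypothetical, not part of sysinv):

```python
def suggested_pg_count(num_osds, num_pools, replica_size, target_pgs_per_osd=100):
    """Estimate a per-pool PG count following the Ceph placement-group
    guidance: total PGs ~= (OSDs * target PGs per OSD) / replica size,
    split evenly across pools and rounded up to the nearest power of two."""
    raw = (num_osds * target_pgs_per_osd) / float(replica_size * num_pools)
    # Ceph recommends a power-of-two pg_num for even data distribution.
    power = 1
    while power < raw:
        power *= 2
    return power
```

For example, a small cluster with 9 OSDs, 3 pools, and replica size 3 yields a raw value of 100, which rounds up to 128 PGs per pool; the sysinv helm plugins could derive chunk_size this way instead of using fixed constants.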