Comment 10 for bug 1844164

Revision history for this message
chen haochuan (martin1982) wrote :

controller-0:/opt/platform$ sudo grep "chunk_size" ./* --color -r -n
./helm/19.09/platform-integ-apps/1.0-8/kube-system-rbd-provisioner.yaml:14: chunk_size: 64
./helm/19.09/stx-openstack/1.0-19/openstack-nova.yaml:11: - rbd_chunk_size: 512
./helm/19.09/stx-openstack/1.0-19/openstack-cinder.yaml:24: chunk_size: 8
./helm/19.09/stx-openstack/1.0-19/openstack-cinder.yaml:29: chunk_size: 8

Currently, chunk_size for all Ceph pools is hardcoded:

stx/config/sysinv/sysinv/sysinv/sysinv/helm/rbd_provisioner.py
                "chunk_size": 64,

stx/config/sysinv/sysinv/sysinv/sysinv/helm/nova.py
            'rbd_chunk_size': constants.CEPH_POOL_EPHEMERAL_PG_NUM

stx/config/sysinv/sysinv/sysinv/sysinv/helm/cinder.py
                'chunk_size': constants.CEPH_POOL_VOLUMES_CHUNK_SIZE,

According to the guidance in the Ceph documentation:
https://docs.ceph.com/docs/master/rados/operations/placement-groups/#placement-groups-tradeoffs

this chunk size should be calculated from application usage, the number of OSDs, and the replication factor, rather than hardcoded.
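
As a rough illustration, the linked Ceph guide gives a rule of thumb of targeting about 100 PGs per OSD, divided by the pool's replica count and rounded to a power of two. A minimal sketch of that calculation (the function names here are hypothetical, not from sysinv):

```python
def round_up_power_of_two(n):
    # Ceph recommends PG counts that are powers of two.
    p = 1
    while p < n:
        p *= 2
    return p


def calc_pg_num(num_osds, replica_size, target_pgs_per_osd=100):
    """Rule-of-thumb PG count from the Ceph placement-group guide:
    (OSDs * target PGs per OSD) / replica size, rounded up to a
    power of two. The 100-PGs-per-OSD target is the value the Ceph
    docs commonly suggest; it is tunable per deployment.
    """
    raw = (num_osds * target_pgs_per_osd) // replica_size
    return round_up_power_of_two(max(raw, 1))


# Example: 9 OSDs, replica 3 -> 300 raw -> 512 PGs
print(calc_pg_num(9, 3))
```

A per-pool weighting (e.g. giving cinder volumes a larger share of PGs than the ephemeral pool) would then replace the hardcoded chunk_size values in rbd_provisioner.py, nova.py, and cinder.py.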