Comment 2 for bug 1971154

Alexander Binzxxxxxx (devil000000) wrote:

Well, the size used is of course a tradeoff between bandwidth and IOPS, but since you can change it per pool as well as per RBD image (in Ceph, not in OpenStack), you can configure it to your needs.
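
For example, the object size can be set when an image is created and inspected afterwards (a sketch with placeholder pool/image names, not taken from this bug):
rbd create --size 10G --object-size 64K volumes/test-image
rbd info volumes/test-image
The "order" line in the rbd info output shows the object size actually in use (order 16 = 64 KiB objects); the default for new images comes from the rbd_default_order client option.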

Ceph performance data is widely available, but here are some simplified and brief results of my own.
Command used:
rados bench -p volumes 5 write -b 4096 -t 2048 -O $size_in_bytes
Results on the cluster here (the first column is the object size passed via -O):
4k => avg iops: 32k bandwidth: 125MB/s
8k => avg iops: 34k bandwidth: 134MB/s
16k => avg iops: 38k bandwidth: 148MB/s
32k => avg iops: 40k bandwidth: 157MB/s
64k => avg iops: 41k bandwidth: 160MB/s
128k => avg iops: 39k bandwidth: 154MB/s
256k => avg iops: 36k bandwidth: 143MB/s
512k => avg iops: 32k bandwidth: 124MB/s
1M => avg iops: 25k bandwidth: 100MB/s
2M => avg iops: 18k bandwidth: 71MB/s
4M => avg iops: 14k bandwidth: 53MB/s
8M => avg iops: 10k bandwidth: 41MB/s
Note that a lot of caching is involved here, dampening the direct influence on disk write speeds, so take my numbers with a grain (or gram) of salt. There may also be other factors at play in this test.
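
For reference, a sweep over the object sizes above could be scripted roughly like this (a sketch based on the command shown, not the exact script that was used):
for size in 4096 8192 16384 32768 65536 131072 262144 524288 1048576 2097152 4194304 8388608; do
    rados bench -p volumes 5 write -b 4096 -t 2048 -O $size
done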

Regarding question 2: as far as I know the client can override the sizes, and transfers should be possible either way, so a Ceph pool may contain differently chunked RBD images anyway. Even if the RBD image needed rechunking somewhere during a pool transfer, that may still be the better solution.
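
For what it's worth, such a rechunking copy can be done with export/import, e.g. (a sketch with placeholder pool/image names):
rbd export volumes/myimage - | rbd import --object-size 64K - newpool/myimage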