[RBD] rbd_store_chunk_size in megabytes is an unwanted limitation
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Cinder | New | Wishlist | Eric Harney | |
Bug Description
In the Cinder Ceph RBD backend configuration one can set rbd_store_chunk_size, but only as a whole number of megabytes.
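For context, here is a minimal sketch of how the driver turns rbd_store_chunk_size into an RBD object order (simplified from the Cinder RBD driver code; exact names may differ by release). Because the option is an integer count of MiB converted to a power-of-two exponent, nothing smaller than 1 MiB (order 20) can be expressed:

```python
import math

MiB = 1024 * 1024

def chunk_size_to_order(rbd_store_chunk_size_mb):
    # Simplified from the Cinder RBD driver: the configured value is
    # an integer number of MiB, converted to a power-of-two "order".
    chunk_size = rbd_store_chunk_size_mb * MiB
    return int(math.log(chunk_size, 2))

# The default of 4 gives order 22 (2**22 bytes = 4 MiB objects);
# the minimum of 1 gives order 20 (1 MiB). Nothing smaller can be
# expressed, which is the limitation this report is about.
print(chunk_size_to_order(4))   # 22
print(chunk_size_to_order(1))   # 20
```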
For a setup where bandwidth is what you need or want, this is perfectly fine. For my usage IOPS is king, so I would like the object size to be lower than 1 MB. (4 KB and 8 KB are very similar IOPS-wise, but 4 KB loses too much bandwidth; 32 KB or 64 KB would be optimal for my use case, and 128 KB or 256 KB would already be a big improvement.)
If there is no major reason why the configuration is so limiting, I would like the option to get the full potential out of my Ceph cluster. So I suggest two small changes:
1) use an order-based config value matching the Ceph way of configuration, or at least allow float values for rbd_store_chunk_size (see the sketch after this list)
2) why force the hint to be set at all, rather than just falling back to the Ceph-side pool configuration via rbd_default_order?
The same applies to Glance.
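A hedged sketch of what the two suggestions could look like, using the rbd Python bindings. The option name object_order is hypothetical, and the fallback behavior relies on librbd using the pool/cluster-side rbd_default_order when no explicit order is given:

```python
import rbd  # Ceph's python-rbd bindings


def create_volume(ioctx, name, size_bytes, object_order=None):
    """Create an RBD image, optionally with an explicit object order.

    Suggestion 1: accept the order directly (Ceph's native unit), so
    any object size from e.g. 4 KiB (order 12) upward can be
    expressed, instead of whole megabytes only.

    Suggestion 2: when no order is configured, pass order=None and
    librbd falls back to the Ceph-side rbd_default_order setting.
    """
    rbd.RBD().create(ioctx, name, size_bytes, order=object_order)

# Hypothetical config values and the object sizes they would map to:
#   order 12 -> 2**12 bytes =   4 KiB
#   order 16 -> 2**16 bytes =  64 KiB  (in the range requested above)
#   order 22 -> 2**22 bytes =   4 MiB  (current default)
```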
tags: added: xena
Changed in cinder:
  importance: Undecided → Wishlist
tags: added: drivers rbd
summary: - rbd_store_chunk_size in megabytes is unwanted limitation
         + [RBD] rbd_store_chunk_size in megabytes is an unwanted limitation
Changed in cinder:
  assignee: nobody → Eric Harney (eharney)
Alexander Binzxxxxxx,
We do not have much concrete performance data to analyze, so more information about the actual problem would be helpful. Please remember to update this bug report.
Improving this is a good idea, but it needs careful consideration. This area is worth working on because there are still some weak spots we need to address with RBD, including sector sizes (512 vs 4k). However, adding a new configuration option that allows arbitrary values to be set is not necessarily the right solution.
Regarding question 2: specifying the chunk size prevents situations where images cannot be moved between pools during migration (e.g. cinder<->glance). Still, it is a good question that should be investigated further.
This bug was discussed in the bug session this week: https://meetings.opendev.org/meetings/cinder_bs/2022/cinder_bs.2022-05-04-15.01.log.html#l-31