ceph-osd configurations are not optimized for all flash deployment
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ceph OSD Charm | Triaged | Wishlist | Unassigned |
Bug Description
As far as I understand, the ceph-osd charm has an "autotune" config option that applies some optimizations for HDDs. However, it is not yet optimized for all-flash deployments.
[lib/ceph/utils.py]
```python
def tune_dev(
    """Try to make some intelligent decisions with HDD tuning. Future work will
    include optimizing SSDs.
```
According to Intel's tuning info:
http://
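The docstring above says SSD optimization is future work. Purely as an illustration (this helper is hypothetical and not part of the charm), flash tuning could follow the same pattern as `tune_dev()`: inspect the device's sysfs attributes and pick settings suited to non-rotational media, e.g. a low-overhead I/O scheduler. The sysfs layout below is the real kernel one, but the base path is parameterized so the sketch can be exercised without root:

```python
import os


def tune_flash_dev(block_dev, sysfs_base="/sys/block"):
    """Hypothetical sketch: set the I/O scheduler to 'none' for a
    non-rotational (flash) device, mirroring what tune_dev() does for HDDs.

    Returns True if the device was tuned, False if it is a spinning disk.
    """
    queue = os.path.join(sysfs_base, block_dev, "queue")
    # /sys/block/<dev>/queue/rotational is "1" for HDDs, "0" for flash.
    with open(os.path.join(queue, "rotational")) as f:
        if f.read().strip() == "1":
            return False  # spinning disk: leave it to the HDD tuning path
    # For flash, the 'none' scheduler avoids needless reordering overhead.
    with open(os.path.join(queue, "scheduler"), "w") as f:
        f.write("none")
    return True
```

This is only a sketch of the direction the bug asks for; a real implementation would live alongside `tune_dev()` and handle errors, multiqueue kernels, and persistence across reboots.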
I saw a huge improvement in an IOPS-optimized scenario. In short, 4K writes from multiple clients achieved only 40K total IOPS before, but after applying the configuration below, the same test achieves 140K total IOPS.
It would be nice if the ceph-osd charm were capable of applying these tunings for all-flash deployments.
====
My config-flags for the ceph-osd charm, converted from:
http://
```yaml
config-flags: >
  {
    "global": {
      "rbd readahead disable after bytes": 0,
      "rbd readahead max bytes": 4194304,
      "osd pg bits": 8,
      "osd pgp bits": 8,
      "perf": true,
      "rbd cache": false
    },
    "osd": {
    }
  }
```
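Since config-flags takes a free-form JSON blob, a typo only surfaces once the charm tries to apply it. A quick local sanity check with the standard library's `json` module (the payload below mirrors the block above) catches malformed input first:

```python
import json

# The config-flags payload from the bug report, as a plain JSON string.
FLAGS = """
{
  "global": {
    "rbd readahead disable after bytes": 0,
    "rbd readahead max bytes": 4194304,
    "osd pg bits": 8,
    "osd pgp bits": 8,
    "perf": true,
    "rbd cache": false
  },
  "osd": {}
}
"""

# json.loads raises ValueError on malformed input, so this doubles as a
# syntax check before handing the value to the charm.
flags = json.loads(FLAGS)
assert flags["global"]["rbd readahead max bytes"] == 4 * 1024 * 1024
```

Once validated, the value can be set with the standard `juju config ceph-osd config-flags='...'` invocation.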
description: updated
tags: added: cpe-onsite
description: updated
tags: added: 4010
Feature request, triaging as wishlist.