This could be done on all machines that house disks by setting the system-wide scheduler to cfq; changing it per-disk persistently looks like a bit of a hack, requiring a service that writes `cfq` to each disk's scheduler file. On the other hand, according to this somewhat older benchmark (http://www.ilsistemista.net/index.php/linux-a-unix/38-linux-i-o-schedulers-benchmarked-anticipatory-vs-cfq-vs-deadline-vs-noop.html), cfq vs. deadline isn't a huge performance difference unless direct I/O is in use, in which case deadline performs a bit better.
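For what it's worth, a udev rule might avoid the write-a-service hack for per-disk persistence; something along these lines (file name and device match are just illustrative, not tested on the ceph units):

```
# /etc/udev/rules.d/60-io-scheduler.rules  (hypothetical path)
# Set cfq as the scheduler whenever a SCSI/SATA block device appears
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="cfq"
```

After `udevadm control --reload` and `udevadm trigger`, the kernel should report the active scheduler in brackets in /sys/block/sdX/queue/scheduler.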
@elmo how would you feel about the ceph[-osd] units having the default scheduler be cfq instead of deadline?