Comment 5 for bug 1340448

Dimitri John Ledkov (xnox) wrote :

This is not the only place where a percentage-only metric is inappropriate across all sizes; another is the default swap size, which is still a simple multiple of RAM, despite extreme cases such as RAM >> HDD (e.g. high-memory VMs) and very fast storage (e.g. NVMe drives or RAID arrays accelerated with an SSD cache layer).

That, too, can eat up a lot of disk space, and at times expensive disk space at that.

Whilst we all agree that performance degrades when a filesystem is extremely full, I don't buy that 51GB of free space implies degraded performance on, say, a 1TB RAID1 filesystem with typical file sizes / extents. I'll poke cking to check whether he has performance-degradation results for various filesystems w.r.t. % filled, across a range of maximum sizes.

5% is sensible and appropriate for a wide distribution of filesystem sizes, but tiny disks may want a larger reserve and, likewise, large disks may want a smaller one. Thus the calculated reserve shouldn't be a linear 5%, but some non-linear distribution or scaled value.
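
For illustration only, here is a minimal sketch (in Python) of the kind of scaled reserve I mean: the percentage stays at 5% up to a reference size and then decays with the square root of the filesystem size, down to a floor. The curve, the 20 GiB reference point and the 1% floor are arbitrary assumptions picked just to show the shape, not proposed values.

    # Hypothetical sketch only: a reserve percentage that decays as the
    # filesystem grows.  The sqrt curve, the 20 GiB reference size and the
    # 1%..5% bounds are arbitrary assumptions chosen to show the shape.
    import math

    def adaptive_reserve_percent(fs_size_gib, base_percent=5.0,
                                 reference_gib=20.0, min_percent=1.0):
        """5% at or below the reference size, falling off as 1/sqrt(size)."""
        if fs_size_gib <= reference_gib:
            return base_percent
        scaled = base_percent * math.sqrt(reference_gib / fs_size_gib)
        return max(min_percent, scaled)

    for size in (8, 20, 100, 1000, 4000):  # GiB
        pct = adaptive_reserve_percent(size)
        print(f"{size:>5} GiB -> reserve {pct:.2f}% ({size * pct / 100:.1f} GiB)")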

One reasonable algorithm for scaling disk-space limits (in the context of Nagios warning/critical monitoring levels) that I have seen is implemented in the checkmk project: https://mathias-kettner.de/checkmk_filesystems.html#H1:The df magic number . However, the margins there grow too quickly at small disk sizes, so further tweaking and/or a different distribution would be needed.
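
From memory, the checkmk scheme works roughly like this: the filesystem size is compared against a chosen "normal" size, and the flat percentage level is bent towards or away from 100% depending on how far the size is from that norm, with an exponent between 0 and 1 controlling the strength. The sketch below is an approximation of that idea rather than the project's actual code; the names (magic, normsize_gib, minimum_level) and defaults are mine.

    # Rough, from-memory approximation of the checkmk "df magic number"
    # scaling; names and defaults are assumptions, not checkmk's actual code.
    def scaled_level(level_percent, fs_size_gib, magic=0.8,
                     normsize_gib=20.0, minimum_level=50.0):
        """Bend a 'used space' level (in %) depending on filesystem size.

        magic=1.0 means no adjustment; smaller values bend harder."""
        relative = fs_size_gib / normsize_gib
        felt = relative ** magic          # "felt" size grows slower than real size
        scale = felt / relative
        adjusted = 100.0 - (100.0 - level_percent) * scale
        return max(minimum_level, adjusted)

    for size in (5, 20, 100, 1000):       # GiB
        print(f"{size:>5} GiB: a 90% level becomes {scaled_level(90.0, size):.1f}%")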

Theodore, would you be open to having a tunable option to make the reserve level adaptive rather than a static %? (The default would remain static.)
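
For completeness: until such a tunable exists, something similar can be approximated from userspace by computing whatever scaled reserve one wants and applying it as an absolute block count via tune2fs -r, instead of a percentage via tune2fs -m. A rough sketch, where the device name and the 4 KiB block size are purely illustrative assumptions:

    # Sketch: turn a chosen reserve percentage into a `tune2fs -r` invocation.
    # The device name and the 4096-byte block size are illustrative assumptions;
    # read the real block size from `dumpe2fs -h` before doing this for real.
    def tune2fs_command(device, fs_size_gib, reserve_percent, block_size=4096):
        reserve_bytes = fs_size_gib * 1024**3 * reserve_percent / 100.0
        reserved_blocks = int(reserve_bytes // block_size)
        return f"tune2fs -r {reserved_blocks} {device}"

    # e.g. a 1 TiB filesystem where the scaled reserve came out at 1%:
    print(tune2fs_command("/dev/md0", 1024, 1.0))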