As a follow-up to lengthy discussions across multiple teams: the decision to upgrade the default memory limit is being put on hold indefinitely. Testing was done following the instructions provided here [1].
Unfortunately, the Ubuntu SRU process does not allow behavior changes once a stable release is out. In this case, the change would not only reduce the memory available to the host OS, but would also raise regression potential: it could lead to OOMs during or after a system (re)boot, and to un-bootable systems if the reservation is wrong.
- What we've considered:
There is an auto option, which "works", but it is not smart enough to reliably get this right.
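For context, the alternative to auto is a static reservation on the kernel command line. A minimal sketch of what a manual override might look like via a GRUB defaults drop-in; the file path and the range/size values here are illustrative assumptions, not Ubuntu's shipped defaults:

```shell
# /etc/default/grub.d/kdump-tools.cfg  (hypothetical override file)
# crashkernel=<range>:<size> reserves memory for the crash kernel only on
# systems whose total RAM falls in <range>. Here: >= 1G of RAM -> reserve 192M.
GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT crashkernel=1G-:192M"

# Apply and reboot for the reservation to take effect:
#   sudo update-grub && sudo reboot
```

The range syntax is what makes a single static value awkward: one size rarely suits both small VMs and large hosts, which is what motivates the dynamic-calculation idea below.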
Dynamically calculating the value appears to be a promising solution, but it would be subject to its own issues: the memory footprint changes as allocation dynamics in the network or storage drivers change, which would require repeatedly updating the math used to calculate the reservation.
However, these values are strongly tied to the total physical memory and to the memory used by the kernel (which varies with the system's devices) and by userspace to reach kdump-tools.target and run makedumpfile to completion.
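The dynamic-calculation idea above can be sketched as a small shell function that maps total RAM to a suggested reservation. The thresholds and sizes are illustrative assumptions only, not values kdump-tools actually uses; and as noted above, any such table would need repeated re-tuning as kernel and driver footprints change:

```shell
#!/bin/sh
# Hypothetical sketch: pick a crashkernel reservation from total RAM (MiB).
# The cut-offs and sizes below are assumptions for illustration.
suggest_crashkernel() {
    total_mib=$1
    if [ "$total_mib" -lt 2048 ]; then
        echo "128M"            # small VMs
    elif [ "$total_mib" -lt 65536 ]; then
        echo "256M"            # typical servers
    else
        echo "512M"            # large-memory hosts
    fi
}

# On a live system, total RAM could be derived from /proc/meminfo (kB -> MiB):
#   total_mib=$(( $(awk '/^MemTotal:/ {print $2}' /proc/meminfo) / 1024 ))
suggest_crashkernel 4096    # prints 256M
```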
Thank you for the summaries, @setuid and @mfo!
Cheers,
Heather Lemon | hlemon | hypothetical-lemon
upstream debian values - https://salsa.debian.org/debian/kdump-tools/-/blob/master/debian/kdump-tools.grub.default
[1] how to crash dump - https://ubuntu.com/server/docs/kernel-crash-dump
previous bump request - https://git.launchpad.net/ubuntu/+source/makedumpfile/commit/?h=applied/ubuntu/focal-updates&id=62949fcafa23dbc71003271d889afbdb441fcb8d