Other MOTD plugins use caching, for example /usr/lib/update-notifier/update-motd-reboot-required does:
if [ -f /var/run/reboot-required ]; then
    cat /var/run/reboot-required
fi
Actually /usr/lib/update-notifier/update-motd-updates-available already uses "caching" like the others.
It stores its result in:
/var/lib/update-notifier/updates-available
This is what the call to "/usr/lib/update-notifier/apt-check --human-readable" writes into.
But it executes slow commands like find and apt-check synchronously.
Yes, a cron job might be nice, but it is rather "invasive", changing the general behaviour maybe more than required.
Why not just execute the "slow parts" like find and apt-check asynchronously?
One can still evolve that approach later on and move (instead or additionally) the async part into a cron job if required.
It comes at "the risk" of showing slightly outdated info on login.
But since available updates don't change every second anyway, that is probably better than waiting on each login.
Especially with all the examples mentioned, like scp tab completion, slow networks, and so on.
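A minimal sketch of the async idea (hypothetical paths, and sleep standing in for the slow apt-check/find calls): print the last cached result immediately, then regenerate the cache in the background, so the login never waits on the slow part.

```shell
# Stand-in cache path for illustration; the real plugin would use
# /var/lib/update-notifier/updates-available.
CACHE=./updates-available.cache

# Show the last cached result right away, if one exists
# (possibly one login out of date).
[ -r "$CACHE" ] && cat "$CACHE"

# Regenerate the cache asynchronously; the login prompt
# does not wait for this to finish. "sleep 1" stands in
# for the slow apt-check / find invocations.
( sleep 1; echo "3 packages can be updated." > "$CACHE" ) &
```

The first login after boot would show nothing (or stale data), and every later login shows the result of the previous refresh.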
The atomicity issue with updating the cache file already exists in today's code with regard to concurrent logins. But we can improve that as well by writing to a temporary file and using mv.
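The temporary-file-plus-mv idea could look like this (stand-in path and contents for illustration): a rename within one filesystem is atomic, so a concurrent login always reads either the old or the new cache, never a half-written one.

```shell
# Stand-in for /var/lib/update-notifier/updates-available.
CACHE=./updates-available

# Create the temp file next to the target so mv stays within
# one filesystem and is therefore an atomic rename.
TMP=$(mktemp "${CACHE}.XXXXXX")

# Stand-in for the apt-check --human-readable output.
echo "3 packages can be updated." > "$TMP"

# Atomically replace the cache; readers see old or new, never partial.
mv "$TMP" "$CACHE"
```

mktemp also gives each writer its own file, so two concurrent refreshes cannot corrupt each other either; the last mv simply wins.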
The following are suggestions to fix it in trusty, vivid, wily and upstream.