@xnox
"The journald daemon has limits set for logs, meaning they will be rotated and discarded and should not cause out of disk-space errors."
What are they, exactly? AFAICT journald only limits the number of journal files, not how large the journal can grow overall.
I'm also wondering whether writing the logs twice could cause other regressions, for example on a system where disk throughput is already high and many things are being written to the logs at once. Thoughts?
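For concreteness, these are the size-related knobs documented in journald.conf(5) that would bound disk usage if set explicitly; the values below are illustrative, not a recommendation or our current config:

```ini
# /etc/systemd/journald.conf (illustrative values only)
[Journal]
SystemMaxUse=500M       # cap on total disk space used by persistent journals
SystemKeepFree=1G       # keep at least this much space free on the filesystem
SystemMaxFileSize=50M   # rotate an individual journal file at this size
SystemMaxFiles=10       # cap on the number of archived journal files kept
```

If these are left unset, journald falls back to percentage-of-filesystem defaults rather than fixed byte limits, which is part of what makes the effective cap unclear.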