On 22 February 2018 at 20:20, Bryan Quigley <email address hidden> wrote:
> @xnox
> "The journald daemon has limits set for logs, meaning they will be rotated and discarded and should not cause out of disk-space errors."
>
> What are they? AFAICT it only has limits on the number of files, but
> not how big they can overall become.
>
>
> I'm also thinking that the duplicate writing of logs could cause other regressions, one example being where high disk throughput is ongoing and many things being written to the logs. Thoughts?
>
The performance impact on disk throughput should not be significant:
journald still throttles and caches log messages before flushing them
to disk, and still forwards them to rsyslog as it did before. The
exact impact depends on the workload, and there is also a reduction in
runtime memory use, which helps throughput by leaving more room for
I/O cache buffers.
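
As for the limits themselves, journald's disk usage is governed by the
size caps in /etc/systemd/journald.conf (see journald.conf(5)); by
default it uses a percentage of the filesystem rather than a file
count. A sketch with illustrative values (these are examples, not the
shipped defaults):

```ini
# /etc/systemd/journald.conf -- illustrative values, not the shipped defaults
[Journal]
SystemMaxUse=500M      # cap total disk space used by persistent journals
SystemKeepFree=1G      # always leave at least this much free on the filesystem
SystemMaxFileSize=50M  # rotate individual journal files at this size
```

Current usage can be checked with `journalctl --disk-usage`, and old
journals trimmed with e.g. `journalctl --vacuum-size=200M`.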
--
Regards,
Dimitri.