Comment 489 for bug 620074

funtoos (funtoos-linux-kernel-bugs) wrote:

> I would think the easiest and most reliable solution to this problem would be
> for the kernel to prefer fulfilling page-in requests ahead of dirtying blocks.
> If there are any requests to read pages in from disk to satisfy page faults,
> those requests should be fulfilled and a process's request to dirty a new page
> should be blocked. In other words, as dirty blocks are flushed to disk, thus
> freeing up RAM, the process performing the huge write shouldn't be allowed to
> dirty another block (thus consuming that freed RAM) if there are page-ins
> waiting to be fulfilled.

Matt: Wouldn't setting dirty_bytes to a low value make sure that a process never dirties more than a fixed number of pages, and hence never gets to consume more RAM until its existing dirty pages are flushed? Or maybe that's not how dirty_*bytes is designed to work. Maybe (I am guessing here) it just controls when the flushing of dirty pages begins, and the application can still continue to dirty more pages in the meantime. But if dirty_bytes controls when the process itself has to flush its dirty buffers, then it would be busy flushing and waiting on IO to complete and couldn't be dirtying more memory, right? So it does look like setting dirty_bytes to a low value like 4096 would produce an extreme case where the process's writes are almost completely synchronous and the page cache is not pounded at all.
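
For reference, here is a quick way to see where those knobs currently stand and how much dirty memory is outstanding (just a sketch of mine; the /proc paths are the standard ones, the script itself is only illustrative):

#!/usr/bin/env python3
# Print the writeback knobs and the current amount of dirty page-cache memory.
# dirty_background_* is where the flusher threads start writing back
# asynchronously; dirty_* is where a process doing write() is itself throttled.

def read_sysctl(name):
    with open("/proc/sys/vm/" + name) as f:
        return f.read().strip()

def dirty_kb():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("Dirty:"):
                return int(line.split()[1])   # reported in kB
    return 0

for knob in ("dirty_background_ratio", "dirty_background_bytes",
             "dirty_ratio", "dirty_bytes"):
    print(knob, "=", read_sysctl(knob))
print("Dirty:", dirty_kb(), "kB")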

Can someone try this extreme test? Set dirty_bytes to 4096 and rerun your scenario. The sequential bandwidth seen by the disk stresser will go down the drain, but your system should survive.
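
Something along these lines would do it (untested sketch, needs root; the test file name, total size, and chunk size are arbitrary choices of mine, and the kernel may reject or ignore dirty_bytes values below two pages, hence the read-back):

#!/usr/bin/env python3
# Sketch of the extreme test: clamp vm.dirty_bytes, then run a sequential
# writer and report the throughput it sees. Run as root.
import time

DIRTY_BYTES = "/proc/sys/vm/dirty_bytes"

def set_dirty_bytes(value):
    with open(DIRTY_BYTES, "w") as f:
        f.write(str(value))
    with open(DIRTY_BYTES) as f:
        return int(f.read())          # what actually took effect

def sequential_writer(path, total_mib=512, chunk=1 << 20):
    buf = b"\0" * chunk
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mib):
            f.write(buf)              # each write dirties page cache
        f.flush()
    return total_mib / (time.time() - start)   # MiB/s seen by the writer

if __name__ == "__main__":
    try:
        effective = set_dirty_bytes(4096)
    except OSError:
        effective = set_dirty_bytes(8192)      # fall back to two 4k pages
    print("dirty_bytes is now", effective)
    mbps = sequential_writer("/tmp/stress.bin") # arbitrary test file
    print("writer saw %.1f MiB/s" % mbps)

While that runs, watch whether interactive processes still get their page-ins serviced; that is the real point of the test, not the (deliberately terrible) write bandwidth.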