Comment 433 for bug 500069

awebers (awebers-linux-kernel-bugs) wrote:

Hi:

A) If you have multiple hard drives:
- they are not all equally affected
- if you copy a file (e.g. 7 GB) from drive A to drive B, a job running on drive C does not slow down, except perhaps if a swap file is involved.

A job, in my case, is a VMware virtual machine.
I was spreading the machines over different hard drives to reduce the trouble.

B) Isn't this slowdown a deliberate action of the system?

About /proc/sys/vm/dirty_ratio:
> Note that all processes are blocked for writes when this happens
(see the original text quoted below)
This is what slows everything down.

IMHO, it should work like this: if "dirty_ratio" is reached, slow down
the job that is creating so much "dirt" and leave the other ones alone.

Excerpt from http://www.westnet.com/~gsmith/content/linux-pdflush.htm

8< -------------------

Process page writes
There is another parameter involved though that can spill over into management of user processes:

/proc/sys/vm/dirty_ratio (default 40): Maximum percentage of total memory that can be filled with dirty pages before processes are forced to write dirty buffers themselves during their time slice instead of being allowed to do more writes.

Note that all processes are blocked for writes when this happens, not just the one that filled the write buffers. This can cause what is perceived as an unfair behavior where one "write-hog" process can block all I/O on the system. The classic way to trigger this behavior is to execute a script that does "dd if=/dev/zero of=hog" and watch what happens. See Kernel Korner: I/O Schedulers for examples showing this behavior.

8< -------------------

Reference:
http://www.westnet.com/~gsmith/content/linux-pdflush.htm
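
A partial workaround (not a fix) would be to lower the dirty-page thresholds so that writeback starts earlier and less dirty data piles up before everyone gets blocked. A rough sketch only; the values are just examples and should be picked to suit your RAM size:

  # check the current settings
  cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio
  # try smaller values (as root), e.g.:
  sysctl -w vm.dirty_background_ratio=5
  sysctl -w vm.dirty_ratio=10
  # watch the amount of dirty memory while a big copy is running
  watch grep -i dirty /proc/meminfo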

Does someone have an idea how to slow down the I/O-heavy job (automatically)?
If the throughput of dd, rsync or "whatever" were reduced the moment
a trigger value is reached, the problem would affect only dd, rsync, ...
and not the rest of the system.
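
Until something automatic exists, the offending job can at least be throttled by hand. Again only a sketch; the paths and limits here are made up:

  # run the copy with idle I/O priority (honoured by the CFQ scheduler)
  ionice -c3 cp /data/bigfile /mnt/other/
  # cap rsync at roughly 20 MB/s (--bwlimit is in KB/s)
  rsync --bwlimit=20000 /data/bigfile /mnt/other/
  # let dd bypass the page cache so it cannot fill it with dirty pages
  dd if=/data/bigfile of=/mnt/other/bigfile bs=1M oflag=direct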