The strange thing about any high-throughput I/O is that *every* byte of memory is used up until a certain limit is reached. That memory use will even swap other things out.
Looking especially at stress_noswap_nohang.vmstat, the behavior matches this loop:
1. Place data to be written into memory
2. Write some data to the disk
3. goto 1 if not all allowed memory is used.
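The loop above can be sketched as a toy producer/consumer simulation. All the rates and the limit here are made-up assumptions for illustration, not measured values; the point is only that when data is produced faster than the disk drains it, buffered memory grows until it sits pinned at the limit.

```python
# Toy model of the loop above: data enters memory faster than the disk
# can write it out, so the buffered ("dirty") amount grows to a ceiling.
PRODUCE_MB_PER_S = 500   # assumed rate at which "stress -d 1" dirties memory
WRITE_MB_PER_S = 100     # assumed hard-disk write throughput
DIRTY_LIMIT_MB = 4096    # assumed kernel ceiling on dirty memory

def simulate(seconds):
    dirty = 0
    for _ in range(seconds):
        dirty += PRODUCE_MB_PER_S            # 1. place data into memory
        dirty -= min(dirty, WRITE_MB_PER_S)  # 2. write some data to disk
        if dirty > DIRTY_LIMIT_MB:           # 3. stop filling at the limit
            dirty = DIRTY_LIMIT_MB
    return dirty

print(simulate(60))  # -> 4096: the buffer ends up stuck at the limit
```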
Interestingly, "stress -d 1" places data into memory much faster than a normal hard disk can write it out, so memory fills up and the limit is eventually reached.
So for me, the system only hangs when "stress -d 1" writes compete with swap-out, which is itself caused by "stress -d 1" filling the memory.
So the big question: why does the kernel allow large writes to fill up memory, and even swap things out, just to buffer data that is waiting to be written?
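For what it's worth, the "certain limit" on how much to-be-written data the kernel will buffer is tunable; the vm.dirty_* sysctls cap dirty page-cache memory as a fraction of RAM. A quick way to inspect them (standard Linux procfs paths, no assumptions beyond a Linux system):

```shell
# Percentage of memory that may hold dirty data before background
# writeback kicks in, and before writers are throttled, respectively.
cat /proc/sys/vm/dirty_background_ratio
cat /proc/sys/vm/dirty_ratio
```

Lowering these values makes the kernel start writing out (and throttling writers) earlier, which is one common mitigation for exactly this fill-then-swap behavior.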