echo "1" > dirty_background_ratio
echo "1" > dirty_ratio
echo "3" > drop_caches
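For reference, the same tuning with explicit paths, a sketch assuming the echo commands above were run as root from inside /proc/sys/vm/:

```shell
# Equivalent settings via sysctl (requires root):
sysctl -w vm.dirty_background_ratio=1
sysctl -w vm.dirty_ratio=1

# Flush dirty pages first so drop_caches can free as much as possible;
# 3 = drop page cache plus dentries and inodes.
sync
echo 3 > /proc/sys/vm/drop_caches
```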
and vmstat says
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 2 355844 427256 3508 67544 10 21 315 180 459 781 5 3 80 12
then after doing a 10 GB dd operation vmstat says
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 355872 24532 8656 457200 10 21 338 497 456 763 5 3 79 13
So if I read the numbers correctly, around 400 MB of memory has now been used for caches. Hmm, that doesn't match setting dirty_background_ratio and dirty_ratio to 1. Since I have 1 GB of memory, only 1% (10 MB) should be allowed to be dirty before applications are forced to wait. But this is apparently not the case here.
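One thing worth noting here (not from the original post, but a property of these sysctls): dirty_ratio caps only dirty pages, i.e. data not yet written back to disk. Clean page-cache pages are not limited by it at all, so the cache column in vmstat can still grow to hundreds of MB while the dirty total stays under the cap. A quick way to watch the actual dirty total during the dd, plus the arithmetic behind the 10 MB expectation:

```shell
# Show how much data is actually dirty / under writeback right now
# (these are standard /proc/meminfo fields on Linux):
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Expected dirty ceiling with 1 GB (1024 MB) of RAM and dirty_ratio=1,
# in whole MB (integer arithmetic):
echo $(( 1024 * 1 / 100 ))
```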