Comment 372 for bug 620074

Revision history for this message
In , perlover (perlover-linux-kernel-bugs) wrote :

Confirming this bug.

My OS is Fedora Core release 6
Kernel: 2.6.22.14-72.fc6
2 CPUs: Intel® Xeon® CPU 5130 @ 2.00GHz
HDDs: SAS 3.0 Gb/s, FUJITSU
RAID: Adaptec 4800SAS
RAID10

How to test:
# dd if=/dev/zero of=testfile.1gb bs=1M count=1000

In another terminal, while the copy is running, you should run:
# vmstat 1
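
For convenience, here is a single-terminal variant of the same test (just a sketch, assuming bash; the vmstat.log file name is arbitrary):

#!/bin/bash
# Sketch: run the 1 GB sequential write in the background and
# sample vmstat once per second until dd finishes.
dd if=/dev/zero of=testfile.1gb bs=1M count=1000 &
dd_pid=$!
vmstat 1 > vmstat.log &
vmstat_pid=$!
wait "$dd_pid"
kill "$vmstat_pid"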

I see, for example:
r b swpd free buff cache si so bi bo in cs us sy id wa st
14 8 460 120716 280236 1509844 0 0 9 14 0 0 9 3 66 22 0
0 13 468 121936 279216 1550936 0 0 1368 47776 1927 4153 24 8 8 60 0
0 15 468 121516 280200 1551200 0 0 1408 3744 1726 2846 1 2 3 94 0
0 8 468 129804 280520 1545940 0 0 1612 4280 1854 4060 3 2 1 95 0
0 6 468 131388 281868 1546628 0 0 2140 3620 2020 4650 12 3 13 71 0
0 17 468 114220 282792 1571864 0 0 1208 3212 1647 2715 4 3 6 87 0
1 12 468 115356 283164 1570704 0 0 1420 18964 1718 2397 2 2 2 94 0
0 9 468 114320 283628 1570868 0 0 768 1204 1753 2831 3 1 0 96

iowait reaches 80-90% during 'dd'.
All other CPU tasks run very, very slowly ...

AND (!!!), the output of 'dd' is:
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 112.086 seconds, 9.4 MB/s
                                                   ^^^^^^^^^

For several years I have seen the following behaviour: if the server makes heavy use of the hard disk (the 'dd' example here reproduces it), iowait stays at 50-90% and many tasks freeze for several seconds (10-20 seconds, and sometimes more in my case). It is easy to reproduce with 'dd'. I cannot work around the problem with ionice, for example - iowait stays high even if I run the I/O tasks with ionice -c3 or ionice -c2 -n7! So every server running kernel 2.6.18 and later (I have read many threads) has this bug. People on forums write that kernel 2.6.30-rc2 has the bug too, and that FreeBSD stays responsive (mouse movement, video playback and other CPU tasks) during the same 'dd' test, unlike Linux ...
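
For reference, these are roughly the invocations I mean (reusing the same 'dd' test; the exact command lines are only examples):

# ionice -c3 dd if=/dev/zero of=testfile.1gb bs=1M count=1000
# ionice -c2 -n7 dd if=/dev/zero of=testfile.1gb bs=1M count=1000

Here -c3 is the idle scheduling class and -c2 -n7 is the best-effort class at the lowest priority, yet iowait stays just as high.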

I don't know what further information you need to track down this bug! It has existed since 2007 ...

Please help! Here are examples from my loaded server at various times (no 'dd' here - only the typical MySQL database tasks and Apache tasks):

procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r b swpd free buff cache si so bi bo in cs us sy id wa st
13 14 120 68460 574784 1286748 0 0 13 1 0 0 9 3 66 22 0
 1 11 120 74564 576080 1286976 0 0 1560 0 1632 3641 34 10 0 57 0
 0 12 120 69988 577572 1287352 0 0 1904 0 1969 3696 5 2 0 93 0
 0 11 120 66916 578984 1287860 0 0 1900 0 1809 3615 6 2 0 92 0
 0 11 120 64960 580424 1288028 0 0 1668 0 1642 2188 1 1 0 97 0
 0 11 120 72764 576508 1286788 0 0 1668 0 1681 2198 3 2 0 96 0
 1 11 120 71424 577940 1287300 0 0 1604 332 1575 2152 2 1 0 97 0
 3 11 120 58852 579528 1289100 0 0 2000 0 1984 3286 44 7 0 49 0
 1 11 120 75104 581012 1287472 0 0 1608 0 2119 2839 39 7 0 55 0
 0 13 120 72160 582572 1287672 0 0 1908 120 1645 2366 7 1 0 92 0

[root@63 logs]# vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 5 9 120 95540 570248 1276840 0 0 13 1 0 0 9 3 66 22 0
 1 7 120 93996 571428 1277440 0 0 1772 33712 2024 4341 28 4 11 57 0
 0 7 120 97980 572528 1277884 0 0 1444 300 1568 2339 13 1 17 70 0
 0 7 120 99900 573532 1278468 0 0 1504 0 1513 2364 4 2 3 90 0
 1 5 120 98656 574484 1278540 0 0 1052 400 1629 1924 2 1 0 97 0
 1 3 120 97924 574932 1278916 0 0 480 21108 2276 1987 11 2 47 40 0
 1 4 120 87280 575264 1279040 0 0 432 3676 2456 2654 23 2 40 35 0
 1 5 120 95856 575668 1279140 0 0 780 4128 2249 3097 26 2 25 47 0

Here you can see the consistently high 'wa' field. When my tasks freeze for 10-20 seconds, I see 80-90% in 'wa'.
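
If it helps, one way to watch only that column (a sketch; it assumes 'wa' is the 16th field, as in the vmstat headers above):

# vmstat 1 | awk 'NR > 2 { print $16 }'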

Please catch this bug!

Thanks