Comment 0 for bug 1297522

Holger Mauermann (mauermann) wrote:

After upgrading some virtual machines (KVM) to Trusty I noticed extremely high I/O wait times; for example, Munin graphs now show read I/O wait times of up to 200 seconds(!). See the attached image. Of course the real latency is no higher than before; it is only /proc/diskstats that reports completely wrong numbers...

$ cat /proc/diskstats | awk '$3=="vda" { print $7/$4, $11/$8 }'
1375.44 13825.1

According to the documentation for /proc/diskstats, field 4 is the total number of reads completed, field 7 is the total time spent reading in milliseconds, and fields 8 and 11 are the corresponding values for writes. So the numbers above are the average read and write latencies in milliseconds.
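
For reference, the same calculation for every device instead of just vda (a quick awk sketch, assuming the field layout from Documentation/iostats.txt, i.e. fields 4/7 for reads and 8/11 for writes):

$ awk '$4 > 0 || $8 > 0 { printf "%-8s %.2f %.2f\n", $3, ($4 ? $7/$4 : 0), ($8 ? $11/$8 : 0) }' /proc/diskstats

(columns: device, average read latency in ms, average write latency in ms; devices with no completed I/O are skipped)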

iostat shows the same weird numbers. Note the "await" column (the average time, in milliseconds, for I/O requests issued to the device to be served):

$ iostat -dx 1 60
Linux 3.13.0-19-generic (munin) 03/25/14 _x86_64_ (2 CPU)

Device:       rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz     await r_await  w_await  svctm  %util
vda             2.30    16.75   72.45   24.52   572.79   778.37    27.87     1.57    620.00  450.20  1121.83   1.71  16.54

Device:       rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz     await r_await  w_await  svctm  %util
vda             0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00      0.00    0.00     0.00   0.00   0.00

Device:       rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz     await r_await  w_await  svctm  %util
vda             0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00      0.00    0.00     0.00   0.00   0.00

Device:       rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz     await r_await  w_await  svctm  %util
vda             0.00    52.00    0.00   25.00     0.00   308.00    24.64     0.30  27813.92    0.00 27813.92   0.48   1.20
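
These await values are not something iostat makes up, either; it essentially just divides the deltas of fields 7 and 11 by the deltas of fields 4 and 8 between two samples. A rough way to double-check straight against the kernel counters (just a sketch, with the device and interval hard-coded):

$ awk '$3 == "vda" { print $4, $7, $8, $11 }' /proc/diskstats; sleep 60; awk '$3 == "vda" { print $4, $7, $8, $11 }' /proc/diskstats

(columns: reads completed, ms spent reading, writes completed, ms spent writing; the average await over the interval is the increase in the two time columns divided by the increase in the two request counters)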

I upgraded the host system to Trusty too; there, however, the /proc/diskstats output is still normal, as before.

$ uname -r
3.13.0-19-generic