Comment 0 for bug 1869958

Chris Sanders (chris.sanders) wrote:

From a user standpoint, fio testing of attached drives is expected to check whether the drives are performing to spec. In my experience, if you're lucky, the vendor will provide maximum bandwidth and IOPS figures for the device.

Using the MAAS fio tests I was surprised to find that a set of new machines was severely underperforming in the disk throughput tests. After reviewing the test, I now see that *all* fio tests use a 4k block size. That is not how drives are specified or tested for maximum bandwidth. Here's a direct example of the difference versus a 4M block size.

MAAS results: READ: bw=628MiB/s (658MB/s)
My own fio: READ: bw=1080MiB/s (1132MB/s)

The MAAS results seemed to imply the drives were not meeting spec, but running the test myself showed they were operating as expected and matched the vendor-specified values.
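
For anyone who wants to reproduce the comparison, here is a minimal sketch of the two runs; the $DRIVEPATH placeholder and the 10G size are assumptions to adapt (both runs are sequential reads, so non-destructive):

# 4k block size, approximating what the current MAAS test exercises
fio --name=bs_4k --ioengine=libaio --direct=1 --bs=4k --iodepth=64 --size=10G --readwrite=read --filename=$DRIVEPATH
# 4M block size, matching how vendors quote maximum sequential bandwidth
fio --name=bs_4M --ioengine=libaio --direct=1 --bs=4M --iodepth=64 --size=10G --readwrite=read --filename=$DRIVEPATH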

My recommendation is the following.
 * Test IOPS with 4k randread and randwrite
 * Test BW with 4M read and write

You could achieve this with something like:
Read IOPS: fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fio_test --bs=4k --iodepth=64 --size=10G --filename=$DRIVEPATH --readwrite=randread
Write IOPS: fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fio_test --bs=4k --iodepth=64 --size=10G --filename=$DRIVEPATH --readwrite=randwrite
Read BW: fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fio_test --bs=4M --iodepth=64 --size=100G --filename=$DRIVEPATH --readwrite=read
Write BW: fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fio_test --bs=4M --iodepth=64 --size=100G --filename=$DRIVEPATH --readwrite=write

With this you would get 4k random read/write IOPS and 4M sequential maximum read/write bandwidth. These align with the maximum specifications and give a much better view of whether a drive is operating within spec.
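
As a convenience, here is a rough sketch that wraps the four tests in one script; the device path, sizes, and iodepth are assumptions to adjust per deployment (and note the write tests are destructive to any data on the target):

#!/bin/sh
# Sketch only: run the four proposed fio tests back to back against one drive.
# DRIVEPATH and the sizes are placeholders; the write tests destroy data.
DRIVEPATH=/dev/sdX
COMMON="--randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --iodepth=64 --filename=$DRIVEPATH"
fio $COMMON --name=read_iops  --bs=4k --size=10G  --readwrite=randread
fio $COMMON --name=write_iops --bs=4k --size=10G  --readwrite=randwrite
fio $COMMON --name=read_bw    --bs=4M --size=100G --readwrite=read
fio $COMMON --name=write_bw   --bs=4M --size=100G --readwrite=write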