Activity log for bug #1869958
Date | Who | What changed | Old value | New value | Message |
---|---|---|---|---|---|
2020-03-31 20:31:43 | Chris Sanders | bug | added bug | ||
2020-03-31 20:31:56 | Chris Sanders | description | (identical to the new value, except the Write BW command was prefixed with `sudo -n`) | Fio testing for attached drives is expected, from a user standpoint, to check whether the drives are performing to spec. In my experience, if you're lucky, the vendor will provide maximum bandwidth and IOPS for the device. Using the MAAS fio tests, I was surprised to find that a set of new machines was severely underperforming in disk throughput tests. After reviewing the test, I now see that *all* fio tests use 4k block sizes. This is not how drives are specified or tested for bandwidth maximums. Here's a direct example of the difference vs a 4M block size. MAAS results: READ: bw=628MiB/s (658MB/s). My own fio: READ: bw=1080MiB/s (1132MB/s). The MAAS results seemed to imply the drives were not meeting spec, but running the test myself showed they were operating as expected and matched the vendor-specified values. My recommendation is the following: * Test IOPS with 4k randread and randwrite * Test BW with 4M read and write. You could achieve this with something like: Read IOPS: fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fio_test --bs=4k --iodepth=64 --size=10G --filename=$DRIVEPATH --readwrite=randread Write IOPS: fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fio_test --bs=4k --iodepth=64 --size=10G --filename=$DRIVEPATH --readwrite=randwrite Read BW: fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fio_test --bs=4M --iodepth=64 --size=100G --filename=$DRIVEPATH --readwrite=read Write BW: fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fio_test --bs=4M --iodepth=64 --size=100G --filename=$DRIVEPATH --readwrite=write With this you would get 4k random read/write IOPS and 4M sequential maximum read/write bandwidth. These align with the maximum specifications and give a much better view of whether a drive is operating within spec. | |
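The figures quoted in the description can be sanity-checked, since fio prints bandwidth in both MiB/s (2^20 bytes) and MB/s (10^6 bytes). A minimal Python sketch, not part of MAAS or the bug report: the `mib_to_mb` helper and `TESTS` table are illustrative names, `$DRIVEPATH` is kept as the placeholder used in the original commands, and the truncation to whole MB/s is an assumption that happens to match the figures fio printed here.

```python
def mib_to_mb(mib_per_s: float) -> float:
    """Convert MiB/s (2**20 bytes/s) to MB/s (10**6 bytes/s)."""
    return mib_per_s * 2**20 / 10**6

# The two results quoted in the description (truncated to whole MB/s):
print(int(mib_to_mb(628)))   # MAAS 4k test:   628 MiB/s -> 658 MB/s
print(int(mib_to_mb(1080)))  # manual 4M test: 1080 MiB/s -> 1132 MB/s

# The recommended test matrix: 4k random I/O for IOPS, 4M sequential for BW.
BASE = ("fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 "
        "--name=fio_test --iodepth=64 --filename=$DRIVEPATH")
TESTS = {
    "read_iops":  "--bs=4k --size=10G --readwrite=randread",
    "write_iops": "--bs=4k --size=10G --readwrite=randwrite",
    "read_bw":    "--bs=4M --size=100G --readwrite=read",
    "write_bw":   "--bs=4M --size=100G --readwrite=write",
}
for name, opts in TESTS.items():
    print(f"{name}: {BASE} {opts}")
```

The ~1.7x gap between the two READ numbers is the block-size effect the report describes, not a unit mismatch: both runs agree once converted, so the 4k result genuinely understates the drive's sequential maximum.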
2020-03-31 20:32:52 | Chris Sanders | bug | added subscriber Canonical IS BootStack | ||
2020-03-31 20:36:28 | Chris Sanders | bug | added subscriber Canonical Field Medium | ||
2020-04-07 07:11:25 | Alberto Donato | maas: importance | Undecided | Medium | |
2020-04-07 07:11:27 | Alberto Donato | maas: status | New | Triaged | |
2020-04-17 14:54:34 | Zachary Zehring | attachment added | storage-benchmark-big-block-size.py https://bugs.launchpad.net/maas/+bug/1869958/+attachment/5356198/+files/storage-benchmark-big-block-size.py | ||
2020-05-11 19:38:39 | Lee Trager | maas: assignee | Lee Trager (ltrager) | ||
2020-05-11 19:38:47 | Lee Trager | nominated for series | maas/2.7 | ||
2020-05-11 19:38:47 | Lee Trager | bug task added | maas/2.7 | ||
2020-05-11 19:38:57 | Lee Trager | maas: status | Triaged | In Progress | |
2020-05-11 19:39:05 | Lee Trager | maas/2.7: status | New | In Progress | |
2020-05-11 19:39:13 | Lee Trager | maas/2.7: assignee | Lee Trager (ltrager) | ||
2020-05-11 19:39:16 | Lee Trager | maas/2.7: milestone | 2.7.1rc1 | ||
2020-05-11 19:39:39 | Launchpad Janitor | merge proposal linked | https://code.launchpad.net/~ltrager/maas/+git/maas/+merge/383741 | ||
2020-05-12 18:48:45 | MAAS Lander | maas: status | In Progress | Fix Committed | |
2020-05-12 18:48:45 | MAAS Lander | maas: milestone | 2.8.0rc1 | ||
2020-05-15 22:59:00 | Launchpad Janitor | merge proposal linked | https://code.launchpad.net/~ltrager/maas/+git/maas/+merge/384055 | ||
2020-05-15 23:00:05 | Lee Trager | nominated for series | maas/2.3 | ||
2020-05-15 23:00:05 | Lee Trager | bug task added | maas/2.3 | ||
2020-05-15 23:00:05 | Lee Trager | nominated for series | maas/2.6 | ||
2020-05-15 23:00:05 | Lee Trager | bug task added | maas/2.6 | ||
2020-05-15 23:00:05 | Lee Trager | nominated for series | maas/2.4 | ||
2020-05-15 23:00:05 | Lee Trager | bug task added | maas/2.4 | ||
2020-05-15 23:00:13 | Lee Trager | maas/2.3: status | New | In Progress | |
2020-05-15 23:00:15 | Lee Trager | maas/2.4: status | New | Incomplete | |
2020-05-15 23:00:18 | Lee Trager | maas/2.4: status | Incomplete | In Progress | |
2020-05-15 23:00:21 | Lee Trager | maas/2.6: status | New | In Progress | |
2020-05-15 23:00:24 | Lee Trager | maas/2.3: importance | Undecided | Medium | |
2020-05-15 23:00:26 | Lee Trager | maas/2.4: importance | Undecided | Medium | |
2020-05-15 23:00:27 | Lee Trager | maas/2.6: importance | Undecided | Medium | |
2020-05-15 23:00:29 | Lee Trager | maas/2.7: importance | Undecided | Medium | |
2020-05-15 23:00:32 | Lee Trager | maas/2.6: assignee | Lee Trager (ltrager) | ||
2020-05-15 23:00:34 | Lee Trager | maas/2.4: assignee | Lee Trager (ltrager) | ||
2020-05-15 23:00:36 | Lee Trager | maas/2.3: assignee | Lee Trager (ltrager) | ||
2020-05-15 23:03:26 | Launchpad Janitor | merge proposal linked | https://code.launchpad.net/~ltrager/maas/+git/maas/+merge/384056 | ||
2020-05-15 23:21:53 | MAAS Lander | maas/2.7: status | In Progress | Fix Committed | |
2020-05-15 23:50:21 | MAAS Lander | maas/2.6: status | In Progress | Fix Committed | |
2020-05-15 23:50:21 | MAAS Lander | maas/2.6: milestone | next | ||
2020-05-16 00:19:45 | Launchpad Janitor | merge proposal linked | https://code.launchpad.net/~ltrager/maas/+git/maas/+merge/384057 | ||
2020-05-16 00:21:44 | Lee Trager | bug task deleted | maas/2.3 | ||
2020-05-16 00:56:54 | MAAS Lander | maas/2.4: status | In Progress | Fix Committed | |
2020-05-16 00:56:54 | MAAS Lander | maas/2.4: milestone | next | ||
2020-06-04 12:41:09 | Alberto Donato | maas: status | Fix Committed | Fix Released | |
2020-07-21 07:13:05 | Alberto Donato | maas/2.7: status | Fix Committed | Fix Released | |
2021-08-24 09:57:43 | Björn Tillenius | maas/2.6: status | Fix Committed | Fix Released | |
2021-08-24 09:57:43 | Björn Tillenius | maas/2.6: milestone | next | ||
2021-08-24 09:57:44 | Björn Tillenius | maas/2.4: status | Fix Committed | Fix Released | |
2021-08-24 09:57:44 | Björn Tillenius | maas/2.4: milestone | next |