I ran thousands of disk performance tests and found many unexplained performance drops that could not be attributed to any other process. Although the following tests use an LVM volume, the same thing happens on a "normal" single device:
$ hdparm -tT --direct /dev/mapper/VGHOME-LGHOME
/dev/mapper/VGHOME-LGHOME:
Timing O_DIRECT cached reads: 2158 MB in 2.00 seconds = 1078.39 MB/sec
Timing O_DIRECT disk reads: 2118 MB in 3.00 seconds = 705.58 MB/sec
$ hdparm -tT --direct /dev/mapper/VGHOME-LGHOME
/dev/mapper/VGHOME-LGHOME:
Timing O_DIRECT cached reads: 294 MB in 2.03 seconds = 144.49 MB/sec
Timing O_DIRECT disk reads: 178 MB in 3.03 seconds = 58.67 MB/sec
$ hdparm -tT --direct /dev/mapper/VGHOME-LGHOME
/dev/mapper/VGHOME-LGHOME:
Timing O_DIRECT cached reads: 280 MB in 2.01 seconds = 139.10 MB/sec
Timing O_DIRECT disk reads: 602 MB in 3.02 seconds = 199.42 MB/sec
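To quantify the run-to-run spread shown above, the MB/sec figures can be extracted from repeated runs and summarized. This is a sketch: it parses captured `hdparm` output (here fed the disk-read lines from this report; in practice pipe in something like `for i in $(seq 10); do hdparm -tT --direct /dev/mapper/VGHOME-LGHOME; sleep 5; done`):

```shell
#!/bin/sh
# Extract the MB/sec value (second-to-last field) from each "disk reads" line
# and report the min/max rate across runs.
awk '/disk reads/ { print $(NF-1) }' <<'EOF' | sort -n |
 Timing O_DIRECT disk reads: 2118 MB in 3.00 seconds = 705.58 MB/sec
 Timing O_DIRECT disk reads: 178 MB in 3.03 seconds =  58.67 MB/sec
 Timing O_DIRECT disk reads: 602 MB in 3.02 seconds = 199.42 MB/sec
EOF
awk 'NR==1 { min=$1 } { max=$1 } END { printf "min=%s MB/sec max=%s MB/sec\n", min, max }'
```

On the three runs above this reports a roughly 12x spread between the slowest and fastest run, which makes the variance easy to track over time.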
For a long time we suspected the virtual environment (VMware vSphere), but in the meantime I believe this is an Ubuntu problem in particular "hardware" environments, paired with larger volumes and a lot of inodes.
Same here on 16.04 and 14.04 ...