On Ubuntu 16.04 with fio-2.2.10 I cannot reproduce this when running against the null_blk device. I doubled up the dashes and ran like so:
modprobe null_blk gb=100
cat <<EOF > fio-iolog-file
fio version 2 iolog
/dev/nullb0 add
/dev/nullb0 open
/dev/nullb0 read 5898366976 4096
/dev/nullb0 read 72397074432 4096
/dev/nullb0 read 82459553792 4096
/dev/nullb0 read 45965324288 4096
/dev/nullb0 read 39305207808 4096
/dev/nullb0 read 82160799744 4096
/dev/nullb0 read 13947228160 4096
/dev/nullb0 read 9487921152 4096
/dev/nullb0 read 76814397440 4096
/dev/nullb0 close
EOF
fio --ioengine=sync --name fio.iolog --read_iolog fio-iolog-file
fio.iolog: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.2.10
Starting 1 process
fio.iolog: (groupid=0, jobs=1): err= 0: pid=27253: Tue Jul 4 07:03:06 2017
read : io=36864B, bw=36000KB/s, iops=9000, runt= 1msec
clat (usec): min=5, max=32, avg= 8.67, stdev= 8.79
lat (usec): min=5, max=32, avg= 8.78, stdev= 8.74
clat percentiles (usec):
| 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 5],
| 30.00th=[ 5], 40.00th=[ 5], 50.00th=[ 6], 60.00th=[ 6],
| 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 32], 95.00th=[ 32],
| 99.00th=[ 32], 99.50th=[ 32], 99.90th=[ 32], 99.95th=[ 32],
| 99.99th=[ 32]
lat (usec) : 10=88.89%, 50=11.11%
cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=5
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=9/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=36KB, aggrb=36000KB/s, minb=36000KB/s, maxb=36000KB/s, mint=1msec, maxt=1msec
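For anyone reproducing this with a larger replay, a version 2 iolog like the one above can be generated rather than hand-written. A minimal bash sketch (the device path, device size, and read count are illustrative assumptions, and the offsets are random rather than taken from a real trace):

```shell
#!/usr/bin/env bash
# Emit a "fio version 2 iolog" file with N random 4K-aligned reads.
# Assumed values; adjust to match your null_blk setup.
dev=/dev/nullb0
size=$((100 * 1024 * 1024 * 1024))   # 100 GiB, matching gb=100
bs=4096
n=9

{
  echo "fio version 2 iolog"
  echo "$dev add"
  echo "$dev open"
  for _ in $(seq "$n"); do
    # Pick a random block index, then convert to a byte offset so
    # every read is block-aligned and inside the device.
    off=$(( (RANDOM * 32768 + RANDOM) % (size / bs) * bs ))
    echo "$dev read $off $bs"
  done
  echo "$dev close"
} > fio-iolog-file
```

The resulting file can be replayed exactly as in the commands above with `fio --ioengine=sync --name fio.iolog --read_iolog fio-iolog-file`.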
I wonder if this has been fixed in later fio releases...