Using fio to test raw disks causes an sdX disk to gain 3 new partitions

Bug #1943304 reported by Fred Kimmy
This bug affects 1 person
Affects Status Importance Assigned to Milestone
kunpeng920
Won't Fix
Low
Unassigned
Ubuntu-20.04-hwe
Won't Fix
Low
Unassigned

Bug Description

[Bug Description]
After using fio to write directly to the raw disks, one of the sdX disks unexpectedly gains 3 new partitions.

[Steps to Reproduce]
1)fio --ioengine=libaio --bs=4K --rw=write --ramp_time=5 --runtime=30 --direct=1 --group_reporting=1 --numjobs=1 --iodepth=128 --time_based --name=job1 --filename=/dev/sda --name=job2 --filename=/dev/sdb --name=job3 --filename=/dev/sdc --name=job4 --filename=/dev/sdd --name=job5 --filename=/dev/sde --name=job6 --filename=/dev/sdf --name=job7 --filename=/dev/sdg --name=job8 --filename=/dev/sdh --name=job9 --filename=/dev/sdi --name=job10 --filename=/dev/sdj --name=job11 --filename=/dev/sdk --name=job12 --filename=/dev/sdl
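The 12-disk command line in step 1 is long and repetitive; it can be generated with a small loop instead of being typed out. A sketch, assuming the same bash environment and the device names from this report:

```shell
#!/bin/bash
# Build the repeated --name=jobN --filename=/dev/sdX arguments from a
# device list (device names taken from this report; adjust for your system).
args=""
i=0
for d in sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl; do
  i=$((i+1))
  args="$args --name=job$i --filename=/dev/$d"
done
# Print the full command rather than running it, so it can be inspected first.
echo fio --ioengine=libaio --bs=4K --rw=write --ramp_time=5 --runtime=30 \
  --direct=1 --group_reporting=1 --numjobs=1 --iodepth=128 --time_based $args
```

Dropping the `echo` runs the same invocation as step 1.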

[Actual Results]
root@root:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 48.9M 1 loop /snap/core18/1949
loop1 7:1 0 61.7M 1 loop /snap/lxd/19206
loop2 7:2 0 27M 1 loop /snap/snapd/10709
sda 8:0 0 3.7T 0 disk
sdb 8:16 0 3.7T 0 disk
sdc 8:32 0 1.5T 0 disk
sdd 8:48 0 1.5T 0 disk
sde 8:64 0 1.5T 0 disk
sdf 8:80 0 1.5T 0 disk
sdg 8:96 0 1.5T 0 disk
sdh 8:112 0 1.5T 0 disk
sdi 8:128 0 1.5T 0 disk
sdj 8:144 0 1.5T 0 disk
sdk 8:160 0 1.5T 0 disk
sdl 8:176 0 1.5T 0 disk
nvme0n1 259:0 0 2.9T 0 disk
nvme1n1 259:1 0 5.8T 0 disk
├─nvme1n1p1 259:2 0 512M 0 part /boot/efi
├─nvme1n1p2 259:3 0 500M 0 part /boot
├─nvme1n1p3 259:4 0 3.9G 0 part [SWAP]
└─nvme1n1p4 259:5 0 5.8T 0 part /
root@root:~# fio --ioengine=libaio --bs=4K --rw=write --ramp_time=5 --runtime=30 --direct=1 --group_reporting=1 --numjobs=1 --iodepth=128 --time_based --name=job1 --filename=/dev/sda --name=job2 --filename=/dev/sdb --name=job3 --filename=/dev/sdc --name=job4 --filename=/dev/sdd --name=job5 --filename=/dev/sde --name=job6 --filename=/dev/sdf --name=job7 --filename=/dev/sdg --name=job8 --filename=/dev/sdh --name=job9 --filename=/dev/sdi --name=job10 --filename=/dev/sdj --name=job11 --filename=/dev/sdk --name=job12 --filename=/dev/sdl
job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job4: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job5: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job6: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job7: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job8: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job9: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job10: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job11: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job12: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.16
Starting 12 processes
Jobs: 12 (f=5): [W(2),f(2),W(1),f(3),W(1),f(3)][100.0%][w=4293MiB/s][w=1099k IOPS][eta 00m:00s]
job1: (groupid=0, jobs=12): err= 0: pid=28722: Wed Jun 30 15:26:21 2021
  write: IOPS=1133k, BW=4428MiB/s (4643MB/s)(130GiB/30003msec); 0 zone resets
    slat (nsec): min=1410, max=447070, avg=7126.98, stdev=3734.95
    clat (usec): min=18, max=12132, avg=1347.03, stdev=464.01
     lat (usec): min=35, max=12134, avg=1354.31, stdev=463.48
    clat percentiles (usec):
     | 1.00th=[ 461], 5.00th=[ 611], 10.00th=[ 791], 20.00th=[ 1221],
     | 30.00th=[ 1254], 40.00th=[ 1287], 50.00th=[ 1319], 60.00th=[ 1352],
     | 70.00th=[ 1385], 80.00th=[ 1418], 90.00th=[ 1483], 95.00th=[ 2311],
     | 99.00th=[ 2933], 99.50th=[ 2966], 99.90th=[ 3032], 99.95th=[ 3064],
     | 99.99th=[11207]
   bw ( MiB/s): min= 3958, max= 6033, per=100.00%, avg=4430.76, stdev=56.73, samples=712
   iops : min=1013494, max=1544664, avg=1134274.69, stdev=14524.00, samples=712
  lat (usec) : 20=0.01%, 50=0.01%, 100=0.03%, 250=0.08%, 500=2.10%
  lat (usec) : 750=7.22%, 1000=2.32%
  lat (msec) : 2=79.27%, 4=8.96%, 10=0.01%, 20=0.02%
  cpu : usr=12.55%, sys=68.01%, ctx=2231557, majf=0, minf=4101
  IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=0,34006329,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=4428MiB/s (4643MB/s), 4428MiB/s-4428MiB/s (4643MB/s-4643MB/s), io=130GiB (139GB), run=30003-30003msec

Disk stats (read/write):
  sda: ios=97/462821, merge=0/1313148, ticks=144/1147040, in_queue=1147185, util=99.18%
  sdb: ios=49/459504, merge=0/1309083, ticks=237/1148685, in_queue=1148922, util=99.29%
  sdc: ios=102/3263949, merge=0/20427, ticks=17/159267, in_queue=159285, util=99.01%
  sdd: ios=102/3194397, merge=0/118929, ticks=16/230889, in_queue=230905, util=99.06%
  sde: ios=102/3206895, merge=0/37942, ticks=19/214630, in_queue=214650, util=99.14%
  sdf: ios=102/3292284, merge=0/85977, ticks=21/266151, in_queue=266173, util=99.14%
  sdg: ios=102/3505574, merge=0/353194, ticks=21/449764, in_queue=449786, util=99.22%
  sdh: ios=92/3247201, merge=0/174121, ticks=27/268314, in_queue=268342, util=99.42%
  sdi: ios=92/2770423, merge=0/1703955, ticks=22/862250, in_queue=862272, util=99.46%
  sdj: ios=92/3157405, merge=0/576692, ticks=20/373061, in_queue=373082, util=99.63%
  sdk: ios=46/3483395, merge=0/20880, ticks=12/158566, in_queue=158579, util=99.67%
  sdl: ios=0/3489166, merge=0/191099, ticks=0/276216, in_queue=276216, util=99.76%
root@root:~# dmesg
[ 2833.606231] sdc: AHDI sdc1 sdc2 sdc3
root@root:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 48.9M 1 loop /snap/core18/1949
loop1 7:1 0 61.7M 1 loop /snap/lxd/19206
loop2 7:2 0 27M 1 loop /snap/snapd/10709
sda 8:0 0 3.7T 0 disk
sdb 8:16 0 3.7T 0 disk
sdc 8:32 0 1.5T 0 disk
├─sdc1 8:33 0 648.3G 0 part
├─sdc2 8:34 0 28G 0 part
└─sdc3 8:35 0 577G 0 part
sdd 8:48 0 1.5T 0 disk
sde 8:64 0 1.5T 0 disk
sdf 8:80 0 1.5T 0 disk
sdg 8:96 0 1.5T 0 disk
sdh 8:112 0 1.5T 0 disk
sdi 8:128 0 1.5T 0 disk
sdj 8:144 0 1.5T 0 disk
sdk 8:160 0 1.5T 0 disk
sdl 8:176 0 1.5T 0 disk
nvme0n1 259:0 0 2.9T 0 disk
nvme1n1 259:1 0 5.8T 0 disk
├─nvme1n1p1 259:2 0 512M 0 part /boot/efi
├─nvme1n1p2 259:3 0 500M 0 part /boot
├─nvme1n1p3 259:4 0 3.9G 0 part [SWAP]
└─nvme1n1p4 259:5 0 5.8T 0 part /
root@root:~#
root@root:/var/log# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 48.9M 1 loop /snap/core18/1949
loop1 7:1 0 61.7M 1 loop /snap/lxd/19206
loop2 7:2 0 27M 1 loop /snap/snapd/10709
sda 8:0 0 3.7T 0 disk
sdb 8:16 0 3.7T 0 disk
sdc 8:32 0 1.5T 0 disk
├─sdc1 8:33 0 648.3G 0 part
├─sdc2 8:34 0 28G 0 part
└─sdc3 8:35 0 577G 0 part
sdd 8:48 0 1.5T 0 disk
sde 8:64 0 1.5T 0 disk
sdf 8:80 0 1.5T 0 disk
sdg 8:96 0 1.5T 0 disk
sdh 8:112 0 1.5T 0 disk
sdi 8:128 0 1.5T 0 disk
sdj 8:144 0 1.5T 0 disk
sdk 8:160 0 1.5T 0 disk
sdl 8:176 0 1.5T 0 disk
nvme0n1 259:0 0 2.9T 0 disk
nvme1n1 259:1 0 5.8T 0 disk
├─nvme1n1p1 259:2 0 512M 0 part /boot/efi
├─nvme1n1p2 259:3 0 500M 0 part /boot
├─nvme1n1p3 259:4 0 3.9G 0 part [SWAP]
└─nvme1n1p4 259:5 0 5.8T 0 part /
root@root:/var/log# fdisk /dev/sdc

Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

The old atari signature will be removed by a write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x4bc88dde.

Command (m for help): d
No partition is defined yet!

Command (m for help): n
Partition type
   p primary (0 primary, 0 extended, 4 free)
   e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-3125627567, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-3125627567, default 3125627567):

Created a new partition 1 of type 'Linux' and of size 1.5 TiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

root@root:/var/log# fio --ioengine=libaio --bs=4K --rw=write --ramp_time=5 --runtime=30 --direct=1 --group_reporting=1 --numjobs=1 --iodepth=128 --time_based --name=job1 --filename=/dev/sda --name=job2 --filename=/dev/sdb --name=job3 --filename=/dev/sdc --name=job4 --filename=/dev/sdd --name=job5 --filename=/dev/sde --name=job6 --filename=/dev/sdf --name=job7 --filename=/dev/sdg --name=job8 --filename=/dev/sdh --name=job9 --filename=/dev/sdi --name=job10 --filename=/dev/sdj --name=job11 --filename=/dev/sdk --name=job12 --filename=/dev/sdl
job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job4: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job5: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job6: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job7: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job8: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job9: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job10: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job11: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job12: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.16
Starting 12 processes
Jobs: 12 (f=11): [W(7),f(1),W(4)][100.0%][w=4716MiB/s][w=1207k IOPS][eta 00m:00s]
job1: (groupid=0, jobs=12): err= 0: pid=29501: Wed Jun 30 15:40:04 2021
  write: IOPS=1199k, BW=4682MiB/s (4909MB/s)(137GiB/30003msec); 0 zone resets
    slat (nsec): min=1370, max=402200, avg=6595.88, stdev=3632.93
    clat (usec): min=17, max=27205, avg=1273.99, stdev=504.25
     lat (usec): min=36, max=27207, avg=1280.73, stdev=504.22
    clat percentiles (usec):
     | 1.00th=[ 424], 5.00th=[ 578], 10.00th=[ 652], 20.00th=[ 799],
     | 30.00th=[ 1221], 40.00th=[ 1254], 50.00th=[ 1287], 60.00th=[ 1336],
     | 70.00th=[ 1369], 80.00th=[ 1418], 90.00th=[ 1516], 95.00th=[ 2311],
     | 99.00th=[ 2933], 99.50th=[ 2966], 99.90th=[ 3032], 99.95th=[ 3097],
     | 99.99th=[11207]
   bw ( MiB/s): min= 4024, max= 6549, per=100.00%, avg=4687.88, stdev=67.95, samples=712
   iops : min=1030346, max=1676566, avg=1200095.88, stdev=17395.91, samples=712
  lat (usec) : 20=0.01%, 50=0.01%, 100=0.07%, 250=0.04%, 500=2.71%
  lat (usec) : 750=12.93%, 1000=7.75%
  lat (msec) : 2=68.00%, 4=8.46%, 10=0.01%, 20=0.02%, 50=0.01%
  cpu : usr=12.91%, sys=66.87%, ctx=2277504, majf=0, minf=3532
  IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=0,35959866,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=4682MiB/s (4909MB/s), 4682MiB/s-4682MiB/s (4909MB/s-4909MB/s), io=137GiB (147GB), run=30003-30003msec

Disk stats (read/write):
  sda: ios=46/460501, merge=0/1320856, ticks=131/1151175, in_queue=1151306, util=99.11%
  sdb: ios=49/457915, merge=0/1320303, ticks=227/1153294, in_queue=1153520, util=99.25%
  sdc: ios=101/3213000, merge=0/23259, ticks=18/141661, in_queue=141680, util=99.02%
  sdd: ios=46/3221001, merge=0/39097, ticks=6/155499, in_queue=155505, util=99.02%
  sde: ios=46/3309793, merge=0/603613, ticks=6/422147, in_queue=422152, util=99.12%
  sdf: ios=46/3227918, merge=0/338632, ticks=6/346982, in_queue=346988, util=99.12%
  sdg: ios=46/3432633, merge=0/40878, ticks=6/155861, in_queue=155867, util=99.20%
  sdh: ios=0/3495196, merge=0/39021, ticks=0/161502, in_queue=161501, util=99.49%
  sdi: ios=0/3281372, merge=0/645373, ticks=0/466783, in_queue=466783, util=99.54%
  sdj: ios=0/2409202, merge=0/2421249, ticks=0/947723, in_queue=947723, util=99.64%
  sdk: ios=0/4287506, merge=0/1003590, ticks=0/1020341, in_queue=1020340, util=99.68%
  sdl: ios=0/3534138, merge=0/39219, ticks=0/160944, in_queue=160944, util=99.77%
root@root:/var/log# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 48.9M 1 loop /snap/core18/1949
loop1 7:1 0 61.7M 1 loop /snap/lxd/19206
loop2 7:2 0 27M 1 loop /snap/snapd/10709
sda 8:0 0 3.7T 0 disk
sdb 8:16 0 3.7T 0 disk
sdc 8:32 0 1.5T 0 disk
├─sdc1 8:33 0 648.3G 0 part
├─sdc2 8:34 0 28G 0 part
└─sdc3 8:35 0 577G 0 part
sdd 8:48 0 1.5T 0 disk
sde 8:64 0 1.5T 0 disk
sdf 8:80 0 1.5T 0 disk
sdg 8:96 0 1.5T 0 disk
sdh 8:112 0 1.5T 0 disk
sdi 8:128 0 1.5T 0 disk
sdj 8:144 0 1.5T 0 disk
sdk 8:160 0 1.5T 0 disk
sdl 8:176 0 1.5T 0 disk
nvme0n1 259:0 0 2.9T 0 disk
nvme1n1 259:1 0 5.8T 0 disk
├─nvme1n1p1 259:2 0 512M 0 part /boot/efi
├─nvme1n1p2 259:3 0 500M 0 part /boot
├─nvme1n1p3 259:4 0 3.9G 0 part [SWAP]
└─nvme1n1p4 259:5 0 5.8T 0 part /
root@root:/var/log# dmesg
[ 2833.606231] sdc: AHDI sdc1 sdc2 sdc3
[ 3550.965095] sdc: AHDI sdc1 sdc2 sdc3
[ 3565.665782] sdc: AHDI sdc1 sdc2 sdc3
[ 3588.200505] sdc: AHDI sdc1 sdc2 sdc3
[ 3607.299839] sdc: sdc1
[ 3621.339356] sdc: sdc1
[ 3656.772022] sdc: AHDI sdc1 sdc2 sdc3
root@root:/var/log#
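The repeated "AHDI" lines are the clue: the kernel's Atari/AHDI partition probe has no magic number to verify, so a sector of fio's write data can look like a valid AHDI root sector by chance, and each rescan then "discovers" partitions. As a rough illustration (the offsets are my reading of the kernel's block/partitions/atari.* layout and should be checked against the source), the probe reads 12-byte partition entries starting at byte 454 (0x1C6) of sector 0: a flag byte, a 3-character ID, then big-endian start and size:

```shell
# Build a fake 512-byte "root sector" whose first partition entry would look
# valid to an AHDI-style probe: flag 0x01 (valid bit set), ID "GEM",
# start sector 2, size 1000 sectors. Any write that happens to leave
# similar bytes at these offsets could trigger the same misdetection.
img=/tmp/ahdi.img
dd if=/dev/zero of="$img" bs=512 count=1 status=none
printf '\001GEM\000\000\000\002\000\000\003\350' |
  dd of="$img" bs=1 seek=454 conv=notrunc status=none
# Show the 12 bytes the probe would interpret as partition entry 1.
od -A d -t x1 -j 454 -N 12 "$img"
```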

[Expected Results]
No new partitions appear on any of the tested disks.

[Reproducibility]
100%
[Additional information]
(Firmware version, kernel version, affected hardware, etc. if required):
OS: Ubuntu 20.04.2
Driver (vermagic): 5.8.0-41-generic SMP mod_unload aarch64
[Resolution]

fio 2.10 does not exhibit this bug; fio 3.16 does.

Revision history for this message
Ike Panhc (ikepanhc) wrote :

fio is 3.16 in focal. I will try to reproduce this and see whether the issue is still reproducible with a later version.

$ rmadison fio
 fio | 1.59-1 | precise/universe | source, amd64, armel, armhf, i386, powerpc
 fio | 2.1.3-1 | trusty/universe | source, amd64, arm64, armhf, i386, powerpc, ppc64el
 fio | 2.2.10-1ubuntu1~ubuntu14.04.1 | trusty-backports/universe | source, amd64, arm64, armhf, i386, powerpc, ppc64el
 fio | 2.2.10-1ubuntu1 | xenial/universe | source, amd64, arm64, armhf, i386, powerpc, ppc64el, s390x
 fio | 3.1-1 | bionic/universe | source, amd64, arm64, armhf, i386, ppc64el, s390x
 fio | 3.16-1 | focal/universe | source, amd64, arm64, armhf, ppc64el, riscv64, s390x
 fio | 3.25-2 | hirsute/universe | source, amd64, arm64, armhf, ppc64el, riscv64, s390x
 fio | 3.25-2 | impish/universe | source, amd64, arm64, armhf, ppc64el, riscv64, s390x

Revision history for this message
Ike Panhc (ikepanhc) wrote :

There is only one spare hard drive in our machine, and I have put it in a loop running fio. So far it has passed 40 runs without any unexpected partition appearing on sdb. Does this issue have low reproducibility?

$ cat fio.sh
#!/bin/bash

dmesg --clear
for i in `seq 1 100`; do
 echo ========== $i ==========
 fio --ioengine=libaio --bs=4K --rw=write --ramp_time=5 --runtime=30 --direct=1 --group_reporting=1 --numjobs=1 --iodepth=128 --time_based --name=job1 --filename=/dev/sdb
 dmesg
done

Revision history for this message
Christian Ehrhardt  (paelzer) wrote : Re: [Bug 1943304] Re: use fio to test this disk and sdX disk add 3 new partitioning

On Tue, Sep 14, 2021 at 9:55 AM Ike Panhc <email address hidden> wrote:
>
> There are only 1 spare harddrive in our machine and I put it in loop
> running fio. So far it has been pass 40 run without seeing any
> unexpected partition in sdb. Do this issue have low Reproducibility?

Hi,
just a thought while seeing this fly by: could it be that the partition
in the reported case was always there, and the fio load only happens to
make the kernel re-read the partition table and populate the entries?

Revision history for this message
Andrew Cloke (andrew-cloke) wrote :

Marking as incomplete while waiting for more information about how to reproduce the issue.

Changed in kunpeng920:
status: New → Incomplete
description: updated
Revision history for this message
Ike Panhc (ikepanhc) wrote :

Thanks for the update, but we don't have a system with 12 disks. I will see if there is any way to simulate 12 virtual disks.

Revision history for this message
Ike Panhc (ikepanhc) wrote :

I used KVM to simulate 12 virtual disks but could not reproduce the issue. I assume this issue is either kunpeng920-specific or requires real hard drives to reproduce.

$ sudo dmesg --clear
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 74.5G 0 disk
├─vda1 252:1 0 512M 0 part /boot/efi
└─vda2 252:2 0 74G 0 part /
vdb 252:16 0 37.3G 0 disk
vdc 252:32 0 37.3G 0 disk
vdd 252:48 0 37.3G 0 disk
vde 252:64 0 37.3G 0 disk
vdf 252:80 0 37.3G 0 disk
vdg 252:96 0 37.3G 0 disk
vdh 252:112 0 37.3G 0 disk
vdi 252:128 0 37.3G 0 disk
vdj 252:144 0 37.3G 0 disk
vdk 252:160 0 37.3G 0 disk
vdl 252:176 0 37.3G 0 disk
vdm 252:192 0 37.3G 0 disk
$ sudo fio --ioengine=libaio --bs=4K --rw=write --ramp_time=5 --runtime=30 --direct=1 --group_reporting=1 --numjobs=1 --iodepth=128 --time_based --name=job1 --filename=/dev/vdm --name=job2 --filename=/dev/vdb --name=job3 --filename=/dev/vdc --name=job4 --filename=/dev/vdd --name=job5 --filename=/dev/vde --name=job6 --filename=/dev/vdf --name=job7 --filename=/dev/vdg --name=job8 --filename=/dev/vdh --name=job9 --filename=/dev/vdi --name=job10 --filename=/dev/vdj --name=job11 --filename=/dev/vdk --name=job12 --filename=/dev/vdl > /dev/null
$ dmesg
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 74.5G 0 disk
├─vda1 252:1 0 512M 0 part /boot/efi
└─vda2 252:2 0 74G 0 part /
vdb 252:16 0 37.3G 0 disk
vdc 252:32 0 37.3G 0 disk
vdd 252:48 0 37.3G 0 disk
vde 252:64 0 37.3G 0 disk
vdf 252:80 0 37.3G 0 disk
vdg 252:96 0 37.3G 0 disk
vdh 252:112 0 37.3G 0 disk
vdi 252:128 0 37.3G 0 disk
vdj 252:144 0 37.3G 0 disk
vdk 252:160 0 37.3G 0 disk
vdl 252:176 0 37.3G 0 disk
vdm 252:192 0 37.3G 0 disk

Revision history for this message
Ike Panhc (ikepanhc) wrote :

Since there is no data loss and no kernel crash, I will set the importance to Low and try to reproduce the issue or find a fix.

Changed in kunpeng920:
importance: Undecided → Low
Revision history for this message
Ike Panhc (ikepanhc) wrote :

We need 12 physical disks to reproduce this, and we currently do not have such an environment. Since this issue causes no data loss or system crash, I will set this bug to "Won't Fix"; if we find a patch that fixes it, we can re-open.

Changed in kunpeng920:
status: Incomplete → Won't Fix