Comment 8 for bug 1756315

Matthew Ruffell (mruffell) wrote:

Hello Alexandre,

I tried to reproduce this bug, and I believe it has been fixed.

I started an i3.4xlarge instance on AWS, running Xenial:

$ uname -rv
4.4.0-1112-aws #124-Ubuntu SMP Fri Jul 24 11:10:25 UTC 2020
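For reference, launching a similar instance with the AWS CLI looks roughly like
this; the AMI ID, key name and security group below are placeholders, not the
ones I actually used:

$ aws ec2 run-instances \
    --image-id ami-XXXXXXXX \
    --instance-type i3.4xlarge \
    --key-name my-key \
    --security-group-ids sg-XXXXXXXX \
    --count 1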

From there, I checked the NVMe disks:

$ lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda      202:0    0    8G  0 disk
`-xvda1   202:1    0    8G  0 part /
nvme0n1   259:0    0  1.7T  0 disk
nvme1n1   259:1    0  1.7T  0 disk
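
If you want to confirm that the instance-store NVMe devices actually advertise
discard support (which is what the slow mkfs/fstrim in this bug depends on),
something like the following should do it; non-zero DISC-GRAN/DISC-MAX values
mean discard is available:

$ lsblk -D /dev/nvme0n1 /dev/nvme1n1
$ cat /sys/block/nvme0n1/queue/discard_max_bytes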

I made a RAID 0 array:

$ sudo mdadm --create --verbose --level=0 /dev/md0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started
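
If you want to double-check the array before formatting, /proc/mdstat and mdadm
will show the member devices and chunk size:

$ cat /proc/mdstat
$ sudo mdadm --detail /dev/md0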

And formatted it:

$ time sudo mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=32, agsize=28989568 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0
data     =                       bsize=4096   blocks=927666176, imaxpct=5
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=452968, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

real 0m24.414s
user 0m0.000s
sys 0m7.664s
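
As a rough baseline, mkfs.xfs can be told to skip the discard pass entirely
with -K; comparing the two timings isolates how long the discard itself takes
(note this re-formats the array):

$ time sudo mkfs.xfs -f -K /dev/md0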

I also tried running fstrim:

$ sudo mkdir /mnt/disk
$ sudo mount /dev/md0 /mnt/disk
$ sudo fstrim /mnt/disk
$ time sudo fstrim /mnt/disk

real 0m22.083s
user 0m0.000s
sys 0m7.560s
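
fstrim also has a verbose flag that reports how many bytes were actually
discarded on each pass, which is handy when comparing runs:

$ sudo fstrim -v /mnt/disk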

Things seem okay; I think this was fixed somewhere between 4.4.0-1052-aws
and 4.4.0-1112-aws.

I'm going to mark this bug as Fix Released, but let me know if you still have
problems.