Comment 5 for bug 1817713

In , bugproxy (bugproxy-redhat-bugs) wrote:

------- Comment From <email address hidden> 2017-06-08 13:55 EDT-------
I just ran a fresh installation enabling RAID on a 4k-block disk, and I could not reproduce the problem described in the additional notes ("the issue was originally noted at installation-time"). Here is the information right after the first boot:

[root@rhel-grub ~]# uname -a
Linux rhel-grub 3.10.0-514.el7.ppc64le #1 SMP Wed Oct 19 11:27:06 EDT 2016 ppc64le ppc64le ppc64le GNU/Linux

[root@rhel-grub ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)

[root@rhel-grub ~]# df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/md126      9.0G  1.2G   7.9G   13%  /
devtmpfs        8.0G     0   8.0G    0%  /dev
tmpfs           8.0G     0   8.0G    0%  /dev/shm
tmpfs           8.0G   14M   8.0G    1%  /run
tmpfs           8.0G     0   8.0G    0%  /sys/fs/cgroup
/dev/md127     1018M  145M   874M   15%  /boot
tmpfs           1.6G     0   1.6G    0%  /run/user/0

[root@rhel-grub ~]# cat /proc/mdstat
md126 : active raid1 sdb1[1] sda2[0]
      9423872 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [64KB], 65536KB chunk

md127 : active raid1 sdb2[1] sda3[0]
      1048512 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

[root@rhel-grub ~]# grub2-probe --device /dev/md126 --target fs_uuid
5de99add-1cf2-41f0-ba54-c08067e404d4

[root@rhel-grub ~]# grub2-probe --device /dev/md127 --target fs_uuid
d48f8f83-717b-405e-9e7b-02ba37de959a

[root@rhel-grub ~]# parted /dev/sda u s p
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 2621440s
Sector size (logical/physical): 4096B/4096B
Partition Table: msdos
Disk Flags:

Number  Start     End       Size      Type     File system  Flags
 1      256s      1279s     1024s     primary                boot, prep
 2      1280s     2359295s  2358016s  primary                raid
 3      2359296s  2621439s  262144s   primary                raid

[root@rhel-grub ~]# parted /dev/sdb u s p
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sdb: 2621440s
Sector size (logical/physical): 4096B/4096B
Partition Table: msdos
Disk Flags:

Number  Start     End       Size      Type     File system  Flags
 1      256s      2358271s  2358016s  primary                raid
 2      2358272s  2620415s  262144s   primary                raid

I will do another installation without RAID and then migrate it to RAID, to check whether the problem shows up that way.
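As a rough sketch, the planned migration could look like the following (an assumption on my part, not a tested procedure; the device names and metadata versions are taken from the layout above: super 1.2 for the root array, super 1.0 for /boot). The commands are only echoed here so they can be reviewed before running anything:

```shell
#!/bin/sh
# Sketch: migrate a non-RAID install to RAID1 by building degraded arrays on the
# second disk, copying the data over, then attaching the original disk's partitions.
# run() only prints each command; remove the echo wrapper to actually execute.
run() { echo "+ $*"; }

# Create degraded RAID1 arrays on the spare disk, matching the metadata
# versions seen in /proc/mdstat above (1.2 for root, 1.0 for /boot).
run mdadm --create /dev/md126 --level=1 --raid-devices=2 --metadata=1.2 missing /dev/sdb1
run mdadm --create /dev/md127 --level=1 --raid-devices=2 --metadata=1.0 missing /dev/sdb2

# After copying the filesystems onto the arrays and updating fstab and grub,
# attach the original partitions so the mirrors resync.
run mdadm --add /dev/md126 /dev/sda2
run mdadm --add /dev/md127 /dev/sda3
```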

So, for now, can someone confirm that this problem happens at install time?