I see two problems with multipath and LVM:

1) friendly_names come out as mpath[a-z], not mpath[0-9]
2) hardware RAID arrays (iprconfig) are named 1IBM_IPR-0_XXXXXXXX

This controller has dual SAS ports, which duplicates the disk paths and enables multipath. The setup uses RAID0 for the IPR disks, which gives us an I/O performance boost.

I recreated the RAID array with iprconfig, adding all disks to the RAID0 array. I removed the node from MAAS (since re-commissioning did not refresh the disk settings correctly) and re-added it. Commissioning now recognizes the array as a single 3TB disk. I deployed Xenial and got stuck in the initramfs:

Begin: Running /scripts/local-block ... lvmetad is not active yet, using direct activation during sysinit
  Volume group "mpath0" not found
  Cannot process volume group mpath0
done.
Begin: Running /scripts/local-block ... lvmetad is not active yet, using direct activation during sysinit
  Volume group "mpath0" not found
  Cannot process volume group mpath0
done.
done.
Gave up waiting for root device.  Common problems:
 - Boot args (cat /proc/cmdline)
   - Check rootdelay= (did the system wait long enough?)
   - Check root= (did the system wait for the right device?)
 - Missing modules (cat /proc/modules; ls /dev)
ALERT!  /dev/mapper/mpath0-part2 does not exist.  Dropping to a shell!
[  258.070061] hidraw: raw HID events driver (C) Jiri Kosina
[  258.071998] usbcore: registered new interface driver usbhid
[  258.072047] usbhid: USB HID core driver

BusyBox v1.22.1 (Ubuntu 1:1.22.0-15ubuntu1) built-in shell (ash)
Enter 'help' for a list of built-in commands.

(initramfs)
(initramfs) cat /proc/cmdline
root=/dev/mapper/mpath0-part2 ro disk-detect/multipath/enable=true net.ifnames=1 biosdevname=0

(initramfs) ls /dev/mapper
control
1IBM_IPR-0_5EC63900000001E0-part1
1IBM_IPR-0_5EC63900000001E0-part2

(initramfs) cat /etc/multipath.conf
# This file was created by curtin while installing the system.
defaults {
	user_friendly_names yes
}

(initramfs) cat /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/1IBM_IPR-0_5EC63900000001E0/

(initramfs) /sbin/blkid
/dev/sda2: LABEL="root" UUID="a4fa6afa-c553-4cfc-ae6f-82e0637edd5a" TYPE="ext4" PARTUUID="723d18ee-be3b-4e28-84fa-2d444d69c174"
/dev/sdb2: LABEL="root" UUID="a4fa6afa-c553-4cfc-ae6f-82e0637edd5a" TYPE="ext4" PARTUUID="723d18ee-be3b-4e28-84fa-2d444d69c174"
/dev/mapper/1IBM_IPR-0_5EC63900000001E0-part2: LABEL="root" UUID="a4fa6afa-c553-4cfc-ae6f-82e0637edd5a" TYPE="ext4" PARTUUID="723d18ee-be3b-4e28-84fa-2d444d69c174"
/dev/sda1: PARTUUID="f5089e1b-cd09-4018-a0b4-6058474c2caa"
/dev/sdb1: PARTUUID="f5089e1b-cd09-4018-a0b4-6058474c2caa"
/dev/dm-0: PTUUID="5df0b342-bbc9-478d-b75a-e77308ceaa26" PTTYPE="gpt"
/dev/mapper/1IBM_IPR-0_5EC63900000001E0-part1: PARTUUID="f5089e1b-cd09-4018-a0b4-6058474c2caa"

(initramfs) /sbin/multipath -ll
1IBM_IPR-0_5EC63900000001E0 dm-0 IBM,IPR-0 5EC63900
size=4.2T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 0:2:0:0 sda 8:0  active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  `- 1:2:0:0 sdb 8:16 active ready running

---

When I manually fixed the mpath0 problem, I see:

cat /etc/default/grub.d/50-curtin-multipath.cfg
GRUB_DEVICE=/dev/mapper/mpath0-part3
GRUB_DISABLE_LINUX_UUID=true

If MAAS is just collecting the info and not "guessing" mpath0, then maybe the mpath0 name comes from curtin?
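For reference, this is roughly what I mean by "manually fixed": a sketch of the workaround, assuming the root cause is the mismatch between the mpath0 alias curtin wrote into the GRUB config and the 1IBM_IPR-0_... name the map actually gets. The WWID and the choice of alias here are taken from my system; adjust for yours.

# Pin the expected alias to the array's WWID so /dev/mapper/mpath0 exists
# (multipaths/alias is standard multipath.conf syntax):
cat >> /etc/multipath.conf << 'EOF'
multipaths {
    multipath {
        wwid  "1IBM_IPR-0_5EC63900000001E0"
        alias mpath0
    }
}
EOF

# Alternatively, point GRUB at the name the device actually has:
# sed -i 's|mpath0|1IBM_IPR-0_5EC63900000001E0|' /etc/default/grub.d/50-curtin-multipath.cfg

# Rebuild the boot config and the initramfs so both pick up the change:
update-grub
update-initramfs -u

Either way, the installed system boots afterwards, which is why I suspect the bug is only in the name curtin records, not in the multipath setup itself.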