Lucid installer creates bad RAID setup
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
debian-installer (Ubuntu) | New | Undecided | Unassigned |
Bug Description
Binary package hint: debian-installer
Installed 10.04 Beta 2 onto two 500 GB hard disks. Partitions were as follows:
/dev/sda1 1GB
/dev/sda2 The rest
/dev/sdb1 1GB
/dev/sdb2 The rest
/dev/sda1 and /dev/sdb1 were combined into /dev/md0
/dev/md0 was formatted as ext4 and assigned to be /boot.
/dev/sda2 and /dev/sdb2 were combined into /dev/md1
/dev/md1 was selected as physical device for LVM
LVM was set up with all the other expected volumes: swap, /, /opt, /var, /home, /tmp
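For reference, the layout described above corresponds roughly to the following commands (a sketch only, using the device names from this report; the installer performs equivalent steps through its partitioner, and the volume group name and sizes below are illustrative assumptions):

```shell
# Sketch of the intended layout. Requires root and destroys data --
# for illustration, not a reproduction script.

# RAID1 for /boot from the two 1 GB partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# RAID1 for LVM from the two large partitions
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

mkfs.ext4 /dev/md0             # /boot filesystem

pvcreate /dev/md1              # whole array as an LVM physical volume
vgcreate vg0 /dev/md1          # "vg0" is a hypothetical volume group name
lvcreate -L 2G  -n swap vg0    # sizes here are illustrative
lvcreate -L 10G -n root vg0
# ...plus /opt, /var, /home, /tmp as further logical volumes
```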
After reboot, splash screen complained about not being able to mount /boot.
Dropped to a prompt to find:
- /dev/md0 had not been started
- /dev/md1 had been started
Started /dev/md0
mdadm --detail /dev/md0 reported that disk 0 was removed and disk 1 was /dev/md1p1
mdadm --detail /dev/md1 reported that disk 0 was /dev/sda and disk 1 was /dev/sdb
I suspect that the installer is somehow not using the components it was told to use: /dev/md1 appears to be built from components that overlap with /dev/md0's, which would corrupt the data on /dev/md0 and render it unmountable.
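The suspected overlap can be checked from the rescue prompt by reading each component's RAID superblock directly (a sketch; `--examine` inspects per-device metadata, as opposed to `--detail`, which queries the assembled array):

```shell
# Compare what each device's RAID superblock claims to belong to.
# If /dev/md1 was mistakenly built from the whole disks (/dev/sda, /dev/sdb)
# instead of /dev/sda2 and /dev/sdb2, its metadata overlaps md0's components.
mdadm --examine /dev/sda /dev/sdb /dev/sda1 /dev/sdb1 /dev/sda2 /dev/sdb2

# Cross-check against the assembled arrays
mdadm --detail /dev/md0
mdadm --detail /dev/md1
```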
Retested under the RC.
Additional info:
Dropping to a console under the installer shows correct output from mdadm --detail /dev/mdX, so this appears to be a problem only after reboot, as if the installer is not committing the array configuration correctly when finishing up.
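If the arrays look correct under the installer but wrong after reboot, one thing worth checking (an assumption on my part, not a confirmed fix) is whether the installed system's mdadm.conf and initramfs actually record the arrays the installer created:

```shell
# From the installed system (or a chroot from the installer's console):
cat /etc/mdadm/mdadm.conf        # should contain ARRAY lines for md0 and md1

# Regenerate the ARRAY lines from the currently assembled arrays,
# then rebuild the initramfs so the boot environment sees them.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```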