I submitted this in reference to 990913, but now think that relates to the mdadm/udev race condition discussed in 917250. My issue is that attached, degraded devices which should not be required for boot are preventing my setup from booting:
I have a similar problem, but suspect the issue I'm having is down to code in the kernel, options unique to my Ubuntu/kernel config, or something in my initramfs. /proc/version reports 3.2.0-30-generic.
In my case, I have 5 disks in my system: 4 are on a backplane connected directly to the motherboard, and the 5th is connected where the CD drive would normally be.
These come up on Linux as /dev/sd[a-d] (the backplane disks) and /dev/sde (the 5th disk).
I have installed the OS entirely on the 5th disk, and configured grub/fstab to identify all partitions by UUID. fstab does not reference any disks in /dev/sd[a-d].
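For illustration, the root entry in my fstab looks roughly like this (the UUID below is made up):

    # /etc/fstab -- OS lives entirely on /dev/sde, referenced by UUID
    UUID=1234abcd-5678-90ef-1234-567890abcdef  /  ext4  errors=remount-ro  0  1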
The intention is to set up software RAID across the sd[a-d] disks, presented as /dev/md0.
I created a RAID5 array with 3 active disks plus a spare, and one of the disks died. So I have a legitimately degraded array, which the OS should not need in order to boot.
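For reference, the array was created with something along these lines (reconstructed from memory, so treat the exact invocation as approximate):

    sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        --spare-devices=1 /dev/sda /dev/sdb /dev/sdc /dev/sdd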
However, it won't boot regardless of whether 'bootdegraded=true' is set on the kernel command line.
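In case it matters, I set 'bootdegraded=true' via /etc/default/grub, roughly as follows ('quiet splash' is just the stock Ubuntu default):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash bootdegraded=true"

    # then regenerate grub.cfg:
    sudo update-grub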
I'm not sure editing the mdadm functions will help, as I really don't want any md functions to run at initramfs time; they can all wait until after the system has booted.
Any thoughts on how I can turn off mdadm completely from initramfs?
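The closest thing I've found so far is the INITRDSTART setting in /etc/default/mdadm, but I'm not certain this is the supported way to do it, so treat this as a guess:

    # /etc/default/mdadm -- ask the initramfs not to start any arrays
    INITRDSTART='none'

    # then rebuild the initramfs so the change takes effect:
    sudo update-initramfs -u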