Boot hang, apparently blocking on SW RAID rebuild
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| initramfs-tools (Ubuntu) | Fix Released | Medium | Unassigned | |
Bug Description
I am experiencing this issue on a new Sun Fire V40z server in a well-cooled machine room. (Please refer to http://
Hardware:
- Quad-CPU amd64 -- AMD Opteron(tm) Processor 852
- 32 GB memory
- 2x SCSI disks

RAID config:
- /dev/md0 (RAID1)
  - Mount point: /boot
  - Partitions: /dev/sda1, /dev/sdb1
- /dev/md1 (RAID1)
  - Physical device for LVM volume group vg0
```
$ mount | grep vg0
/dev/
/dev/
/dev/
```
I didn't try the "acpi=off" option, but was able to temporarily resolve the situation in this way:
- multiple boots failed with the hoary 2.6.10- kernel:
  - normal boot (just hit <Enter>): boot failed
  - appending "init=/bin/bash": boot failed
  - appending "single": boot failed
- booted from a "live" CD; "watch cat /proc/mdstat" showed /dev/md1 re-syncing
- after the re-sync was complete, I was able to reboot with the 2.6.12.2-bef kernel (SMP kernel) without incident
- then tried booting again from 2.6.10-
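The interactive check described above ("watch cat /proc/mdstat" until the re-sync finished) can be sketched as a tiny script. This is only an illustration, not part of the affected system: the helper name `md_resync_active` and its optional file argument are invented here so the logic can be exercised against sample mdstat text.

```shell
# Sketch only: detect an in-progress md rebuild the way "watch cat
# /proc/mdstat" was used above. md_resync_active and the file argument
# are inventions of this example, not from the bug report.
md_resync_active() {
    # While rebuilding, mdstat contains a progress line such as:
    #   [=>...........]  resync = 12.5% (123456/987654) finish=3.2min
    grep -qE 'resync|recovery' "${1:-/proc/mdstat}"
}

# Poll until the rebuild completes before attempting another reboot:
# while md_resync_active; do sleep 10; done
```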
Another point of potential interest is the file "/script" on the initrd, as this
is where the RAID arrays are assembled. It contains the following for my system:
```
mdadm -A /devfs/md/1 -R -u 55f3a23c:
/dev/sdb2
mkdir /devfs/vg0
mount_tmpfs /var
if [ -f /etc/lvm/lvm.conf ]; then
  cat /etc/lvm/lvm.conf > /var/lvm.conf
fi
mount_tmpfs /etc/lvm
if [ -f /var/lvm.conf ]; then
  cat /var/lvm.conf > /etc/lvm/lvm.conf
fi
mount -nt devfs devfs /dev
vgchange -a y vg0
umount /dev
umount -n /var
umount -n /etc/lvm
ROOT=
mdadm -A /devfs/md/1 -R -u 55f3a23c:
/dev/sdb2
```
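Since the hang appears to coincide with the array assembled by this script still rebuilding, one way to inspect the state on a running system is via sysfs rather than /proc/mdstat. The sketch below is an assumption for illustration: the sysfs path shown (e.g. /sys/block/md1/md/sync_action) is standard for the md driver, but the helper name `report_sync_state` and its file argument are invented here so the logic is testable without real hardware.

```shell
# Sketch only: classify an md array's rebuild state from a sysfs
# sync_action file (e.g. /sys/block/md1/md/sync_action). The function
# name and file argument are inventions of this example.
report_sync_state() {
    # sync_action reads "idle" when clean, "resync"/"recover" while rebuilding
    state=$(cat "$1" 2>/dev/null || echo unknown)
    case "$state" in
        idle)            echo "clean" ;;
        resync|recover)  echo "rebuilding" ;;
        *)               echo "unknown" ;;
    esac
}
```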
The machine is in use now, and I am unable to perform further tests on it. However, I will be receiving another one soon (identical, I believe) and will be able to re-create this problem and do further testing on it. Please let me know if there are tests you would like me to perform.
Changed in initramfs-tools:
assignee: adconrad → nobody
Can you describe the way in which the boot failed?