(Oops, apparently hit the wrong key... continuing the previous comment.)

...shut down with Alt-SysRq REISUB. This has no effect whatsoever: the screen doesn't change, and the drive activity light does nothing. Finally, after stewing for a while longer, I hold down the power switch until I hear all the fans spin down. Then I boot up.

I see no error messages. Everything seems to be working fine, except for the part where I have to boot three or four times before the machine actually gets past the GRUB splash screen and arrives at the Ubuntu splash screen. After that, everything looks great: I log in, get to Unity, and never see any error message go by.

Then the first thing I do is start up palimpsest and check the drives and arrays. The drives are always fine, but generally about half of the arrays are degraded. Sometimes it will start re-syncing one of the arrays all by itself; usually it starts with an array I don't care so much about, and I can't do anything about the ones with more important data until later, because apparently palimpsest can only change one RAID-related thing at a time. Which means that sometimes I have to wait many, many hours before I can start working on the next array.

The worst I've seen was the time it detached two drives from my RAID6 array. Very scary. I have one RAID6 array, one RAID10 array, and several RAID1 arrays, and I think all of them have degraded at one time or another; this bug seems to be an equal-opportunity degrader. Usually I find two or three of the larger arrays degraded, plus several detached spares on other arrays.

This system has six 2 TB drives. I think some of them have 512-byte sectors and some have 4096-byte sectors, but how the heck do you tell, anyway? All of them use GPT partitions, and care has been taken to align all partitions on 1 MB boundaries (palimpsest actually reports it if it finds alignment issues).
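It turns out you can ask the kernel directly. A quick sketch (using /dev/sda as an example; substitute each of your drives):

    # logical and physical sector sizes, as the kernel sees them
    cat /sys/block/sda/queue/logical_block_size
    cat /sys/block/sda/queue/physical_block_size

    # the same two numbers via util-linux
    sudo blockdev --getss --getpbsz /dev/sda

    # hdparm asks the drive itself
    sudo hdparm -I /dev/sda | grep -i 'sector size'

Note that 4K drives usually still report 512-byte logical sectors for compatibility; the physical size is the one that matters for alignment.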
The system has two SATA controllers. I put four drives on one controller and two on the other, and for the RAID1 and RAID10 arrays I made sure there are no mirrors with both halves on the same controller, or both halves on drives made by the same company. Except that isn't really true any more: whenever something gets degraded and I have to re-attach and re-sync, the array members often get rearranged. I think most of my spares are now concentrated on a couple of drives, which isn't really what I had planned. For the duration, I've given up on rearranging the drives to my liking.

In fact, for the duration, I've given up on this system. I've been gradually moving data off it onto another system running Maverick, and that one will continue to run Maverick, because it doesn't try to rearrange my data storage every time I look at it sideways. (Very gradually, since NFS has been broken for the better part of a year...) This nice expensive Oneiric system will be dedicated to the task of rebooting, re-attaching, and re-syncing until Oneiric starts to behave itself. I am also planning to install Precise (multiboot) so I can test that too; attempting an OS install while partitions are borking themselves on every other reboot sounds like fun.

BTW, I watched the UDS "Software RAID reliability" session video from last Tuesday:

https://www.youtube.com/watch?v=RpC-dkgN37M&list=UUWUDCz-Q0m4qK7lkK4CevQA&index=2&feature=plcp

I was quite pleased to see that people are working on these problems. (But I was particularly surprised to learn how many people there were completely unaware that Ubuntu rearranges device names (i.e. /dev/sda etc.) at each reboot. I noticed that a really long time ago.)
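For anyone else surprised by that: the /dev/sdX names were never guaranteed to be stable across boots, and the stable handles live elsewhere. md itself identifies array members by UUID, not by device name. A sketch (the array and device names below are just examples):

    # persistent symlinks that follow the hardware,
    # whatever /dev/sdX it lands on this boot
    ls -l /dev/disk/by-id/
    ls -l /dev/disk/by-uuid/

    # see which members mdadm currently thinks belong to an array
    sudo mdadm --detail /dev/md0

    # print ARRAY lines keyed by UUID, suitable for mdadm.conf,
    # so assembly never depends on /dev/sdX ordering
    sudo mdadm --detail --scan

The shuffling itself is expected behavior, as far as I can tell; it only turns into a problem when something like this bug is kicking members out of arrays.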
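One more aside, since the waiting was the most painful part for me: the one-RAID-thing-at-a-time restriction is palimpsest's, not the kernel's. From a terminal you can inspect and repair arrays directly. A sketch, with /dev/md0 and /dev/sdb1 as placeholders:

    # status of every md array, including resync progress
    cat /proc/mdstat

    # which members an array has, and which are faulty or missing
    sudo mdadm --detail /dev/md0

    # put a kicked-out member back; this may fall back to a full
    # resync if the member's event count has fallen too far behind
    sudo mdadm /dev/md0 --re-add /dev/sdb1

    # per-device resync throttles, in KiB/s
    cat /proc/sys/dev/raid/speed_limit_min
    cat /proc/sys/dev/raid/speed_limit_max

Arrays on independent disks resync in parallel; arrays sharing a physical disk take turns (/proc/mdstat shows the waiting ones as "resync=DELAYED"), which may be part of what palimpsest is surfacing.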
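And on Alt-SysRq REISUB doing nothing: it's worth checking, before the next lockup, whether the magic SysRq key is even enabled, since some configurations disable or restrict it. A sketch:

    # 1 = all SysRq functions allowed, 0 = disabled,
    # other values are a bitmask of permitted functions
    cat /proc/sys/kernel/sysrq

    # enable everything for this boot
    echo 1 | sudo tee /proc/sys/kernel/sysrq

    # make it persistent across reboots
    echo 'kernel.sysrq = 1' | sudo tee -a /etc/sysctl.conf

If it is already 1 and REISUB still does nothing, the machine is probably too wedged to service keyboard interrupts at all, and the power switch really is the only option left.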