Comment 11 for bug 107249

Dwayne Nelson (edn2) wrote:

The latest kernel update has failed. This time the message I receive is "Fatal: First sector of /dev/sda1 doesn't have a valid boot signature". This does not appear to have anything to do with the RAID - /proc/mdstat shows all arrays are up:

  Personalities : [raid1] [raid6] [raid5] [raid4]
  md1 : active raid5 sda2[0] sdd2[2] sdb2[1]
        468856832 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

  md0 : active raid1 sda1[0] sdd1[2] sdb1[1]
        9767424 blocks [3/3] [UUU]

  unused devices: <none>
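For what it's worth, that "valid boot signature" error refers to the 0x55 0xAA magic bytes at offset 510 of the sector, which the boot loader insists on. A quick way to check for them - sketched here against a scratch image file; point dd at /dev/sda1 (as root) to check the real partition:

```shell
# Build a 512-byte scratch image with a valid boot signature
# (0x55 0xAA at offset 510), standing in for /dev/sda1.
dd if=/dev/zero of=test.img bs=512 count=1 2>/dev/null
printf '\x55\xaa' | dd of=test.img bs=1 seek=510 conv=notrunc 2>/dev/null

# Read the last two bytes of the first sector; "55aa" means the
# signature the boot loader expects is present.
sig=$(dd if=test.img bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' ')
echo "$sig"
```

If the same check against the real /dev/sda1 prints anything other than 55aa, the partition's first sector really is missing the signature, independent of the RAID state.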

I am not sure why the drives keep showing up in a different order - note that last time sdc, sdb, and sdd were listed, while this time it is sda, sdd, and sdb. I hope I don't have to look forward to a different problem every time the kernel is updated ...
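As an aside, the shifting sdX letters are expected: the kernel assigns them in detection order, and md assembles arrays by the UUID stored in each member's superblock, not by device name, which is why the arrays come up clean regardless. A sketch of an /etc/mdadm/mdadm.conf that pins the arrays by UUID (the UUIDs below are placeholders - the real values come from `mdadm --detail --scan`):

```
# /etc/mdadm/mdadm.conf (sketch; UUIDs are placeholders --
# substitute the output of `mdadm --detail --scan`)
DEVICE partitions
ARRAY /dev/md0 UUID=00000000:00000000:00000000:00000000
ARRAY /dev/md1 UUID=00000000:00000000:00000000:00000000
```

With the arrays pinned this way, the changing detection order should be cosmetic; only tools that reference raw sdX names (such as a boot loader config) would care.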