Comment 3 for bug 1209423

A1an (alan-b) wrote:

Answering questions from the above linked bug:

> It's possible that the patches which were linked are not related. They're not in the delta between the two versions which are listed.

I see. I did not realize that the kernel versions (which use a dot before the last digits) are different from the Ubuntu ones (which use a '-').

> 1. You mention that this occurs on various upgrades, could you confirm that these would be various different kernel versions over time, and not ONLY on the 46 to 47 update?

Indeed, I tested each new update from 47 to 50 (the current one), and the issue is present on all of them, while booting into 46 works fine.

> 2. Have you seen this when doing any other reboots (not after system updates)?
> (such as might occur if this was a boot race - which could only shows up because the first boot after an update performs additional operations)

No, booting with 46 always brings the md0 array up, while 47 to 50 always fail to do so.

> 3. When this occurs you mention that you see the 'M' for manual recovery, do you also see the MD "degraded raid" prompt and if so how do you respond?

I did not try manual recovery (I did not want to break the array by manipulating it in a potentially "unsafe" environment).

> 4. On taking the 'M' that option LVM slices are missing which are served from the RAID, can you provide the following information for missing LVs (backed by md0):

Unlike the original reporter of the other bug, I do not have LVM on my RAID device. Only one of the sub-questions applies, however:

> E) what is the actual state of md0 as shown in 'cat /proc/mdstat'?

md0 is not listed at all in /proc/mdstat
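
For reference, a minimal sketch of the read-only commands that can be run (as root) from the recovery shell to check the array state without modifying anything. Device names are from my setup, and /dev/sdb6 below is just a placeholder for the second member, since only /dev/sda6 is named in this report:

  # Show which arrays the kernel has assembled (md0 is absent here on 47-50)
  cat /proc/mdstat

  # List the arrays mdadm can detect from the on-disk metadata, without assembling them
  mdadm --examine --scan

  # Dump the metadata of each individual member for comparison
  mdadm --examine /dev/sda6
  mdadm --examine /dev/sdb6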

Furthermore, I am attaching (mdadm-examine-broken.txt) the output of mdadm --examine on one of the RAID members (/dev/sda6). It looks quite odd to me: there is a long series of "failed" device entries where there should be only two members in the array.
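
For comparison, the same information can be captured while booted into the working 46 kernel. A sketch of how I could collect it (the output file names are just my own suggestions, not existing attachments):

  # While booted into the working -46 kernel
  cat /proc/mdstat          > mdstat-working.txt
  mdadm --examine /dev/sda6 > mdadm-examine-working.txt
  mdadm --detail /dev/md0   > mdadm-detail-working.txt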