Comment 13 for bug 1190295

bl8n8r (bl8n8r-gmail) wrote:

:: could you confirm that these would be various different kernel versions over time, and not ONLY on the 46 to 47 update?

That's probable. IIRC, my RAID1 MD devices have been blowing up inexplicably over the past year, and I tend to stay quite current on patches, applying them each week regardless.

:: Have you seen this when doing any other reboots (not after system updates)?

No, it really seems related to kernel updates. The system I reported on had been pulled out of production for hardware updates, so it was powered off. In the time it took to replace the hardware and test the new system, new kernel patches had come out, so I suspected things would break as soon as I applied them.

:: do you also see the MD "degraded raid" prompt and if so how do you respond?

No, I only ever see the "Continue to wait; or Press S to skip mounting or M for manual recovery". Nothing about MD degraded.

:: are the device links in /dev/<vgname>/<lvname> present?
:: are the LVs listed in the output of 'lvs' and what are their state?
:: are the PVs which are backed by md0 present in 'pvs' and what are their state?

I don't remember about /dev/<vgname>. The output of lvs, vgs and pvs was sporadic: sometimes they complained about leaked memory, sometimes they displayed my LVs and PVs. It was unstable.
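For reference, the three checks asked about can be scripted roughly as below. This is a sketch, not from the original report: "vg0" is a placeholder volume-group name, and the fallback messages are mine.

```shell
#!/bin/sh
# Rough sketch of the diagnostic checks asked about above.
# "vg0" is a placeholder volume-group name; substitute your own.
VG=${VG:-vg0}

# 1. Are the /dev/<vgname>/<lvname> links present?
ls -l "/dev/$VG" 2>/dev/null || echo "no /dev/$VG directory (links missing, or VG name differs)"

# 2. Are the LVs listed, and what is their state?
#    The 5th character of lv_attr is 'a' when the LV is active.
lvs -o lv_name,lv_attr --noheadings 2>/dev/null || echo "lvs failed or not installed"

# 3. Is the md0-backed PV present, and what is its state?
pvs -o pv_name,vg_name,pv_attr --noheadings 2>/dev/null || echo "pvs failed or not installed"
```

Each step falls back to an explanatory message, so the script always tells you something even on a box without the LVM tools.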

:: are the volumes present in the 'dmsetup ls' output?

I've never used 'dmsetup ls'.
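For anyone else following along: the check being asked about is a one-liner. In a healthy state each active LV shows up in the list as <vgname>-<lvname>, so a missing entry would match the symptoms above. (A sketch; needs the device-mapper tools and usually root.)

```shell
#!/bin/sh
# List the device-mapper tables the kernel currently knows about.
# Each active LV appears as <vgname>-<lvname>.
dmsetup ls 2>/dev/null || echo "dmsetup unavailable (not installed, or needs root)"
```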

:: what is the actual state of md0 as shown in 'cat /proc/mdstat'?

In the unstable state, after rebooting with the patched kernel and RAID+LVM borked, mdstat would sometimes say something about the device not existing, and then issuing it again would show "/dev/md_d0" -- which is not the correct MD device. Sometimes I had to issue "mdadm -S /dev/md_d0" and then "mdadm --examine --scan" to restart it. I have never seen Linux software RAID fail this badly; from experience MD has historically been bulletproof, and I have never had such problems before. Hopefully it's fixed soon.
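The recovery sequence described above can be sketched as follows. One caveat I'd add: `mdadm --examine --scan` only prints what it finds on the member devices, while `mdadm --assemble --scan` is the variant that actually reassembles arrays. The `md_d0` name comes from this report; the destructive commands are left commented out and should only be run as root after confirming your own device names.

```shell
#!/bin/sh
# Sketch of the manual recovery reported above (destructive commands
# left commented; run as root only after checking your own device names).
MDSTAT=${MDSTAT:-/proc/mdstat}   # overridable so the detection step can be tested

if grep -q '^md_d0' "$MDSTAT"; then
    echo "stray auto-assembled array md_d0 detected"
    # Stop the mis-assembled array:
    #   mdadm -S /dev/md_d0
    # Print what mdadm can find on the member devices:
    #   mdadm --examine --scan
    # Reassemble the real arrays:
    #   mdadm --assemble --scan
else
    echo "no md_d0 entry in $MDSTAT"
fi
```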