RAID fails after suspend

Reported by sander on 2011-07-01
This bug affects 2 people
Affects: Ubuntu
Importance: Undecided
Assigned to: Unassigned

Bug Description

After installing 11.04, resuming from standby (suspend to RAM) makes RAID1 fail with the following error message:

md/raid1:md1: Disk failure on sda5, disabling device.
md/raid1:md1: Operation continuing on 1 devices.
md/raid1:md2: Disk failure on sda6, disabling device.
md/raid1:md2: Operation continuing on 1 devices.
journal commit I/O error

The error is reproducible; I tried it a second time after recreating the array and doing a fresh install. The system works fine until I suspend to RAM and try to resume. The system wakes up, but the RAID is broken afterwards.
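For anyone hitting this, a degraded mirror is visible in /proc/mdstat as a member-count mismatch. A minimal sketch of what to look for (the sample file content below is hypothetical, modeled on the reporter's md1; on a live system you would read /proc/mdstat directly):

```shell
# Hypothetical /proc/mdstat snapshot after a failed resume; the block count
# is illustrative, not taken from the reporter's system.
cat > /tmp/mdstat.sample <<'EOF'
md1 : active raid1 sdb5[1] sda5[0](F)
      488254464 blocks [2/1] [_U]
EOF

# (F) marks a member the kernel kicked out as faulty; [2/1] means the array
# wants 2 members but only 1 is active, i.e. the mirror is degraded.
if grep -q '\[2/1\]' /tmp/mdstat.sample; then
    echo "md1 is degraded"
fi
```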

Setup:
Encrypted root and home and RAID1:
/dev/md0 (/dev/sda3 + /dev/sdb3): /boot
/dev/md1 (/dev/sda5 + /dev/sdb5): /dev/mapper/root -> /
/dev/md2 (/dev/sda6 + /dev/sdb6): /dev/mapper/home -> /home

sander (sander2) wrote :

After a reboot using the Live-CD, md2 is rebuilding and md1 is not detected. Trying to assemble md1 manually fails with:

mdadm: no recogniseable superblock on /dev/sdb5
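A general mdadm recovery path that can apply in this situation (a sketch, not something confirmed in this thread: the device names match the reporter's layout, and because these commands act on real block devices the script below only prints them via a dry-run prefix):

```shell
#!/bin/sh
# Dry-run sketch: RUN=echo makes every command print instead of execute.
# On a real system, as root and with backups, set RUN to empty.
RUN="echo"

# Inspect the surviving member's superblock before touching anything.
$RUN mdadm --examine /dev/sda5

# Start the array degraded from the good member alone.
$RUN mdadm --assemble /dev/md1 /dev/sda5 --run

# Re-add the kicked member; mdadm rewrites its superblock and resyncs.
$RUN mdadm /dev/md1 --add /dev/sdb5
```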

Fabio Marconi (fabiomarconi) wrote :

Hello
Are you still experiencing this problem with the latest release, Oneiric Ocelot?
Thanks
Fabio
---
Ubuntu Bug Squad volunteer triager
http://wiki.ubuntu.com/BugSquad

Changed in ubuntu:
status: New → Incomplete
sander (sander2) wrote :

I have not updated yet but will do so ASAP and report back.

Changed in ubuntu:
status: Incomplete → New
status: New → Incomplete
Launchpad Janitor (janitor) wrote :

[Expired for Ubuntu because there has been no activity for 60 days.]

Changed in ubuntu:
status: Incomplete → Expired
Justin Grevich (jgrevich) wrote :

I too have experienced this issue. I have an internal and an external software RAID array. After suspend, the external array fails and starts to rebuild (see the syslog excerpt below), whereas the internal array is fine.

Mar 31 22:58:56 localhost kernel: [ 3667.077378] md/raid:md126: Disk failure on sdq1, disabling device.
Mar 31 22:58:56 localhost kernel: [ 3667.077378] md/raid:md126: Operation continuing on 9 devices.
Mar 31 22:58:56 localhost kernel: [ 3667.077386] md/raid:md126: Disk failure on sdr1, disabling device.

Are there any additional logs that would be helpful? If I remember correctly, the mdadm problem is a result of the drives getting new device names (e.g. /dev/sdu1) after waking from suspend.
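If device renaming after resume really is the cause, one standard mitigation is to identify arrays by UUID in mdadm.conf, so the member device paths no longer matter. A hypothetical fragment (the UUID is a placeholder; substitute the one reported by `mdadm --detail --scan` on your system):

```
# /etc/mdadm/mdadm.conf (fragment) -- placeholder UUID, not from this report.
# Arrays pinned by UUID survive renames such as sdq1 coming back as sdu1.
DEVICE partitions
ARRAY /dev/md126 UUID=f9a8c7b6:12345678:9abcdef0:13572468
```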

Changed in ubuntu:
status: Expired → Opinion
Dimitri John Ledkov (xnox) wrote :

Why was this marked Opinion? There is clearly a bug here; it just hasn't been triaged.

There are two things here:
* either USB devices are slow to come back and udev doesn't fire quickly enough
* or devices get new names, so mdadm marks the drives as failed

I will try to experiment with resuming.

Changed in ubuntu:
status: Opinion → Confirmed
Fabio Marconi (fabiomarconi) wrote :

Hello Dimitri John Ledkov, or anyone else affected.
Please reproduce the bug, then run in a terminal:

ubuntu-bug linux

A new report with all the needed log files will be generated.
I am closing this one, as it lacks log files.
Thanks
Fabio
---
Ubuntu Bug Squad volunteer triager
http://wiki.ubuntu.com/BugSquad

Changed in ubuntu:
status: Confirmed → Invalid