RAID1 Test Failed: Devices need to be readded manually

Bug #791454 reported by Adam Sommer
This bug affects 2 people
Affects           Status     Importance  Assigned to     Milestone
mdadm (Ubuntu)    Opinion    High        Ubuntu Server
  Oneiric         Won't Fix  High        Ubuntu Server

Bug Description

When testing RAID1 on x86, during step 16 'i', not all arrays came back after reattaching the second disk. Two devices needed to be readded manually. I am reporting this bug as instructed in step "i". I was testing using a KVM VM with two qcow2 disk images and virtio.

If there are any further details I can provide, please let me know.

Same setup failed for x86_64.
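
For reference, the test environment described above can be reproduced roughly as follows (disk image names, sizes and the ISO filename are illustrative, not the exact ones used):

$ # two virtio-backed qcow2 disks for the RAID1 members (names and sizes are examples)
$ qemu-img create -f qcow2 disk0.qcow2 2G
$ qemu-img create -f qcow2 disk1.qcow2 2G
$ # boot the server installer with both disks attached via virtio
$ kvm -m 512 -drive file=disk0.qcow2,if=virtio -drive file=disk1.qcow2,if=virtio -cdrom oneiric-server-amd64.iso -boot d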

tags: added: iso-testing
Revision history for this message
Adam Sommer (asommer) wrote :

Alpha1 server x86_64 also failed to automatically readd the devices to the array.

description: updated
summary: - Oneiric Alpha1 Server x86 RAID1 Test Failed
+ Oneiric Alpha1 Server x86 and x86_64 RAID1 Test Failed
affects: ubuntu → mdadm (Ubuntu)
Changed in mdadm (Ubuntu):
importance: Undecided → High
summary: - Oneiric Alpha1 Server x86 and x86_64 RAID1 Test Failed
+ Oneiric Alpha1 Server x86 and x86_64 RAID1 Test Failed: Devices need to
+ be readded manually
Revision history for this message
Brian Murray (brian-murray) wrote : Re: Oneiric Alpha1 Server x86 and x86_64 RAID1 Test Failed: Devices need to be readded manually

Could you provide a link to the test case you were using? Thanks in advance.

tags: added: oneiric
Revision history for this message
Jean-Baptiste Lallement (jibel) wrote :

This is the server RAID1 test:
http://testcases.qa.ubuntu.com/Install/ServerRAID1

---
Ubuntu Bug Squad volunteer triager
http://wiki.ubuntu.com/BugSquad

Dave Walker (davewalker)
tags: added: server-ors
Changed in mdadm (Ubuntu):
assignee: nobody → Ubuntu Server Team (ubuntu-server)
Revision history for this message
Jean-Baptiste Lallement (jibel) wrote :

Still an issue with Alpha2

Changed in mdadm (Ubuntu):
status: New → Confirmed
summary: - Oneiric Alpha1 Server x86 and x86_64 RAID1 Test Failed: Devices need to
- be readded manually
+ RAID1 Test Failed: Devices need to be readded manually
Revision history for this message
James Page (james-page) wrote :

dmesg info:

Jul 7 12:19:46 ubuntu kernel: [ 2.547818] md: bind<vdb1>
Jul 7 12:19:46 ubuntu kernel: [ 2.561265] md: bind<vdb3>
Jul 7 12:19:46 ubuntu kernel: [ 2.575893] 8139too: 8139too Fast Ethernet driver 0.9.28
Jul 7 12:19:46 ubuntu kernel: [ 2.586221] md: bind<vdb2>
Jul 7 12:19:46 ubuntu kernel: [ 2.609279] md: bind<vda2>
Jul 7 12:19:46 ubuntu kernel: [ 2.614517] md: bind<vda1>
Jul 7 12:19:46 ubuntu kernel: [ 2.616367] md: kicking non-fresh vdb1 from array!
Jul 7 12:19:46 ubuntu kernel: [ 2.617030] md: unbind<vdb1>
Jul 7 12:19:46 ubuntu kernel: [ 2.617515] md: export_rdev(vdb1)
Jul 7 12:19:46 ubuntu kernel: [ 2.619612] bio: create slab <bio-1> at 1
Jul 7 12:19:46 ubuntu kernel: [ 2.623253] md/raid1:md1: active with 2 out of 2 mirrors
Jul 7 12:19:46 ubuntu kernel: [ 2.623882] md1: detected capacity change from 0 to 321900544
Jul 7 12:19:46 ubuntu kernel: [ 2.624530] md/raid1:md0: active with 1 out of 2 mirrors
Jul 7 12:19:46 ubuntu kernel: [ 2.625143] md0: detected capacity change from 0 to 1499451392
Jul 7 12:19:46 ubuntu kernel: [ 2.642269] md1: unknown partition table
Jul 7 12:19:46 ubuntu kernel: [ 2.642972] md0: unknown partition table
Jul 7 12:19:46 ubuntu kernel: [ 2.647905] md: bind<vda3>
Jul 7 12:19:46 ubuntu kernel: [ 2.650173] md: kicking non-fresh vdb3 from array!
Jul 7 12:19:46 ubuntu kernel: [ 2.650772] md: unbind<vdb3>
Jul 7 12:19:46 ubuntu kernel: [ 2.651301] md: export_rdev(vdb3)
Jul 7 12:19:46 ubuntu kernel: [ 2.655692] md/raid1:md2: active with 1 out of 2 mirrors
Jul 7 12:19:46 ubuntu kernel: [ 2.656401] md2: detected capacity change from 0 to 322949120
Jul 7 12:19:46 ubuntu kernel: [ 2.659134] md2: unknown partition table
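
(Note: the "kicking non-fresh" messages typically mean the event counters in the vdb members' superblocks no longer match the rest of the array, so md drops those members instead of starting them. A rough way to confirm this on the booted system, assuming the device names from the log above:)

$ # compare the superblock event counters of the two md0 members
$ sudo mdadm --examine /dev/vda1 /dev/vdb1 | grep -E 'dev|Events'
$ # show the degraded array state
$ sudo mdadm --detail /dev/md0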

Revision history for this message
James Page (james-page) wrote :

It would appear that the swap mirror restarts OK while the ext4 filesystem mirrors do not; this might be related:

ubuntu@ubuntu:/var/log$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 vda2[0] vdb2[1]
      314356 blocks super 1.2 [2/2] [UU]

md2 : active raid1 vda3[0]
      315380 blocks super 1.2 [2/1] [U_]

md0 : active raid1 vda1[0]
      1464308 blocks super 1.2 [2/1] [U_]

unused devices: <none>
ubuntu@ubuntu:/var/log$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        1.4G  889M  448M  67% /
none            238M  216K  237M   1% /dev
none            247M     0  247M   0% /dev/shm
none            247M   44K  247M   1% /var/run
none            247M     0  247M   0% /var/lock
/dev/md2        299M   11M  273M   4% /home
ubuntu@ubuntu:/var/log$
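
The manual recovery in this situation was roughly the following (a sketch; the kicked member partitions are taken from the dmesg output above):

$ # re-add the members that were kicked as non-fresh
$ sudo mdadm /dev/md0 --re-add /dev/vdb1
$ sudo mdadm /dev/md2 --re-add /dev/vdb3
$ # watch the [U_] arrays resync back to [UU]
$ cat /proc/mdstat

If --re-add is refused, a plain --add rebuilds the member from scratch instead.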

Dave Walker (davewalker)
tags: added: server-o-rs
removed: server-ors
James Page (james-page)
Changed in mdadm (Ubuntu Oneiric):
assignee: Ubuntu Server Team (ubuntu-server) → James Page (james-page)
status: Confirmed → In Progress
Revision history for this message
James Page (james-page) wrote :

I reproduced this exactly using the Natty release ISOs, so I don't believe this is actually a regression in behaviour for the Oneiric Alpha 2 release.

Changed in mdadm (Ubuntu Oneiric):
assignee: James Page (james-page) → Ubuntu Server Team (ubuntu-server)
status: In Progress → Opinion
Revision history for this message
James Page (james-page) wrote :

So I've marked this bug as 'Opinion' based on my comment in #7.

I know that the ISO test tracker states that the RAID1 devices should re-sync automatically, but it would appear that this has not been the case since Natty (at least).

Revision history for this message
Dave Walker (davewalker) wrote :

If a drive fails, re-adding it to the array should require manual intervention; it's a warning sign that there is a problem.

The test case is being updated to reflect this. We are not release-noting it, as this behaviour has been the case for some time.

Thanks.
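
As an aside on the "warning sign" point: a degraded array can be surfaced automatically by mdadm's monitor mode rather than by silently re-adding the member, e.g. (the mail address is illustrative):

$ # run the monitor as a daemon and mail root about degraded or failed arrays
$ sudo mdadm --monitor --scan --daemonise --mail=root@localhost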

Changed in mdadm (Ubuntu Oneiric):
status: Opinion → Won't Fix