Comment 5 for bug 140854

netslayer (netslayer007) wrote :

I just tried the solution in the duplicate bug https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/139802

It looks like /usr/share/mdadm/mkconf >/etc/mdadm/mdadm.conf is detecting, from somewhere, an array I don't have. You can see that my generated mdadm.conf is totally messed up, which explains why mdadm is trying to add a drive to an array that doesn't exist. It could be an older array I had?

ARRAY /dev/md0 level=raid5 num-devices=4 UUID=2fc65d39:8f7cbdb5:7072d7cc:0e4fe29d
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=db874dd1:759d986d:cc05af5c:cfa1abed
ARRAY /dev/md1 level=raid5 num-devices=5 UUID=9c2534ac:1de3420b:e368bf24:bd0fce41

mdadm -E /dev/sdf
UUID : db874dd1:759d986d:cc05af5c:cfa1abed

mdadm -E /dev/sdh
UUID : db874dd1:759d986d:cc05af5c:cfa1abed

Interesting, so here is what happened: when I created this array I bought 3 new drives, and the remaining two that I formatted and reused are the ones that still carry the old UUID. Since I built the new array on partitions of these drives instead of on the whole physical devices, two of my drives now have two UUIDs each: the stale one on the whole disk and the new one on the partition. udev then discovers them at different times, tries to bring up the old array and add its member to an existing one, and fails. A total race condition.
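To confirm which devices carry which superblock, something like this loop should work (a sketch; the device names /dev/sdf, /dev/sdh and their partition numbers are from my setup, adjust for yours):

```shell
# Print the md superblock UUID (if any) found on each whole disk
# and on its first partition, to spot the stale whole-disk entries.
for dev in /dev/sdf /dev/sdf1 /dev/sdh /dev/sdh1; do
  echo "== $dev =="
  sudo mdadm -E "$dev" 2>/dev/null | grep UUID || echo "no md superblock"
done
```

The stale UUID should show up on the whole-disk devices, while the partitions report the UUID of the real array.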

So how do I get rid of the stale UUIDs safely?
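My guess (unconfirmed, so please double-check before running anything) is that mdadm --zero-superblock on the whole-disk devices would wipe the stale superblocks while leaving the partitions, and therefore the real array, untouched:

```shell
# DANGEROUS if pointed at the wrong device: verify with "mdadm -E" first
# that these whole-disk devices carry only the stale UUID, and that the
# live array members are the partitions (e.g. /dev/sdf1), not the disks.
sudo mdadm --zero-superblock /dev/sdf
sudo mdadm --zero-superblock /dev/sdh

# Then regenerate the config so only the real arrays remain:
sudo sh -c '/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf'
```

Can anyone confirm this is the right approach?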