mdadm allows growing an array beyond metadata size limitations

Bug #794963 reported by traderbam@yahoo.co.uk
This bug affects 1 person
Affects: mdadm (Ubuntu)
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

Binary package hint: mdadm

It is possible to command mdadm to grow an array such that the array space on a component partition exceeds the maximum size representable in metadata 0.90 format, which is just over 2TB (the size is stored as a 4-byte sector count). When told to do this, mdadm appears to succeed without error and writes a bogus, wrapped sector count into the 4-byte field in the superblocks. The system then operates with the over-enlarged array without apparent issue, but only until the next reboot, when the array is assembled at the size recorded in the superblock. User data becomes inaccessible and LVM volumes fail to mount, with seemingly no way to recover the inaccessible data.
Obviously, mdadm should refuse to grow the array size beyond the size restriction of its own metadata.
Seen with mdadm v2.6.7.1 (15th October 2008) on Ubuntu Server 10.04 64-bit.
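
For concreteness, the limit and the wraparound can be sketched with shell arithmetic. The 4-byte sector-count interpretation below is inferred from the reported behaviour, not taken from the mdadm source:

  # Largest component size a 32-bit count of 512-byte sectors can record:
  echo $(( 2**32 * 512 ))               # 2199023255552 bytes, i.e. 2 TiB
  # A component grown past that wraps modulo 2^32 sectors:
  echo $(( 5860128768 & 0xFFFFFFFF ))   # a ~2.73 TiB component (5860128768 sectors)
                                        # wraps to 1565161472 sectors, ~746 GiB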

Tags: mdadm
Revision history for this message
slith (jeff-goris) wrote :

Here is what happened...

I recently upgraded a system running 6 x 2TB HDDs to an EFI motherboard and 6 x 3TB HDDs. The final step in the process was growing the RAID-5 array using metadata v0.90 (/dev/md1), consisting of 6 component devices of just under 2TB each, to use devices of just under 3TB each. At the time I forgot about the limitation that metadata 0.90 does not support component devices over 2TB. However, the grow completed successfully and I used the system just fine for about 2 weeks.

LVM2 is using /dev/md1 as a physical volume for volume group radagast, and pvdisplay showed that /dev/md1 had a size of 13.64 TiB. I had been writing data to it regularly and I believe I had well exceeded the original size of the old array (about 9.4TB). All was fine until a few days ago when I rebooted the system. The system booted back up to a point, then could not mount some of the file systems that were on logical volumes on /dev/md1. So it seems that the mdadm --grow operation was successful, but on boot the mdadm --assemble operation completed with a different array size than after the grow operation. Here is some relevant information:

$ sudo pvdisplay /dev/md1
  --- Physical volume ---
  PV Name /dev/md1
  VG Name radagast
  PV Size 13.64 TiB / not usable 2.81 MiB
  Allocatable yes
  PE Size 4.00 MiB
  Total PE 3576738
  Free PE 561570
  Allocated PE 3015168
  PV UUID 0ay0Ai-jcws-yPAR-DP83-Fha5-LZDO-341dQt

Here is the detail of /dev/md1 after the reboot attempt. Unfortunately I don't have any detail of the array prior to the grow or the reboot; however, the pvdisplay above does show the 13.64 TiB size of the array after the grow operation.

$ sudo mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Wed May 20 17:19:50 2009
     Raid Level : raid5
     Array Size : 3912903680 (3731.64 GiB 4006.81 GB)
  Used Dev Size : 782580736 (746.33 GiB 801.36 GB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Jun 10 00:35:43 2011
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 256K

           UUID : 6650f3f8:19abfca8:e368bf24:bd0fce41
         Events : 0.6539960

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       67        2      active sync   /dev/sde3
       3       8       51        3      active sync   /dev/sdd3
       4       8       35        4      active sync   /dev/sdc3
       5       8       83        5      active sync   /dev/sdf3
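
Working backwards from that output supports the wraparound reading (assuming the superblock holds the grown size modulo 2^32 sectors): the reported Used Dev Size of 782580736 KiB is 1565161472 sectors, and adding back 2^32 sectors gives a plausible pre-wrap component size:

  echo $(( 782580736 * 2 ))        # 1565161472 sectors, as recorded in the superblock
  echo $(( 1565161472 + 2**32 ))   # 5860128768 sectors, ~2.73 TiB per component
  # 5 data components x ~2.73 TiB matches the 13.64 TiB that pvdisplay reported.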


Revision history for this message
slith (jeff-goris) wrote :

I probably should have mentioned the command I used to grow the array. After replacing each of the six 2TB HDDs with 3TB HDDs one-by-one, I grew the array with:

sudo mdadm --grow --size=max /dev/md1
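
Until mdadm enforces the limit itself, a rough pre-check before growing a 0.90 array could look like the following (a sketch; the device glob is an example for this system, and blockdev --getsz reports the size in 512-byte sectors):

  for dev in /dev/sd[a-f]3; do
      sectors=$(sudo blockdev --getsz "$dev")
      if [ "$sectors" -ge 4294967296 ]; then
          echo "$dev: $sectors sectors exceeds the 0.90 metadata limit"
      fi
  done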

Revision history for this message
slith (jeff-goris) wrote :

Can confirm that this is repeatable.
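
Reproducing this should not require real 3TB disks. A sketch using sparse loop devices (paths, sizes, and device names assumed; --size is given in KiB for this mdadm version) would be to create the array under the 2TiB limit, grow it past the limit, then stop and reassemble:

  truncate -s 2500G /tmp/d0.img /tmp/d1.img
  sudo losetup /dev/loop0 /tmp/d0.img
  sudo losetup /dev/loop1 /tmp/d1.img
  # Create under the limit (--assume-clean skips the initial resync):
  sudo mdadm --create /dev/md9 --metadata=0.90 --level=1 --raid-devices=2 \
       --assume-clean --size=1992294400 /dev/loop0 /dev/loop1   # 1900 GiB
  sudo mdadm --grow --size=max /dev/md9      # grows past 2 TiB without complaint
  sudo mdadm --stop /dev/md9
  sudo mdadm --assemble /dev/md9 /dev/loop0 /dev/loop1
  sudo mdadm --detail /dev/md9               # Used Dev Size should come back wrapped
                                             # if the bug is present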
