Comment 18 for bug 396564

Paul McEnery (pmcenery) wrote:

I can also confirm this behaviour. I upgraded from Jaunty to Karmic (still beta at that point, I assume), and I have the following configuration:

$ sudo fdisk -l
===============================================================================
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   fd  Linux RAID autodetect
/dev/sda2              14      121601   976655610   fd  Linux RAID autodetect

===============================================================================

$ sudo mdadm --detail /dev/md0
===============================================================================
/dev/md0:
        Version : 00.90
  Creation Time : Mon Apr 23 00:17:47 2007
     Raid Level : raid1
     Array Size : 104320 (101.89 MiB 106.82 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Oct 22 08:12:49 2009
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 74ad3d60:5074597c:0324c307:5941c2e9
         Events : 0.7296

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8        1        1      active sync   /dev/sda1
===============================================================================

$ sudo mdadm --detail /dev/md1
===============================================================================
/dev/md1:
        Version : 00.90
  Creation Time : Mon Apr 23 00:18:02 2007
     Raid Level : raid1
     Array Size : 976655488 (931.41 GiB 1000.10 GB)
  Used Dev Size : 976655488 (931.41 GiB 1000.10 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Thu Oct 22 08:32:24 2009
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 58a33554:66ab860b:f095819a:ef47ab1e
         Events : 0.108323780

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8        2        1      active sync   /dev/sda2
===============================================================================

$ sudo pvs
===============================================================================
  PV         VG     Fmt  Attr PSize   PFree
  /dev/md1   rootvg lvm2 a-   931.41G 772.00M
===============================================================================

$ sudo vgs
===============================================================================
  VG     #PV #LV #SN Attr   VSize   VFree
  rootvg   1   4   0 wz--n- 931.41G 772.00M
===============================================================================

$ sudo lvs
===============================================================================
  LV     VG     Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  mythtv rootvg -wi-ao 901.66G
  root   rootvg -wi-ao  25.00G
  swap   rootvg -wi-ao   1.00G
  varlog rootvg -wi-ao   3.00G
===============================================================================

It is worth noting that my RAID 1 arrays were created with the second device specified as 'missing', so they have always run degraded. By the sound of it, though, most folks here are seeing this with RAID setups that are not running degraded like mine.
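
For reference, a degraded mirror like mine is normally created by passing 'missing' in place of the second member. A minimal sketch of the idea (illustrative device names, not my original commands):

===============================================================================
# create each RAID 1 with only one real member; the second slot stays empty
$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing
$ sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 missing
===============================================================================

Arrays built this way report 'clean, degraded' in mdadm --detail, exactly as shown above.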