dmraid fails to activate isw due to incorrect disk size (dmraid rc15)

Bug #337284 reported by Danny Wood on 2009-03-03
Affects          Status        Importance  Assigned to
dmraid (Debian)  Fix Released  Unknown
dmraid (Ubuntu)  Fix Released  Undecided   Unassigned

Bug Description

Binary package hint: dmraid

My two ISW RAID 0 sets work fine with the previous version of dmraid (rc14 in Intrepid).
I have been regularly checking the daily builds of Jaunty to see whether they work; in alpha 4 they did not work at all.

With the latest daily build and version of dmraid my RAID sets fail to activate.
Here is the output of dmraid -ay:

sudo dmraid -ay
RAID set "isw_bbbbfiiiih_HD1" already active
RAID set "isw_cjfgejeghi_HD3" already active
ERROR: dos: partition address past end of RAID device
RAID set "isw_bbbbfiiiih_HD11" already active

I have done a bit of checking between my working version of intrepid and this daily build.
On rc14 the output of dmraid -s is as follows:

*** Group superset isw_cjfgejeghi
--> Active Subset
name : isw_cjfgejeghi_HD3
size : 1953536512
stride : 256
type : stripe
status : ok
subsets: 0
devs : 2
spares : 0
*** Group superset isw_bbbbfiiiih
--> Active Subset
name : isw_bbbbfiiiih_HD1
size : 312592896
stride : 256
type : stripe
status : ok
subsets: 0
devs : 2
spares : 0

And on RC15:

*** Group superset isw_bbbbfiiiih
--> Active Subset
name : isw_bbbbfiiiih_HD1
size : 312591872
stride : 256
type : stripe
status : ok
subsets: 0
devs : 2
spares : 0
*** Group superset isw_cjfgejeghi
--> Active Subset
name : isw_cjfgejeghi_HD3
size : 1953535488
stride : 256
type : stripe
status : ok
subsets: 0
devs : 2
spares : 0

Notice the difference in size: the new version reports 1024 sectors less on each set, which is where my problem lies.
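The discrepancy is easy to verify from the two `dmraid -s` listings above. A quick Python check (illustrative only; the sizes are the ones quoted in the report):

```python
# Set sizes in 512-byte sectors, copied from the rc14 and rc15 `dmraid -s` output above.
rc14 = {"isw_cjfgejeghi_HD3": 1953536512, "isw_bbbbfiiiih_HD1": 312592896}
rc15 = {"isw_cjfgejeghi_HD3": 1953535488, "isw_bbbbfiiiih_HD1": 312591872}

for name, old_size in rc14.items():
    missing = old_size - rc15[name]
    # Both sets come up exactly 1024 sectors (512 KiB) short under rc15.
    print(f"{name}: rc15 reports {missing} sectors less")
```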

Both sets were originally built using rc14 (I think).
I am wondering what has changed between the two versions, and what information dmraid uses to calculate the drive size, as only one of them can be correct.

Let me know if more information is required.

danwood76 ha scritto:
> Public bug reported:
>
> Binary package hint: dmraid
>
> My two ISW raid 0 sets work fine with the previous version of dmraid (rc14 in intrepid).
> I have been constantly checking the daily builds of jaunty to see if they works in alpha 4 where they didnt work at all.
>
> With the latest daily build and version of dmraid my RAID sets fail to activate.
> Here is the output of dmraid -ay:
>
> sudo dmraid -ay
> RAID set "isw_bbbbfiiiih_HD1" already active
> RAID set "isw_cjfgejeghi_HD3" already active
> ERROR: dos: partition address past end of RAID device
> RAID set "isw_bbbbfiiiih_HD11" already active

Could you please paste the output of dmraid -ay -vvv -d?

Cheers,
Giuseppe.

Danny Wood (danwood76) wrote :

Below is the output you requested; I have also dumped the metadata, which is contained in the attachment.

sudo dmraid -ay -vvv -d
NOTICE: creating directory /var/lock/dmraid
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sde: asr discovering
NOTICE: /dev/sde: ddf1 discovering
NOTICE: /dev/sde: hpt37x discovering
NOTICE: /dev/sde: hpt45x discovering
NOTICE: /dev/sde: isw discovering
NOTICE: /dev/sde: jmicron discovering
NOTICE: /dev/sde: lsi discovering
NOTICE: /dev/sde: nvidia discovering
NOTICE: /dev/sde: pdc discovering
NOTICE: /dev/sde: sil discovering
NOTICE: /dev/sde: via discovering
NOTICE: /dev/sdd: asr discovering
NOTICE: /dev/sdd: ddf1 discovering
NOTICE: /dev/sdd: hpt37x discovering
NOTICE: /dev/sdd: hpt45x discovering
NOTICE: /dev/sdd: isw discovering
NOTICE: /dev/sdd: isw metadata discovered
NOTICE: /dev/sdd: jmicron discovering
NOTICE: /dev/sdd: lsi discovering
NOTICE: /dev/sdd: nvidia discovering
NOTICE: /dev/sdd: pdc discovering
NOTICE: /dev/sdd: sil discovering
NOTICE: /dev/sdd: via discovering
NOTICE: /dev/sdc: asr discovering
NOTICE: /dev/sdc: ddf1 discovering
NOTICE: /dev/sdc: hpt37x discovering
NOTICE: /dev/sdc: hpt45x discovering
NOTICE: /dev/sdc: isw discovering
NOTICE: /dev/sdc: isw metadata discovered
NOTICE: /dev/sdc: jmicron discovering
NOTICE: /dev/sdc: lsi discovering
NOTICE: /dev/sdc: nvidia discovering
NOTICE: /dev/sdc: pdc discovering
NOTICE: /dev/sdc: sil discovering
NOTICE: /dev/sdc: via discovering
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: /dev/sdb: isw metadata discovered
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: isw metadata discovered
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
DEBUG: _find_set: searching isw_cjfgejeghi
DEBUG: _find_set: not found isw_cjfgejeghi
DEBUG: _find_set: searching isw_cjfgejeghi_HD3
DEBUG: _find_set: searching isw_cjfgejeghi_HD3
DEBUG: _find_set: not found isw_cjfgejeghi_HD3
DEBUG: _find_set: not found isw_cjfgejeghi_HD3
NOTICE: added /dev/sdd to RAID set "isw_cjfgejeghi"
DEBUG: _find_set: searching isw_cjfgejeghi
DEBUG: _find_set: found isw_cjfgejeghi
DEBUG: _find_set: searching isw_cjfgejeghi_HD3
DEBUG: _find_set: searching isw_cjfgejeghi_HD3
DEBUG: _find_set: found isw_cjfgejeghi_HD3
DEBUG: _find_set: found isw_cjfgejeghi_HD3
NOTICE: added /dev/sdc to RAID set "isw_cjfgejeghi"
DEBUG: _find_set: searching isw_bbbbfiiiih
DEBUG: _find_set: not found isw_bbbbfiiiih
DEBUG...


Danny Wood (danwood76) wrote :

Just an update on this: it appears that dmraid is calculating the RAID0 cylinder count differently from the previous version, so any partition that extends right to the end of the set confuses it.

Basically, on my first RAID set the smallest drive is 9729 cylinders, so for a two-disk RAID0 array 2 x 9729 = 19458, which is the number of cylinders that dmraid-rc14 calculates.
dmraid-rc15, on the other hand, is a little bad at maths and calculates it to be 19457 cylinders.
I would expect the cylinder count to be even, given that a RAID0 set should be symmetrical to function properly.
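The cylinder figures quoted here can be reproduced from the set sizes in the `dmraid -s` output, assuming the classic 255-head, 63-sectors-per-track CHS geometry that fdisk uses (an illustrative sketch, not part of the original report):

```python
SECTORS_PER_CYLINDER = 255 * 63  # 16065, the classic CHS geometry fdisk assumes

def cylinders(total_sectors: int) -> int:
    # fdisk truncates a device's size down to whole cylinders
    return total_sectors // SECTORS_PER_CYLINDER

# Sizes of the first RAID0 set as reported by rc14 and rc15 (see above)
print(cylinders(312592896))  # rc14 size: 19458 cylinders
print(cylinders(312591872))  # rc15 size: 19457 cylinders, one cylinder short
```

Losing 1024 sectors is enough to drop the device below the 19458-cylinder boundary, so any partition table laid out against the rc14 size now ends past the end of the device.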

My partitions go all the way to the end of the drive, which is why my extended partition didn't load. To solve this I spent ages resizing my partitions; the worst was my large drive, as it is completely NTFS and had all sorts of issues.

So now I am running Jaunty with all my drives loaded, but the bug still remains for anyone else who migrates and has used all of the RAID0 partition space.

Hope this helps

Hi!

danwood76 ha scritto:
> Just an update on this, it appears that dmraid is calculating the RAID0 cylinders differently from the previous version.
> And as such any partition going right to the end causes it confusion.
>
> Basically on my first RAID set the smallest drive is 9729 cylinders, so with a RAID0 array 2x9729 is 19458, this is the number of cylinders that dmraid-rc14 calculates.
> dmraid-rc15 on the other hand is a little bad at maths and calculates it to be 19457 cylinders.
> I was thinking that the cylinder count should be even given a RAID0 should be symmetrical to function properly.
>
> My partitions go all the way to the end of the drive and that is why my
> extended partition didn't load, to solve this I spent ages resizing my
> partitions, the worst was my large drive as its completely NTFS and it
> had all sorts of issues.
>
> So now I am running Jaunty with all my drives loaded, but the bug still
> remains with anyone else who migrates and has used all of of the RAID0
> partition space.
>
> Hope this helps
>

Many thanks for this. Could you please report this issue upstream to the ataraid mailing list[1]?

[1]https://www.redhat.com/mailman/listinfo/ataraid-list

Cheers,
Giuseppe.

Changed in dmraid (Ubuntu):
status: New → Confirmed
Changed in dmraid (Debian):
status: Unknown → New
Changed in dmraid (Debian):
status: New → Confirmed
Luke Yelavich (themuso) wrote :

Please test the proposed fix by installing the dmraid packages from my PPA once they are built: http://launchpad.net/~themuso/+archive

Since there are also test packages for audio in that PPA, I suggest running "sudo apt-get --reinstall install dmraid" rather than doing a full upgrade, to ensure you only get the dmraid packages. Afterwards, I suggest removing the PPA from your sources.list.

Please test ASAP, as I'd like to get this fix in for jaunty final, to allow people experiencing a similar problem to be able to install to dmraid arrays.

Thanks.

Danny Wood (danwood76) wrote :

Unfortunately, because of the workaround I did, I can't actually test fully.
I can obviously view the calculation.
But to test fully I will have to dig out a couple of SATA drives and build a set on rc14.

But if it reverts what Giuseppe pointed out as the difference between the two versions, then it should work fine.

Danny Wood (danwood76) wrote :

I have only one of the original sets left from this bug, as I had an upgrade.
But that set now calculates as it did in rc14:

*** Group superset isw_cjfgejeghi
--> Active Subset
name : isw_cjfgejeghi_HD3
size : 1953536512
stride : 256
type : stripe
status : ok
subsets: 0
devs : 2
spares : 0

Subscribing ubuntu-release for an ACK and a diff to show the changes.

 subscribe ubuntu-release

Luke Yelavich (themuso) wrote :

Release managers, feel free to reject if you don't think this is appropriate for the release. If you do, I'll push it through as an SRU. The upload is in the queue should you wish to approve it.

Launchpad Janitor (janitor) wrote :

This bug was fixed in the package dmraid - 1.0.0.rc15-6ubuntu2

---------------
dmraid (1.0.0.rc15-6ubuntu2) jaunty; urgency=low

  * debian/patches/16_fix_isw_sectors_calculation.patch: Fix isw raid0 incorrect
    sectors calculation, thanks to Valentin Pavlyuchenko (LP: #337284) Fix
    taken from dmraid debian git 3011c19ddc066efe1ae8331f6b5f183af55e1911.

 -- Luke Yelavich <email address hidden> Mon, 20 Apr 2009 09:58:18 +1000

Changed in dmraid (Ubuntu):
status: Confirmed → Fix Released
Changed in dmraid (Debian):
status: Confirmed → Fix Released