dmraid fails to read promise RAID sector count larger than 32-bits

Bug #599255 reported by Nishihama Kenkowo
This bug affects 6 people
Affects          Status   Importance  Assigned to  Milestone
Baltix           New      Undecided   Unassigned
dmraid (Fedora)  New      Undecided   Unassigned
dmraid (Ubuntu)  Triaged  Medium      Unassigned

Bug Description

I have two AMD SB7xx motherboards. I tried two cases.

I use RAID0 (1.5TB x 3 = 4.5TB) set up in the BIOS (SB7*0).

I split it into two arrays: 2.0TB (A) and 2.5TB (B).

              win7-64    winxp32    ubuntu10.4 / fedora13
----------------------------------------------------------
raid-A 2.0TB  ok         ok         all capacity OK
raid-B 2.5TB  ok (all)   ok (all)   no*1, only 300GB (NG)
----------------------------------------------------------
*1 = Ubuntu sees only 300GB; Fedora does too.

Ubuntu x64 / Fedora 13 x64, using dmraid.

description: updated
tags: added: 2tb dmraid
summary: - fakeraid cannot use over 2TB raid0
+ dmraid cannot use over 2TB raid0
Revision history for this message
Mitch Towner (kermiac) wrote : Re: dmraid cannot use over 2TB raid0

Thank you for taking the time to report this bug and helping to make Ubuntu better. This bug did not have a package associated with it, which is important for ensuring that it gets looked at by the proper developers. You can learn more about finding the right package at https://wiki.ubuntu.com/Bugs/FindRightPackage. I have classified this bug as a bug in dmraid.

When reporting bugs in the future please use apport, either via the appropriate application's "Help -> Report a Problem" menu or using 'ubuntu-bug' and the name of the package affected. You can learn more about this functionality at https://wiki.ubuntu.com/ReportingBugs.

affects: ubuntu → dmraid (Ubuntu)
Revision history for this message
Danny Wood (danwood76) wrote :

I think your issue may be caused by the fact that dmraid cannot handle two separate arrays on one disk set.

The best way to do it is to have the RAID0 set span the entire disk set (all 4.5TB) and then partition it into smaller chunks.
Please try this first!

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

I have done that. I tried a single RAID array last week.
In that case, Linux (Fedora/Ubuntu) could not recognize the full capacity,
so I split it into two arrays.

thanks.

Revision history for this message
Danny Wood (danwood76) wrote :

Ok,

Could you please output the result of the following command from a live session:
sudo dmraid -ay -vvv -d

Can windows see the 2.5TB drive ok?

Revision history for this message
Danny Wood (danwood76) wrote :

Also what RAID controller is it?

Revision history for this message
Danny Wood (danwood76) wrote :

I've just had another thought.
Are you trying the 64-bit version of Ubuntu?

32-bit addressing will only go up to 2.2TB so it might be worth trying 64-bit Ubuntu instead.
2.5TB - 2.2TB = 300GB (Sound familiar?)

But dmraid will not handle two separate arrays on one disk.
If 64-bit sees the array just fine, you should rearrange the array so it's a single complete 4.5TB set.
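As a rough check of the arithmetic (plain shell, taking 2.5TB as a nominal 2.5 x 10^12 bytes and 512-byte sectors):

# Largest capacity addressable with a 32-bit sector count:
echo $(( (2**32) * 512 ))                             # 2199023255552 bytes, ~2.2TB
# Sector count of a nominal 2.5TB volume:
echo $(( 2500000000000 / 512 ))                       # 4882812500 sectors, more than 2^32
# If only the low 32 bits survive, this is what is left:
echo $(( (2500000000000 / 512) % (2**32) ))           # 587845204 sectors
echo $(( ((2500000000000 / 512) % (2**32)) * 512 ))   # ~301GB, i.e. the ~300GB being reported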

Revision history for this message
Luke Yelavich (themuso) wrote : Re: [Bug 599255] Re: dmraid cannot use over 2TB raid0

I also believe for disks that size you need to use a GPT partition table.

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote : Re: dmraid cannot use over 2TB raid0

Additional test:

I tried another distro yesterday.
With the CentOS 5.5 DVD, the full 2.5TB capacity of the second array was recognized.

The CentOS installer asked me "initialize this 2.5TB so you can use it?", but I did not,
because there is a 2.5TB NTFS filesystem made by Windows XP on it and I need to back it up first.
I was glad to see this message.

The PC cannot be touched for about 24 hours because all arrays are being backed up now for the test.

Nothing can be done at the moment.
Please wait for me (while I back up my PC).
It is all for the test.

My fakeraid: Asus M4A78-EM/1394 onboard, SB7*0 (ATI/Promise)
             AMD 780G + SB710
             1.5TB x 3 = 4.5TB RAID0
Distros: Ubuntu 10.04, Fedora 13, CentOS 5.5, all x64 editions.

http://dl.dropbox.com/u/7882415/raid.jpg on Win7 Ultimate

Others: Windows XP 32-bit, Windows 7 64-bit Ultimate (both OSes can use the full capacity of the 2.0TB and 2.5TB arrays
without any trouble).

Revision history for this message
Danny Wood (danwood76) wrote :

Well, we need to determine which software the bug is in.
How are you determining that Ubuntu and Fedora can't see the full amount?
E.g. what program are you using for partitioning?

Could you please post the output of (in ubuntu and centos):
sudo dmraid -ay -vvv -d

This will help us!

Revision history for this message
Danny Wood (danwood76) wrote :

I have just been looking at the sources of the CentOS dmraid package, specifically the pdc (Promise controller) code.
The only difference is a RAID10 patch, but that won't affect you as you are not using RAID10.

I am inclined to believe the bug is in whatever program you are using for partitioning.

The dmraid debug output (previous post) will shed more light on this!

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

I backed up everything, almost 3TB.
I tried the test on CentOS again.
Oh, sorry, everyone.

On CentOS I had misread the size in the message. CentOS initialized only about 0.3TB of the 2nd 2.5TB array,
just 286079MB of the 2.5TB.
Every Linux behaves the same way for me.

I am pasting the output of dmraid -ay -vvv -d below.

 pdc_cdfjcjhfhe : 1st array, 2.0TB
 pdc_cdgjdcefic : 2nd array, 2.5TB

sudo dmraid -ay -vvv -d (ubuntu 10.4 x64)
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sdc: asr discovering
NOTICE: /dev/sdc: ddf1 discovering
NOTICE: /dev/sdc: hpt37x discovering
NOTICE: /dev/sdc: hpt45x discovering
NOTICE: /dev/sdc: isw discovering
DEBUG: not isw at 1500301908992
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 1500300827136
NOTICE: /dev/sdc: jmicron discovering
NOTICE: /dev/sdc: lsi discovering
NOTICE: /dev/sdc: nvidia discovering
NOTICE: /dev/sdc: pdc discovering
NOTICE: /dev/sdc: pdc metadata discovered
NOTICE: /dev/sdc: sil discovering
NOTICE: /dev/sdc: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
DEBUG: not isw at 1500301908992
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 1500300827136
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: pdc metadata discovered
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
DEBUG: not isw at 1500301908992
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 1500300827136
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: pdc metadata discovered
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
DEBUG: _find_set: searching pdc_cdfjcjhfhe
DEBUG: _find_set: not found pdc_cdfjcjhfhe
DEBUG: _find_set: searching pdc_cdfjcjhfhe
DEBUG: _find_set: not found pdc_cdfjcjhfhe
DEBUG: _find_set: searching pdc_cdgjdcefic
DEBUG: _find_set: searching pdc_cdgjdcefic
DEBUG: _find_set: not found pdc_cdgjdcefic
DEBUG: _find_set: not found pdc_cdgjdcefic
DEBUG: _find_set: searching pdc_cdgjdcefic
DEBUG: _find_set: not found pdc_cdgjdcefic
NOTICE: added /dev/sdc to RAID set "pdc_cdfjcjhfhe"
DEBUG: _find_set: searching pdc_cdfjcjhfhe
DEBUG: _find_set: found pdc_cdfjcjhfhe
DEBUG: _find_set: searching pdc_cdfjcjhfhe
DEBUG: _find_set: found pdc_cdfjcjhfhe
DEBUG: _find_set: searching pdc_cdgjdcefic
DEBUG: _find_set: found pdc_cdgjdcefic
DEBUG: _find_set: searching pdc_cdgjdcefic
DEBUG: _find_set: found pdc_cdgjdcefic
NOTICE: added /dev/sda to RAID set "pdc_cdfjcjhfhe"
DEBUG: _find_set: searching pdc_cdfjcjhfhe
DEBUG: _find_set: found pdc_cdfjcjhfhe
DEBUG: _find_set: searching pdc_cdfjcjhfhe
DEBUG: _find_set: found pdc_cdfjcjhfhe
DEBUG: _find_set: searching pdc_cdgjd...


Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

Here is a snapshot of the AMD RAIDXpert software on XP.

http://dl.dropbox.com/u/7882415/raid-amd.JPG

I'll be very happy if this helps you.

Revision history for this message
Danny Wood (danwood76) wrote :

OK well the debug stuff all looks fine.

Could you also post the output of `dmraid -s` (this will list all disk sizes and status seen by dmraid)

What software are you trying to partition with?
And how are you working out how much disk they are seeing?

Also can you post the output of an fdisk list for both arrays.
So:
fdisk -l /dev/mapper/pdc_cdfjcjhfhe
fdisk -l /dev/mapper/pdc_cdgjdcefic

Revision history for this message
Danny Wood (danwood76) wrote :

The AMD RAID control panel has nothing to do with linux.

To debug the issue I need the output of the commands I have asked for.

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

sudo fdisk -l /dev/mapper/pdc_cdgjdcefic
GNU Fdisk 1.2.4
Copyright (C) 1998 - 2006 Free Software Foundation, Inc.
This program is free software, covered by the GNU General Public License.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

Error: Cannot create a partition outside of the disk. <-- Japanese message, translated

fdisk -l /dev/mapper/pdc_cdfjcjhfhe
GNU Fdisk 1.2.4
Copyright (C) 1998 - 2006 Free Software Foundation, Inc.
This program is free software, covered by the GNU General Public License.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

Disk /dev/mapper/pdc_cdfjcjhfhe: 1999 GB, 1999993282560 bytes
255 heads, 63 sectors/track, 243152 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

                     Device Boot Start End Blocks Id System
/dev/mapper/pdc_cdfjcjhfhe1 * 1 6646 53383963 7 HPFS/NTFS
/dev/mapper/pdc_cdfjcjhfhe2 6647 13276 53247442 7 HPFS/NTFS
/dev/mapper/pdc_cdfjcjhfhe3 13277 217024 1636597777 5 Extended
/dev/mapper/pdc_cdfjcjhfhe5 13277 20252 56026687 7 HPFS/NTFS
Warning: Partition 5 does not end on cylinder boundary.
/dev/mapper/pdc_cdfjcjhfhe6 20253 197312 1422226417 7 HPFS/NTFS
Warning: Partition 6 does not end on cylinder boundary.
/dev/mapper/pdc_cdfjcjhfhe7 197313 199915 20900565 83 Linux
Warning: Partition 7 does not end on cylinder boundary.
/dev/mapper/pdc_cdfjcjhfhe8 199916 217024 137420010 83 Linux
Warning: Partition 8 does not end on cylinder boundary.

$ sudo dmraid -s

*** Active Set
name : pdc_cdfjcjhfhe
size : 3906249984
stride : 128
type : stripe
status : ok
subsets: 0
devs : 3
spares : 0
*** Active Set
name : pdc_cdgjdcefic
size : 585891840
stride : 128
type : stripe
status : ok
subsets: 0
devs : 3
spares : 0

sudo dmraid -r
/dev/sdc: pdc, "pdc_cdfjcjhfhe", stripe, ok, 1302083328 sectors, data@ 0
/dev/sdb: pdc, "pdc_cdfjcjhfhe", stripe, ok, 1302083328 sectors, data@ 0
/dev/sda: pdc, "pdc_cdfjcjhfhe", stripe, ok, 1302083328 sectors, data@ 0

There is no 2nd array (pdc_cdgjdcefic) listed.

sudo dmraid -ay
RAID set "pdc_cdfjcjhfhe" already active
RAID set "pdc_cdgjdcefic" already active
ERROR: dos: partition address past end of RAID device
ERROR: dos: partition address past end of RAID device
RAID set "pdc_cdfjcjhfhe1" already active
RAID set "pdc_cdfjcjhfhe2" already active
RAID set "pdc_cdfjcjhfhe5" already active
RAID set "pdc_cdfjcjhfhe6" already active
RAID set "pdc_cdfjcjhfhe7" already active
RAID set "pdc_cdfjcjhfhe8" already active

gparted results png
http://dl.dropbox.com/u/6626165/raid01.png
http://dl.dropbox.com/u/6626165/raid-02.png

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

>What software are you trying to partition with?

On Linux, I basically use "gparted" or the Ubuntu installer to resize, create, and delete partitions.

>And how are you working out how much disk they are seeing?

This is how I look at my partitions:

On Fedora, I just type:
$ gparted

On Ubuntu:
$ gparted /dev/mapper/pdc_raid
(On Ubuntu, if I just type gparted, I see /dev/sda, sdb, sdc, not the RAID array.)

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

I tested Acronis Disk Director 10.0
(Linux CD-ROM boot, kernel 2.4.34).

DD does not support GPT.
So, on DD, I created a 2TB EXT2 partition on the 2nd array.

XP & Win7 can read/write this EXT2 partition,
but Ubuntu cannot see it.

Revision history for this message
Danny Wood (danwood76) wrote :

Right.
The bug is definitely in dmraid. (The disk size reported by dmraid is wrong, probably due to 32-bit truncation.)

Could you post a metadata dump please? This will allow me to explore the metadata and see if something is different from what dmraid expects.

To do a metadata dump run the following command:
dmraid -rD

In your current working directory (in the terminal) a directory will be created labelled dmraid.pdc. Could you please tar (archive) this directory and attach it to this bug report?

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

thank you danny

root@phantom:~# sudo dmraid -rD
/dev/sdc: pdc, "pdc_cdfjcjhfhe", stripe, ok, 1302083328 sectors, data@ 0
/dev/sdb: pdc, "pdc_cdfjcjhfhe", stripe, ok, 1302083328 sectors, data@ 0
/dev/sda: pdc, "pdc_cdfjcjhfhe", stripe, ok, 1302083328 sectors, data@ 0
root@phantom:~#

Did it not dump anything?

Revision history for this message
Danny Wood (danwood76) wrote :

It dumps to a directory in your current working directory without printing anything.

So for example if I run it in the terminal I get this output (I have jmicron and intel raids):
danny@danny-desktop:~/dmraid$ sudo dmraid -rD
/dev/sda: isw, "isw_bgafaifadd", GROUP, ok, 1465149166 sectors, data@ 0
/dev/sdc: jmicron, "jmicron_HD2", stripe, ok, 625082368 sectors, data@ 0
/dev/sdb: isw, "isw_bgafaifadd", GROUP, ok, 1465149166 sectors, data@ 0
/dev/sdd: jmicron, "jmicron_HD2", stripe, ok, 625082368 sectors, data@ 0
danny@danny-desktop:~/dmraid$

if I run an ls there are two directories:
danny@danny-desktop:~/dmraid$ ls
dmraid.isw dmraid.jmicron
danny@danny-desktop:~/dmraid$

These directories contain the metadata information (you will just have one pdc directory).

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

I worked it out.
Thank you.

Revision history for this message
Danny Wood (danwood76) wrote :

Annoyingly, the metadata only contains the information for the first RAID set (which is fine of course). The other set will be in another metadata block.
We can dump this metadata manually, but we need to know where it's located.

To do this I will patch a version of dmraid that will output these locations and upload it to my ppa a bit later on.

I will let you know when this is done.

Revision history for this message
Danny Wood (danwood76) wrote :

Hi,

I have patched dmraid to show the metadata locations in the debug output (hopefully)
This can be found in my ppa https://launchpad.net/~danwood76/+archive/ppa-testing

Update the dmraid packages from my ppa and then run `dmraid -ay -d -vvv` again and post the output.
This should hopefully display the metadata locations that we can then dump from.

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :
Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

>patch < dmraid_1.0.0.rc16-3ubuntu2ppa3~lucid1.diff
>patching file README.source
> ........
>patching file dmraid-activate
>patch: **** File dmraid is not a regular file -- can't patch

Where did I make a mistake?

dmraid -V
dmraid version: 1.0.0.rc16 (2009.09.16)
dmraid library version: 1.0.0.rc16 (2009.09.16)
device-mapper version: 4.15.0

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

I cannot find dmraid_1.0.0.rc16.orig.tar.gz on your PPA page.

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

root@phantom:/sbin# ls -l dmraid
-rwxr-xr-x 1 root root 26891 2010-07-03 03:15 dmraid

I do not understand this well because I rarely use patch. It is difficult for me to apply the patch.

If possible, I would be glad to get either a .deb package or the whole patched source code.

With that, I can do ./configure, make, make install.

Revision history for this message
Danny Wood (danwood76) wrote :

Hi,

Unfortunately the package is still waiting to be built by the Ubuntu servers.
It should be complete in 7 hours from now, it seems there is quite a queue for building.

You can check the progress by looking on the ppa page (https://launchpad.net/~danwood76/+archive/ppa-testing) and checking the activity log (currently it says "1 package waiting to build").

To install my ppa just run `sudo add-apt-repository ppa:danwood76/ppa-testing`
Then do a `sudo apt-get update` then a `sudo apt-get upgrade` (after the package has been built by Ubuntu)

Once the updates have been installed post the output of `dmraid -ay -d`

thanks!

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

Maybe it is OK now?
I could upgrade.

ls -l dmraid
-rwxr-xr-x 1 root root 21272 2010-05-30 17:58 dmraid

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

Oh, I think I must wait for the build to finish.

dmraid -V
dmraid version: 1.0.0.rc16 (2009.09.16) shared
dmraid library version: 1.0.0.rc16 (2009.09.16)
device-mapper version: 4.15.0

Revision history for this message
Danny Wood (danwood76) wrote :

Hi,

The 64-bit version has now been built.
So you should be able to upgrade.

To check the installed package version you can use dpkg; the program version reported by dmraid -V will remain the same.
dpkg -p dmraid | grep Version

Once upgraded it should output:
danny@danny-desktop:~$ dpkg -p dmraid | grep Version
Version: 1.0.0.rc16-3ubuntu2ppa3~lucid1

Then please post the output of
`sudo dmraid -ay -d -vvv`

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

# dpkg -p dmraid | grep Ver
Version: 1.0.0.rc16-3ubuntu2ppa3~lucid1
root@phantom:/home/ehhen# dmraid -ay -d -vvv >dmraid-ay-d-vvv.txt

Revision history for this message
Danny Wood (danwood76) wrote :

Hmmm. It didn't quite output what I wanted, sorry about that.

I have made another patched version which is more verbose and should show each metadata location it tries (whether it finds it or not).

Unfortunately there is a bit of a wait in the ppa build queue at the moment.
The new version is 1.0.0.rc16-3ubuntu2ppa4~lucid1
The status of the build can be found here:
https://launchpad.net/~danwood76/+archive/ppa-testing/+build/1851805

I will be away for tomorrow but it would be good if you could post the "dmraid -ay -d -vvv" output again once the package has updated.

Thanks!

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

How should I treat the *.udev files? Those two files could not be installed because of dependencies.
The other three .deb packages could be installed.

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

Observation:

Currently, just typing gparted is enough; I can see /dev/mapper/pdc----. That's good.

My current partitions:

1st array(2TB)
/dev/mapper/pdc_cdfjcjhfhe1 * 1 6646 53383963 7 HPFS/NTFS /primary
/dev/mapper/pdc_cdfjcjhfhe2 6647 13276 53247442 7 HPFS/NTFS /primary
/dev/mapper/pdc_cdfjcjhfhe3 13277 217024 1636597777 f Extended LBA
/dev/mapper/pdc_cdfjcjhfhe5 13277 20251 56018655 7 HPFS/NTFS
/dev/mapper/pdc_cdfjcjhfhe6 20252 197311 1422226417 7 HPFS/NTFS
/dev/mapper/pdc_cdfjcjhfhe7 197312 199915 20908597 83 Linux(fedora)
/dev/mapper/pdc_cdfjcjhfhe8 199916 217024 137420010 83 Linux(ubuntu)
/dev/mapper/pdc_cdfjcjhfhe4 217025 243152 209865127 83 Linux(ubuntu) /primary

2nd array: pdc_cdgjdcefic (2.5TB)

Yesterday I partitioned the 2nd array into 2 parts as GPT on Win7 x64.
I cannot see /dev/mapper/pdc_cdgjdcefic with the fdisk command.
I can see 279.37GiB (blank space) in gparted.

Another observation:

This may not matter, but I note it because it might be useful.

Sometimes there is a strange situation in /dev/mapper/:
/dev/mapper/pdc_cdfjcjhfheX
/dev/mapper/pdc_cdfjcjhfhepX

When the "p" is added, the Ubuntu and Kubuntu installers definitely fail.
So, for example, on /dev/mapper/pdc_cdfjcjhfhe4 I copied another Ubuntu system partition,
edited grub.cfg manually, changed the UUID, and did some other checks.......
and thus I could boot primary partition 4 on the 1st array.

Revision history for this message
Danny Wood (danwood76) wrote :

The version of gparted in my ppa doesn't rely on kpartx like the repository version. It should leave the dev names alone, but the repository version seems to screw them up by sometimes adding a 'p'.
The next version of dmraid will leave the 'p' in there; there is a discussion on this in this bug: https://bugs.launchpad.net/ubuntu/+source/gparted/+bug/554582

In addition to this, ubiquity (the installer) doesn't seem to be able to repartition dmraid drives at all. It's best to create the partitions using gparted and install without modifying the partition table.

Back to the bug!

The new output gives me exactly what I want to know! (finally)

I've written a script which dumps the data at those locations and then compresses them.

Revision history for this message
Danny Wood (danwood76) wrote :

To use the script, open a terminal, make a clean directory to work in, and place dump-pdc-metadata.sh in there (extracted from the archive I uploaded).
Make the script executable and then run it.

chmod a+x dump-pdc-metadata.sh
./dump-pdc-metadata.sh

It will ask you for your password as dd will require root permissions.
Once the script has finished you will be left with a metadata.tar.gz file in that directory.
Please upload this as this is the metadata I require.

Thanks!

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

is this ok?
metadata.tar.gz

Revision history for this message
Danny Wood (danwood76) wrote :

Yep that's perfect.
The second metadata chunk is there for me to investigate.

I will let you know when I find a solution.
Thanks!

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

I'm relieved. I will wait patiently.
Thank you, all.

Revision history for this message
Danny Wood (danwood76) wrote :

Hi Nish,

I think I have found what I needed and I have made a patched version of dmraid and uploaded to my ppa.
This will affect the way that it detects the pdc RAIDs, so please make sure you have everything backed up (it's probably best done from a live CD).

If it works as I expect, you should now be able to see your full 2.5TB array.
After installing could you please post the output of `dmraid -ay -d` and `dmraid -s`

If this patch does work, I will still need more testing.

If you are able to it would be nice if you could destroy your two arrays (this will obviously destroy all your data) and create a single total 4.5TB array. This is because I am unsure that I have found the correct data location in the metadata and a 4.5TB array would indicate if I have found it or not.

If you do decide to create the 4.5TB array could you also upload the metadata dumped using dmraid. (whether it works or not)

Thanks,
Danny

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

Hi Danny,

raid0-1st array:
Using /dev/mapper/pdc_cdfjcjhfhe
Disk /dev/mapper/pdc_cdfjcjhfhe: 1999 GB, 1999993282560 bytes
255 heads, 63 sectors/track, 243152 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

→ It is the same as before. This is fine.

raid0-2nd array:
Using /dev/mapper/pdc_cdgjdcefic
Disk /dev/mapper/pdc_cdgjdcefic: 2498 GB, 2498996344320 bytes
255 heads, 63 sectors/track, 303819 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

→ The full capacity is now recognized. It has improved.

It is wonderful. Thank you.
Anyway,
Please wait a little.

After the backup, I will initialize it again on Ubuntu and format everything as ext4; for now it is tentatively set up as GPT with Win7 x64. I might not be able to find the time until the weekend.

The backup comes first in either case.

Or is using one array better than two arrays?
My legacy XP only needs 100GB of the 4.5TB; that is the painful part.

Danny Wood (danwood76)
summary: - dmraid cannot use over 2TB raid0
+ dmraid fails to read promise RAID sector count larger than 32-bits
Revision history for this message
Danny Wood (danwood76) wrote :

Hi,

You can test the 4.5TB array in your own time. I am a patient man!

Also could you please post the output of `dmraid -s` with the current patch?
I want to check some numbers.

In the end you might be better off with 2 RAID arrays, as the first array will be quicker, sitting on the faster part of the disks' surface.

To get the patch accepted upstream (and included in the release version) I need to test on larger disks to make sure the metadata offset I have chosen is the correct one and not just a random location. This patch could be seen in the next Ubuntu release if I can test it well enough.

Thank you for testing; you have been a great help.

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

I initialized the 2.5TB 2nd array in two ways, GUID (GPT) and MBR, using GParted.

In GParted:
A) In the GPT case, Ubuntu could not partition it; it failed.
B) In the MBR case, Ubuntu could partition it; it succeeded.

Case A:
There are no device files pdc_cdgjdcefic1,2, so GParted failed.

# dmraid -s
*** Active Set
name : pdc_cdfjcjhfhe
size : 3906249984
stride : 128
type : stripe
status : ok
subsets: 0
devs : 3
spares : 0
*** Active Set
name : pdc_cdgjdcefic
size : 4880859264
stride : 128
type : stripe
status : ok
subsets: 0
devs : 3
spares : 0

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

With GPT (GUID) partitioning, Ubuntu cannot create partitions.
--
a) Disk Utility
Error creating partition: helper exited with exit code 1: In part_add_partition: device_file=/dev/mapper/pdc_cdgjdcefic, start=0, size=100000000000, type=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
Entering MS-DOS parser (offset=0, size=2498999943168)
MSDOS_MAGIC found
found partition type 0xee => protective MBR for GPT
Exiting MS-DOS parser
Entering EFI GPT parser
GPT magic found
partition_entry_lba=2
num_entries=128
size_of_entry=128
Leaving EFI GPT parser
EFI GPT partition table detected
containing partition table scheme = 3
got it
got disk
new partition
Error: Unable to satisfy all constraints on the partition.
ped_disk_add_partition() failed

----
b) GParted 0.5.1

Libparted 2.2
Format /dev/mapper/pdc_cdgjdcefic1 as ext2  00:00:01  (error)

Calibrate /dev/mapper/pdc_cdgjdcefic1  00:00:00  (success)

Path: /dev/mapper/pdc_cdgjdcefic1
Start: 34
End: 204989399
Size: 204989366 (97.75 GiB)
Set partition type of /dev/mapper/pdc_cdgjdcefic1  00:00:00  (success)

New partition type: ext2
Create new ext2 file system  00:00:01  (error)
mkfs.ext2 -L "" /dev/mapper/pdc_cdgjdcefic1

mke2fs 1.41.11 (14-Mar-2010)
Could not stat /dev/mapper/pdc_cdgjdcefic1 --- No such file or directory

The device apparently does not exist; did you specify it correctly?

========================================

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

So,
I initialized it as GPT,
and used fdisk for partitioning: 3 partitions, EXT4, EXT4, swap.
But these partitions were not recognized on Windows 7 x64.
Not compatible?!
The Windows 7 Disk Manager asks me which type to initialize: GPT or MBR?

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

And, on Win7 x64:
I initialized it as GPT,
and partitioned it into 2 NTFS partitions.
But these partitions are not recognized on Ubuntu either.
The GPT was not compatible.
Disk Utility shows me 2.5TB not formatted: not GPT, not MBR.

Additionally:

Disk Utility could not write a GPT/MBR partition table to the 2.5TB array; it failed. GParted could create a GPT/MBR partition table.

[Disk Utility error message]
Error creating partition table: helper exited with exit code 1: In part_create_partition_table: device_file=/dev/mapper/pdc_cdgjdcefic, scheme=3
got it
got disk
committed to disk
BLKRRPART ioctl failed for /dev/mapper/pdc_cdgjdcefic: Invalid argument

Revision history for this message
Danny Wood (danwood76) wrote :

Hi,

Windows cannot use EXT4 unfortunately.

Also, the Ubuntu installer (ubiquity) can't use dmraid. There is a bug somewhere on this.
Disk Utility also seems to have issues with my dmraid devices: my volumes are listed twice, one entry saying free and the other showing the partitions.

Are you sure the partitions you created in Windows did not exist in Ubuntu?
Could you recreate them and then take a screenshot of gparted looking at the drive?

It's also possible there is an issue with the size of the volume calculated from the dmraid metadata.

thanks!

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

>Are you sure the partitions you created in windows did not exist in Ubuntu?
>Could you recreate them and then do a print screen of gparted looking at the drive?

Yes, in Ubuntu there are no partitions. (Perhaps I just cannot see the partition table.)

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

This is a snapshot after creating an MBR on Win7 x64.
Now I can see this partition table, but the sizes are misread.

I made a 2048GB NTFS partition plus 279.37GB that was left unpartitioned,
but Ubuntu's gparted shows me 1024GB and 1.27TiB of blank space.

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

Seeing in Ubuntu

Revision history for this message
Danny Wood (danwood76) wrote :

It seems the partition table is getting corrupted, possibly by an incorrect offset being used.
I would like you to dump the MBR created by both windows and Ubuntu.

First in windows create an MBR structure with the partitions as you just have then boot into Ubuntu.
When in Ubuntu run the following command to dump the MBR:
sudo dd if=/dev/mapper/pdc_cdgjdcefic of=winmbr.img bs=512 count=1

Then open up gparted and create a new partition structure (msdos), setup the same partitions and run this command:
sudo dd if=/dev/mapper/pdc_cdgjdcefic of=linmbr.img bs=512 count=1

This will leave you with two MBRs dumped in the form of .img files. Could you please tar (archive) these and upload them for me to analyse.

Thanks.

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

Thanks.

About my 2nd 2.5TB RAID0:

First I booted Win7 x64 and created an NTFS partition up to the limit (automatically 2048GB).
Next I booted Ubuntu
and manually created a 2TB NTFS partition (leaving 239GB (286079MiB) blank) with GParted.
Then I booted Win7 x64 again;
Disk Management shows me one partition, a 2.5TB RAW disk (not formatted).

By the way,
which component should I consider the cause of the bug?
AMD SB700/800 (Promise)? dmraid? GParted? The kernel?

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

I had made a miscalculation:
2097149MiB versus 2097152MiB.
The latter is the new setting.
I recreated the partition and recreated the MBR.

Disk /dev/mapper/pdc_cdgjdcefic: 2498 GB, 2498996344320 bytes
255 heads, 63 sectors/track, 303819 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

                     Device Boot Start End Blocks Id System
/dev/mapper/pdc_cdgjdcefic1 1 267349 2147480811 7 HPFS/NTFS
/dev/mapper/pdc_cdgjdcefic2 267350 303819 292937242 83 Linux

Revision history for this message
Danny Wood (danwood76) wrote :

Hmmm, that is interesting.
Both MBRs have the same structure, which means the offset is correct.

I can see one issue though.

In the Windows MBR the partition's sector count is listed as 0x7FFFF800 = 2147481600 sectors. The normal block size is 512 bytes, so 2147481600 x 512 = 1099510579200, roughly 1TiB (this is what GParted is reading).

In the Linux MBR the sector count is listed as 0xFFFFE9D6 = 4294961622 sectors. Multiplying by the normal 512-byte block size gives 4294961622 x 512 = 2199020350464, roughly 2TiB.

Obviously the two MBRs disagree about the effective block size; for some unknown reason Microsoft seems to be assuming a larger sector size, which shouldn't be allowed. The 512-byte sector is the limit of the MBR scheme.
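For reference, these figures can be read straight out of the dumped images: in a classic MBR the partition table starts at offset 0x1BE, and each 16-byte entry stores its 32-bit little-endian sector count in bytes 12-15. A rough way to extract it (assuming an x86/little-endian machine, so od decodes the field directly):

# Sector count of the first partition entry (offset 0x1BE + 12, 4 bytes, little-endian):
dd if=winmbr.img bs=1 skip=$((0x1BE + 12)) count=4 2>/dev/null | od -An -tu4
# Multiplied by the 512-byte MBR sector size this gives the partition size in bytes:
secs=$(dd if=linmbr.img bs=1 skip=$((0x1BE + 12)) count=4 2>/dev/null | od -An -tu4)
echo $(( secs * 512 ))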

I never rely on Microsoft partitioning tools as they have a very bad reputation and history. It's best to do all your partitioning with one program so that the structure stays consistent; I normally use GParted. Is the NTFS partition you created in Ubuntu then visible in Windows as the same partition?

Anyway you do need to use gpt partitioning to use large volumes like this. In gparted can you create a gpt partition structure and create the same NTFS volume and see if it is visible in windows? (In Gparted do Device -> New Partition Structure. Click advanced and change from msdos to gpt)

Revision history for this message
Danny Wood (danwood76) wrote :

Oh dear.

It seems this version of dmraid won't handle gpt!
So you may be a little stuck with using partitions of that size.

There is a thread here where someone has made a patch: http://ubuntuforums.org/showthread.php?t=1369224
I will have a look at it later to see if I can incorporate it into my dmraid in my ppa!

Revision history for this message
Danny Wood (danwood76) wrote :

I have done some further digging and it seems that kpartx can read the gpt partition table from dmraid.
(sudo apt-get install kpartx)

Usage:
kpartx -a /dev/mapper/pdc_cdgjdcefic

Use that command once you have booted or created the gpt structure and you should then have the /dev/mapper/X block devices. This is a bit of an ugly hack but it might be a workaround until dmraid can read gpt.

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

>With the NTFS partition you created in Ubuntu is the same partition then visible in windows?
As written previously:

On RAID array 1, yes, it is visible; I can use/read/write an NTFS partition created by Ubuntu in Windows, no problem. There is interoperability in both directions between Ubuntu and Windows XP/7.

On RAID array 2, I cannot see the partitions. It appears as one "RAW partition" in Windows.

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

>Anyway you do need to use gpt partitioning to use large volumes like this.

As written previously:
Ubuntu cannot create a GUID partition table on my 2nd array,
neither with gparted nor with Disk Utility.

On the other hand, parted on Ubuntu can make a GUID partition table:

(parted) p
model: Linux device-mapper (striped) (dm)
disk /dev/mapper/pdc_befgjjibfc: 2301GB
sector size (log/ph): 512B/512B
partition table: gpt

Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  2301GB  2301GB

Currently my mapper names have changed, because I reset the RAID sizes:
1st: 2.0TiB (MBR max)
2nd: 2.09TiB

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

(Using kpartx, following your procedure.)
Ubuntu can create partitions on the GPT disk (2nd array) with gparted.

Disk /dev/mapper/pdc_befgjjibfc: 2300 GB, 2300997404160 bytes
255 heads, 63 sectors/track, 279747 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

                     Device Boot Start End Blocks Id System
/dev/mapper/pdc_befgjjibfc1 1 12398 99586903 83 Linux *1
Warning: Partition 1 does not end on cylinder boundary.
/dev/mapper/pdc_befgjjibfc2 12398 279748 2147488875 83 Linux
Warning: Partition 2 does not end on cylinder boundary.

*1: this partition is NTFS.

pdc_bedhddhdfc: 1st array, 2.0TiB
pdc_befgjjibfc: 2nd array, 2.09TiB

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

Then I rebooted and switched to Win7 x64.
Disk Manager asked me which type to initialize, GPT or MBR; there is no partition table.

Same situation as comment #46. I have tried again and again.

And so
we are going around in circles.

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

And I initialized it as Windows told me, as MBR.
I created a 2TiB partition.
I decided to throw away the remaining fraction, 0.09TiB.

Then I rebooted, switched to Ubuntu, and ran gparted.
A strange phenomenon occurs:
there is still a GPT partition table.
http://dl.dropbox.com/u/6626165/Screenshot--dev-mapper-pdc_befgjjibfc%20-%20GParted.png
Even though the disk was re-initialized, the old partition information was not lost.

## fdisk /dev/mapper/pdc_befgjjibfc
GNU Fdisk 1.2.4

Warning: /dev/mapper/pdc_befgjjibfc contains GPT signatures, indicating that it
has a GPT table. However, it does not have a valid fake msdos partition table,
as it should. Perhaps it was corrupted -- possibly by a program that doesn't
understand GPT partition tables. Or perhaps you deleted the GPT table, and are
now using an msdos partition table. Is this a GPT partition table?
   y Yes
   n No

I chose No; then fdisk finished.
Going around in circles.

So I chose Yes,
and it completed.

Next I initialized it as MBR with gparted and created a 2TiB NTFS partition labeled R2D1.
http://dl.dropbox.com/u/6626165/gp001.png
http://dl.dropbox.com/u/6626165/gp002.png

I mounted R2D1 and put 16 pictures on it.

Continue...

---
Currently, both the MBR and the GPT cases
have a problem. Is the cause obvious?

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

I booted WinXP 32-bit.
In Computer Management,
the 2nd RAID array shows as basic, normal, 4095.99GB. <--- abnormal value

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

Next I booted Win7 x64.
In Computer Management,
the 2nd RAID array shows as basic, normal, 2142.97GB RAW disk, not NTFS, so I cannot see the pictures I put there from Ubuntu.

--
I have also been watching the World Cup on TV, but all things considered I should go back to my core business next Monday.
I am very grateful to Danny. I am really sorry, but I need some time for business
and will come back. 90% of my work is handled on Ubuntu.

Revision history for this message
Danny Wood (danwood76) wrote :

Hmmm.

I think there is an issue with the offset in that case.
And earlier I was fooling myself by reading the same MBR back twice.

It's hard to reverse engineer over a long distance. If I could find a cheap Promise controller I would buy one to have a go at fixing this, but unfortunately it looks like we have hit a snag. Without being able to access the disks and the hardware directly I don't think we can go much further.

I have done some digging and it appears that the promise fakeraid controllers had issues with the 32-bit LBA (which is the issue here) and this is why I cannot really see how to fix it easily. They fixed it with a patch in their drivers and firmware. But they are a very closed source group and I couldn't find any more info. I have emailed them but I don't expect a response.

Sorry but it looks like we aren't going to solve this issue.

As a workaround you could create a third RAID disk. So you have 2 x 2TB and 1 x 500GB drives. This will allow you to see all drives in all OSes and have the speed advantage of RAID0.

Revision history for this message
Danny Wood (danwood76) wrote :

For completeness I am attaching the debdiff for my attempt at enabling the extended LBA.
I think the offset is wrong but it is documented in the header file.

I am sorry we could not fix this.

tags: added: patch
Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

Thanks.
>So you have 2 x 2TB and 1 x 500GB drives.
Yes, that is a good idea. However, I tried that last week.
I can create two arrays, but not three.
It is a limitation of this RAID BIOS (on this Asus motherboard): only two arrays.

Maybe I have seen some hope.

I initialized the 2.09TiB array as MBR and created a 2048GB NTFS partition and the extra 94.96GB as NTFS.
Both of these partitions can be read/written/used on Win7 x64 and XP 32-bit, no problem.

http://dl.dropbox.com/u/7882415/2nd-win.PNG

I am rebooting now to switch to Ubuntu.

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

On Ubuntu, in gparted:
http://dl.dropbox.com/u/6626165/Screenshot--dev-mapper-pdc_befgjjibfc%20-%20GParted.png

The numbers are just half of what they should be. This looks like a simple miscalculation; the MBR itself is correct.

Revision history for this message
Danny Wood (danwood76) wrote :

Well obviously something isn't reading the MBR correctly.
The MBR is read by the dmraid code so this could be why.

Could you dump the MBR again and post it up?
sudo dd if=/dev/mapper/pdc_cdgjdcefic of=linmbr2.img bs=512 count=1

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

I wonder that "MBR is correct." is my opnion.
sudo dd if=/dev/mapper/pdc_befgjjibfc of=linmbr2.img bs=512 count=1

In my environment,

a strange phenomenon has been occurring occasionally since around the 1st of July.
The command (sudo grub-install /dev/mapper/myraid1) sometimes works and sometimes does not.
The behaviour is not consistent.

Right now it does execute, e.g.:
sudo grub-install /dev/mapper/pdc_bedhddhdfc
sudo grub-install /dev/mapper/pdc_befgjjibfc
Both give the same output:

You have a memory leak (not released memory pool):
 [0x259c8f0]
You have a memory leak (not released memory pool):
 [0x23848f0]
You have a memory leak (not released memory pool):
 [0x21e2150]
You have a memory leak (not released memory pool):
 [0x235b150]
You have a memory leak (not released memory pool):
 [0x1f228f0]
You have a memory leak (not released memory pool):
 [0x1141170]
Installation finished. No error reported.

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

Additional information:

I swapped the motherboard from an Asus M4A78-EM/1394 to an Asus M3A78-T.
They have nearly identical SB7xx RAID chips.

Asus M4A78-EM/1394: AMD 790GX + SB750, RAID BIOS 3.0.1540.59 and 3.0.1540.39 (I tested both).
Asus M3A78-T: AMD 780G + SB710, RAID BIOS version I cannot check right now.

I get the same GParted snapshot for the 2nd array on the M3A78-T: JUST HALF THE SIZE.
(Both motherboards have the latest BIOS updates.)

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

I swapped the motherboard again, restoring the original,
and here is a correction:

RAID BIOS versions:
M4A78-EM/1394: AMD 790GX + SB750, RAID BIOS 3.0.1540.39
M3A78-T: AMD 780G + SB710, RAID BIOS 3.0.1540.39 and 3.0.1540.59

Revision history for this message
Nishihama Kenkowo (hitobashira) wrote :

I have looked at the SB7x0 development guide; it was difficult for me to understand.
http://support.amd.com/us/Embedded_TechDocs/43366_sb7xx_bdg_pub_1.00.pdf

By the way, this bug exists in all Linux distros (CentOS, Red Hat 6.0 beta and current, Fedora),
and apparently in Acronis products as well.
If nobody fixes the bug, will we never be able to use this hardware normally?

Is the maintainer of dmraid employed at Red Hat?

One option is simply to wait, perhaps half a year (or two years), and hope for the best.

But is there also a possibility that it is left unfixed? From a marketing point of view, time may solve it, because large-capacity hard disks are becoming the norm.

There are parts I feel I understand, but I do not fully understand the situation.

Should I just wait? Unfortunately, I have two SB7xx motherboards and I do not have the funds to buy additional hardware right now.

Or will it be the same on other motherboards, such as Intel chipsets, as long as dmraid is used?

Even just being able to pin down the bug would make me happy.

Thanks, all.

Revision history for this message
Phillip Susi (psusi) wrote :

Danny, it sounds like you found and fixed the problem from your comments. Can you post the patch so we can put this one to bed?

tags: removed: 2tb dmraid patch
Changed in dmraid (Ubuntu):
status: New → Triaged
importance: Undecided → Medium
Revision history for this message
Danny Wood (danwood76) wrote :

Unfortunately, no, I didn't. I don't have the actual Promise hardware, so debugging this issue was very hard.
Nishihama Kenkowo helped me a lot, but I never completed the work. Debugging hardware is much easier when it is sat in front of you.

I think I was close, but I decided to give up as I couldn't test what I had done.

I will have a look through that documentation. If it is what I read before, it is most likely written from the BIOS point of view and has nothing to do with the metadata. I was planning on buying an SB7xx motherboard but didn't find the time.

Revision history for this message
Andreas Allacher (ghost-zero5) wrote :

Is my issue related to this bug?
I am using the SB750 in RAID mode as I have one RAID1 array. All other drives are "normal" drives.
I now tried to add a "normal" 3TB drive and although I am able to create a GPT and a partition on Windows or Linux, the GPT isn't non-existent in the other OS, e.g. if I create this in Ubuntu, Windows doesn't find any partition table and vice versa.

Is so, how much longer do you think this bug fix will take?

Revision history for this message
Andreas Allacher (ghost-zero5) wrote :

Btw. if my bug is really related to this then I doubt that it is a dmraid bug but more likely a kernel driver issue..?

Revision history for this message
Danny Wood (danwood76) wrote :

If the 'normal' drive doesn't have any RAID metadata, i.e. it has not been used in a fakeraid before, then you shouldn't suffer from this bug.
This bug is primarily to do with the metadata not being read properly by dmraid, so the device isn't exposed properly to the rest of the system.

In particular, the most significant bits of the sector count are not read properly, so you end up with odd drive capacities.

The bug will take forever at the moment.
I don't have this piece of hardware and I don't have the relevant documentation (AMD won't release the info)!
I doubt I will have a Promise RAID any time soon unless I manage to get a PCI card cheap!

I could set up a test solution, but I don't have spare 2TB drives either.

Revision history for this message
Phillip Susi (psusi) wrote :

Danny, you don't need a 2TB drive to debug this. You can either use a virtual machine or the loopback driver. I didn't notice that there was a metadata sample attached to this bug report. I might take a look at it.
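For instance, a sparse image file is enough to stand in for a disk larger than 2TB without owning one (a sketch; the device name losetup prints will vary):

# Create a sparse 2.5TB backing file (uses almost no real disk space):
truncate -s 2500G fake-disk.img
# Attach it as a loop device; prints the device it allocated, e.g. /dev/loop0:
sudo losetup -f --show fake-disk.img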

Revision history for this message
Phillip Susi (psusi) wrote :

The metadata in dmraid-pdc.tar is for an array that is smaller than 2TB, so does not overflow the 32bit sector count. Do we not have a sample of the metadata from a raid suffering from this problem?

Revision history for this message
Danny Wood (danwood76) wrote :

I was wondering if that was possible.

metadata.tar.gz is a full dump.
You should see a second raid set which is 300 GB or so, this is supposed to be 2.5TB but has the top bits truncated.

With my patch it detects the raid set correctly but windows was using a larger sector size and so Ubuntu and Windows disagreed about the MBR on the disk.
That is as far as we got and where my debdiff should finish.

Revision history for this message
Phillip Susi (psusi) wrote :

I am not sure what you mean by second raid set. The pdc format only defines a single raid set with up to 8 disks. In the original size I see 00a5 d4e8, and at offset 232 where your patch defines to be the upper 16 bits I see 00 00.

Revision history for this message
Phillip Susi (psusi) wrote :

Ok, I think I am starting to see now. This PDC format is just really bad. Instead of having a single record that can define more than one array, and specifies the region of interest on each component disk like some of the more sane formats, it just defines additional complete records with all of the unused space that each one has, and raid.start defines the starting offset of the array, for all disks ( so they all start at exactly the same spot ). dmraid-pdc.tar appears to have been created by dmraid but it only dumped the first record. Your script dumped all 3 and they appear to be in metadata.tar.gz, but without the corresponding .offset files. I can't work out the original locations of each of the 3 records so that I can place them at the proper offset.

Revision history for this message
Phillip Susi (psusi) wrote :

Nevermind, I actually read the script and figured it out. Maybe you should forward the patch upstream for review?

Revision history for this message
Phillip Susi (psusi) wrote :

Looks like it works:

*** Set
name : pdc_cdfjcjhfhe
size : 3906249984
stride : 128
type : stripe
status : ok
subsets: 0
devs : 3
spares : 0
*** Set
name : pdc_cdgjdcefic
size : 585891840
stride : 128
type : stripe
status : ok
subsets: 0
devs : 3
spares : 0

Revision history for this message
Phillip Susi (psusi) wrote :

There seem to be some unrelated changes that should be discarded:

1) You add 21_fix_jmicron_naming.patch to debian/patches/series
2) autoconf/config.sub and config.guess were touched, probably from autoreconf

I just wanted to make sure that these weren't intentional.

Revision history for this message
Danny Wood (danwood76) wrote :

Hi Phillip,

Sorry for the late response, I don't get much time for launchpad these days.

The jmicron name fixing patch is there because I have a jmicron RAID on my testing machine and it's running 10.04.
Interestingly, I tried 10.10 the other day and that patch had been dropped. I think my jmicron patch was accepted upstream though, so no matter.

This patch has only been tested with this one RAID set, so the offset for the extra bits may not be correct.
Ideally, large sets need to be created and tested in Windows and with this patch; I could try to emulate this, I guess.
Does a virtual Windows XP handle fake fakeraid disks in the same way that Linux does?

Revision history for this message
Phillip Susi (psusi) wrote :

No, the Windows fakeraid drivers load and bind only to the specific fakeraid hardware they were designed for. I suppose if you can configure the virtual machine to use the correct PCI ID of the fakeraid instead of the usual generic AHCI ID then it should work.

I think I'm going to clean this patch up a bit, and add my own to fix pdc to correctly dump and display the extension records instead of just the primary record, and forward them upstream if you don't mind.

Revision history for this message
Danny Wood (danwood76) wrote :

I don't mind at all Phillip.
Do what you like!

Revision history for this message
Phillip Susi (psusi) wrote :

I must have been drunk by the time I posted that last night. I got the same wrong results as Nishihama. I've cleaned up the patch today and added my own and now I get correct results:

dmraid -s

*** Set
name : pdc_cdfjcjhfhe
size : 3906249984
stride : 128
type : stripe
status : ok
subsets: 0
devs : 3
spares : 0
*** Set
name : pdc_cdgjdcefic
size : 4880859264
stride : 128
type : stripe
status : ok
subsets: 0
devs : 3
spares : 0

Note the ~4.88-billion-sector size instead of ~586 million.

I have also fixed dmraid -n to display both detected records:

/dev/dm-5 (pdc):
0x000 promise_id: "Promise Technology, Inc."
0x018 unknown_0: 0x20000 131072
0x01c magic_0: 0x4c261ec7
0x020 unknown_1: 0x21f4 8692
0x024 magic_1: 0x4c261ec7
0x028 unknown_2: 0x21f4 8692
0x200 raid.flags: 0xfdfeffc0
0x204 raid.unknown_0: 0x7 7
0x205 raid.disk_number: 0
0x206 raid.channel: 0
0x207 raid.device: 0
0x208 raid.magic_0: 0x8b1c0626
0x20c raid.unknown_1: 0xf 15
0x210 raid.start: 0x0 0
0x214 raid.disk_secs: 1302083328
0x218 raid.unknown_3: 0xffffffff 4294967295
0x21c raid.unknown_4: 0x1 1
0x21e raid.status: 0xf
0x21f raid.type: 0x0
0x220 raid.total_disks: 3
0x221 raid.raid0_shift: 7
0x222 raid.raid0_disks: 3
0x223 raid.array_number: 0
0x232 raid.total_secs_h: 0
0x224 raid.total_secs_l: 3906249984
0x228 raid.cylinders: 65534
0x22a raid.heads: 254
0x22b raid.sectors: 63
0x22c raid.magic_1: 0x8ca00626
0x230 raid.unknown_5: 0xf 15
0x234 raid.disk[0].unknown_0: 0x7
0x236 raid.disk[0].channel: 0
0x237 raid.disk[0].device: 0
0x238 raid.disk[0].magic_0: 0x8b1c0626
0x23c raid.disk[0].disk_number: 15
0x240 raid.disk[1].unknown_0: 0x207
0x242 raid.disk[1].channel: 1
0x243 raid.disk[1].device: 0
0x244 raid.disk[1].magic_0: 0x8b1c0626
0x248 raid.disk[1].disk_number: 65551
0x24c raid.disk[2].unknown_0: 0x407
0x24e raid.disk[2].channel: 2
0x24f raid.disk[2].device: 0
0x250 raid.disk[2].magic_0: 0x8b1d0626
0x254 raid.disk[2].disk_number: 131087
0x7fc checksum: 0x828b8e1c Ok
/dev/dm-5 (pdc):
0x000 promise_id: "Promise Technology, Inc."
0x018 unknown_0: 0x20000 131072
0x01c magic_0: 0xe1e2e3e4
0x020 unknown_1: 0xdddedfe0 3722371040
0x024 magic_1: 0xd9dadbdc
0x028 unknown_2: 0xd7d8 55256
0x200 raid.flags: 0xfdfeffc0
0x204 raid.unknown_0: 0x7 7
0x205 raid.disk_number: 0
0x206 raid.channel: 0
0x207 raid.device: 1
0x208 raid.magic_0: 0x8ca00626
0x20c raid.unknown_1: 0x100000f 16777231
0x210 raid.start: 0x4d9c3700 1302083328
0x214 raid.disk_secs: 1628062768
0x218 raid.unknown_3: 0xffffffff 4294967295
0x21c raid.unknown_4: 0x1 1
0x21e raid.status: 0xf
0x21f raid.type: 0x0
0x220 raid.total_disks: 3
0x221 raid.raid0_shift: 7
0x222 raid.raid0_disks: 3
0x223 raid.array_number: 1
0x232 raid.total_secs_h: 1
0x224 raid.total_secs_l: 585891968
0x228 raid.cylinders: 65534
0x22a raid.heads: 254
0x22b raid.sectors: 63
0x22c raid.magic_1: 0x8d390626
0x230 raid.unknown_5: 0xf 15
0x234 raid.disk[0].unknown_0: 0x107
0x236 raid.disk[0].channel: 0
0x237 raid.disk[0].device: 0
0x238 raid.disk[0].magic_0: 0x8ca00626
0x23c raid.disk[0].disk_number: 16777231
0x240 raid.disk[1].unknown_0: 0x307
0x242 raid.disk[1].channel: 1
0x243 raid.disk[1].device: 0
0x244 raid.disk[1].magic_0: 0x8ca00626
0x248 raid.disk[1].disk_number...

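To spell out how the two size fields of the second record above combine (a sketch only: the file name is hypothetical, it assumes the dumped metadata block starts at the same 0x000 promise_id offset shown here, and it assumes a little-endian machine):

# 32-bit low word of the sector count at 0x224, high word at 0x232, as the patch reads them:
low=$(dd if=pdc_metadata.dat bs=1 skip=$((0x224)) count=4 2>/dev/null | od -An -tu4)
high=$(dd if=pdc_metadata.dat bs=1 skip=$((0x232)) count=2 2>/dev/null | od -An -tu2)
echo $(( (high << 32) | low ))
# For the second record: (1 << 32) | 585891968 = 4880859264 sectors (~2.5TB);
# unpatched dmraid keeps only the low word, ~586 million sectors (~300GB).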

Revision history for this message
Phillip Susi (psusi) wrote :

Danny, how did you discover this upper 16 bits of size? Was it from experimentation or from some documentation? I ask because I have been working with a sample of pdc metadata from another bug and found that this patch identified a value of total_secs_h of 256 when it should be 0. This makes me think that the upper 8 bits are used for something else and the actual total_secs_h is only 8 bits, not 16.

Revision history for this message
Danny Wood (danwood76) wrote :

The documentation is unavailable, so it was found through experimentation; the only bits in the metadata that were free and happened to hold the correct values were these ones.

That's why I made my comments about testing in post 87.

The upper 8 bits could be used for anything; I guess they just happened to be 0 in this example.

I guess you need to update the upstream patch again!
Sorry about that.

Revision history for this message
Phillip Susi (psusi) wrote :

No problem, I just wanted to make sure you didn't have reason to think the field shouldn't be reduced to 8 bits.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package dmraid - 1.0.0.rc16-4.1ubuntu2

---------------
dmraid (1.0.0.rc16-4.1ubuntu2) natty; urgency=low

  * Added 21_fix_testing.patch: Testing with dm devices was failing
    on Ubuntu because /dev/dm-X is the actual device node, but the
    code wanted it to be a symlink. Fixed dm_test_device() to test
    that the file ( or node it points to ) is a block device, which
    seems a much more appropriate test.
  * Added 22_add_pdc_64bit_addressing.patch: PDC metadata locations for
    high bytes of raid set sector count (LP: #599255)
    [ Danny Wood <email address hidden> ]
  * Added 23_pdc_dump_extended_metadata.patch: PDC supports up to 4
    sets of metadata to describe different arrays. Only the first
    set was being dumped with dmraid -rD or -n. Also fixes the
    .offset file, which was always 0 instead of the actual offset.
  * Added 24_drop_p_for_partition_conditional.patch:
    dmraid was changed at one point to insert a 'p' between
    the base device name and the partition number. For
    some time debian and ubuntu reversed this change. This
    patch modifies the behavior to add the 'p' iff the last
    character of the base name is a digit. This makes
    dmraid comply with the behavior used by kpartx and
    "by linux since the dawn of time".
  * Fix once again the jmicron naming bug, upstream fix does not work
    (LP: #576289)
    [ Danny Wood <email address hidden> ]
  * Breaks libparted0debian1 (<< 2.3-5ubuntu4)
 -- Phillip Susi <email address hidden> Fri, 04 Mar 2011 13:42:01 -0500

Changed in dmraid (Ubuntu):
status: Triaged → Fix Released
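The naming rule described in 24_drop_p_for_partition_conditional.patch amounts to something like this (a shell sketch, not the patch's actual code; the dm-5 example base name is hypothetical):

# Append a partition number, inserting 'p' only when the base name ends in a digit:
partition_name() {
    local base=$1 num=$2
    case "${base: -1}" in
        [0-9]) echo "${base}p${num}" ;;
        *)     echo "${base}${num}" ;;
    esac
}
partition_name pdc_cdfjcjhfhe 1   # -> pdc_cdfjcjhfhe1
partition_name dm-5 1             # -> dm-5p1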
Revision history for this message
mercury80 (pviken) wrote :

I am running dmraid - 1.0.0.rc16-4.1ubuntu3.
When I try to format a striped 2x2TB RAID, this is the result:

Error creating partition table: helper exited with exit code 1: In part_create_partition_table: device_file=/dev/dm-0, scheme=0
got it
got disk
committed to disk
BLKRRPART ioctl failed for /dev/dm-0: Invalid argument

Revision history for this message
Phillip Susi (psusi) wrote : Re: [Bug 599255] Re: dmraid fails to read promise RAID sector count larger than 32-bits

On 6/22/2011 5:08 AM, mercury80 wrote:
> Error creating partition table: helper exited with exit code 1: In part_create_partition_table: device_file=/dev/dm-0, scheme=0
> got it
> got disk
> committed to disk
> BLKRRPART ioctl failed for /dev/dm-0: Invalid argument

What program is this? It appears to be buggy so you should file a bug
against that package.

This error is unrelated to this bug report though. Also your array is
so large that it must use GPT instead of the MSDOS partition table, and
that is currently unsupported by dmraid.

Revision history for this message
mercury80 (pviken) wrote :

@ Phillip Susi
> What program is this? It appears to be buggy so you should file a bug
> against that package.

Ok. Will do. Using the Disk Utility that comes with Ubuntu 10.10+

> This error is unrelated to this bug report though. Also your array is
> so large that it must use GPT instead of the MSDOS partition table, and
> that is currently unsupported by dmraid.

Ok. This happened when I tried to format the disk to GPT. More info:
http://ubuntuforums.org/showpost.php?p=10967372&postcount=4

Revision history for this message
Kim (chenxin20101019) wrote :

Hi Phillip Susi
I have the same problem.
————————————————————————
kim@kim-desktop:~$ sudo dmraid -s
[sudo] password for kim:
*** Active Set
name : pdc_bbjaiahci
size : 3518828800
stride : 128
type : stripe
status : ok
subsets: 0
devs : 2
spares : 0
kim@kim-desktop:~$ dmraid -V
dmraid version: 1.0.0.rc16 (2009.09.16) shared
dmraid library version: 1.0.0.rc16 (2009.09.16)
device-mapper version: unknown

By the way, I think the RAID is JMicron's, NOT Promise's;
a Windows utility thinks so. [My motherboard is a GIGABYTE 880GA-UD3H]

Sorry, my English isn't very good. :-)
Thank you! I need this software very much.
I hope you can fix the bug! :-)

Revision history for this message
HenryC (henryc) wrote :

Can this bug be reopened, since the original fix was reverted and the problem still exists? I have an 8TB pdc raid set that I have run into this issue with, and have been trying to fix it... I'd be happy to help if anyone more familiar with dmraid wants to try to fix this as well.

I have attached a metadata dump of my current array (RAID0, 4 disks, 2TB each)

Revision history for this message
Phillip Susi (psusi) wrote :

Can you post the output of fdisk -lu or otherwise list the exact sector count of the drives?

Changed in dmraid (Ubuntu):
status: Fix Released → Triaged
Revision history for this message
Phillip Susi (psusi) wrote :

Also could you boot into windows and find out what it thinks the exact sector count of the array is?

Revision history for this message
HenryC (henryc) wrote :

# fdisk -lu (all 4 disks have exactly the same sizes)
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

I could not find a way to get the sector count for the array in windows...

Additionally, if it helps, RAID option ROM reports the CHS as 65535/255/63, but I doubt the values are correct.
Capacity for the whole array should be 7 999 999 967 232 bytes, and 2 000 331 825 152 bytes for each LD.

Revision history for this message
Phillip Susi (psusi) wrote :

Would it be possible for you to rebuild the array using only 3 drives, and capture that metadata?

Looking over that first set of metadata, I am starting to think that the higher order bits simply are not stored at all, and the total size must be computed from the size of each disk and the number of disks. The problem with this is that not all of each disk is used, and I have not been able to figure out how much it rounds down by.

Revision history for this message
HenryC (henryc) wrote :

I agree: the high bits are either not stored at all, or they are stored in the area dmraid reads as filler2, which seems unlikely (I assume the high byte for my array should be 0x03). The problem with calculating the size from the sector counts of each disk is that the resulting size seems to be ~65k sectors too large... I also tried taking the high byte from multiplying up the sector count of a single disk and adding it to the total sector count in the metadata, but that didn't seem to work either.

I will see if I can rebuild the array, but I'll have to figure out a way to back up the data on the array first...

Revision history for this message
Danny Wood (danwood76) wrote :

Hi Phillip and Henry,

I have taken a quick look at this and compared the latest metadata with Nishihama's from before, and it looks like the high bits might actually be at offset 0x2E8 (in filler2).

Basically we have 3 metadata sets in this bug report.

Nishihama's are in metadata.tar.gz: the first set (2TB) is sda1.dat and the second (2.5TB) is sda2.dat; the third is the latest dump from Henry.

Comparing all three:
The high bits for the 2TB array will be 0x0000
The high bits for the 2.5TB array should be 0x0001
The high bits for the 8 TB array should be 0x0003

Comparing each metadata dump, the value at 0x2E8 is correct in each instance.

Obviously this is within filler2 but occurs in the section that appears to be all 0's.

Are there any other bugs with metadata dumps that we can compare this value against?
I remember the original patch broke a lot of arrays; were there dumps in those bug reports?

Best regards,
Danny
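
A minimal sketch of how these offsets could be combined, assuming the low 32 bits stay in the existing 32-bit sector-count field of struct pdc and the byte at 0x2E8 holds bits 32 and up. Only the 0x2E8 offset comes from the dumps in this thread; TOTAL_SECS_OFF below is a placeholder, not the real structure offset, and pdc_total_sectors is not an existing dmraid function:

    /*
     * Sketch only: combine the 32-bit PDC sector count with the extra
     * high byte observed at offset 0x2E8 in the metadata dumps above.
     * TOTAL_SECS_OFF is a placeholder and must match the real offset of
     * the total_secs field in dmraid's struct pdc.
     */
    #include <stdint.h>

    #define PDC_HIGH_BITS_OFF  0x2E8   /* observed: 0x00 for 2TB, 0x01 for 3TB, 0x03 for 8TB */
    #define TOTAL_SECS_OFF     0x98    /* placeholder offset of the 32-bit sector-count field */

    static uint64_t pdc_total_sectors(const uint8_t *meta)
    {
            /* PDC metadata fields are little-endian on disk. */
            uint32_t lo = (uint32_t)meta[TOTAL_SECS_OFF]
                        | (uint32_t)meta[TOTAL_SECS_OFF + 1] << 8
                        | (uint32_t)meta[TOTAL_SECS_OFF + 2] << 16
                        | (uint32_t)meta[TOTAL_SECS_OFF + 3] << 24;
            uint64_t hi = meta[PDC_HIGH_BITS_OFF];

            return (hi << 32) | lo;    /* e.g. hi = 0x03 for the 8TB array */
    }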

Revision history for this message
Danny Wood (danwood76) wrote :

Metadata from here also seems to agree:
https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/770600/+attachment/2094374/+files/dmraid.pdc.tar.gz

That dump shows high bits of 0x0000 at 0x2E8 for a 2TB array.

Revision history for this message
Danny Wood (danwood76) wrote :

Henry, if you manage to back up your data, you could confirm this by creating several different-sized arrays.

2TB will create 0x0000 at 0x2E8
3TB will create 0x0001 at 0x2E8
6TB will create 0x0002 at 0x2E8
8TB will create 0x0003 at 0x2E8

After each array creation, please dump the metadata and post it here for me to analyse as well.

Revision history for this message
HenryC (henryc) wrote :

I created the arrays you asked for, and it seems 0x2E8 is indeed the correct location. The values I got are 0x00, 0x01, 0x02 and 0x03 as assumed.

Revision history for this message
Danny Wood (danwood76) wrote :

Excellent, thank you for doing that.

I will cook up a patch later, similar to my old one, that uses this new offset.

Revision history for this message
Phillip Susi (psusi) wrote :

Good eye! I was comparing those two sets of metadata trying to find a location that appeared to have the correct value in both cases but missed that.

Revision history for this message
Danny Wood (danwood76) wrote :

Hi Phillip,

Attached is a patch that should fix the issue based on the ubuntu 12.10 version of dmraid.
It compiles but is untested; are you able to test it for me?

Do you need me to create a debdiff or is it easy for you to do?
I haven't had my build environment set up at home since I first attempted to fix this bug (~2 years ago?).

Best regards,
Danny

Revision history for this message
HenryC (henryc) wrote :

I have been doing some testing with Danny's patch, and it seems something is still missing... The patch works fine, but the sector counts in the metadata don't quite add up, and I still cannot get the array to work.

I did some calculations based on the disk size, and it seems with the 8TB array the sector count in the metadata is 1024 sectors less than what it should be. The disk size without a partition table is 7629395MB, which would be 15625000960 sectors, but according to the metadata the sector count is 15624999936...

I feel like there is some offset or rounding missing, but it seems odd that it would only be an issue with larger arrays.

Revision history for this message
Phillip Susi (psusi) wrote :

How did you determine the disk size?

Revision history for this message
HenryC (henryc) wrote :

Sorry about the sector counts; I did the calculations again, and it seems that the sector count in the metadata is probably correct. I got the disk size in megabytes from the Windows disk manager and calculated the sector count from that, but since the disk size is rounded to megabytes and the sector size is 512B, the sector count can be off by about one megabyte, which is 2048 sectors.
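
For reference, a quick check of those numbers (disk size in MB from Windows, 1 MB = 2^20 bytes, 512-byte sectors) shows the metadata value sits comfortably inside that one-megabyte rounding window:

    7629395 \times \frac{2^{20}}{512} = 7629395 \times 2048 = 15\,625\,000\,960
    15\,625\,000\,960 - 15\,624\,999\,936 = 1024 < 2048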

Now I feel like I am doing something wrong when I try to read the disks, since the size seems to be correct, but I cannot access any partition on the array. I tried parted, but it only says "unrecognised disk label", and I tried manually running kpartx, but it doesn't detect any partitions.

Revision history for this message
Phillip Susi (psusi) wrote :

What does dmsetup table show?

Revision history for this message
HenryC (henryc) wrote :

# dmsetup table
pdc_bdfcfaebcj: 0 15624999936 striped 4 256 8:0 0 8:16 0 8:32 0 8:80 0

Revision history for this message
Phillip Susi (psusi) wrote :

It appears that on smaller arrays the pdc metadata is in a sector near the end of the drive, but on the larger ones it is at the beginning. With the metadata at the start of the drive, some offset would have to be added before the first raid stripe, which dmraid does not seem to have done.

Revision history for this message
Danny Wood (danwood76) wrote :

Looking back I think this was the issue Nishihama Kenkowo had with the original patch.

Sorry if you are already working on this offset issue but I thought I would add some thoughts.

Looking through the dmraid code I cannot see where it would add an offset.
Would the offset simply be the metadata size of 4 sectors or 2kB?

Is it possible to simulate this offset with kpartx? I seem to remember an offset option when mounting disk images.

Revision history for this message
HenryC (henryc) wrote :

I tried to look into calculating the offset, but if I understand the metadata detection code correctly, that is not the problem I am having. The metadata for my array is found within the first loop in pdc_read_metadata, at an offset based on end_sectors, so I assume it is at the end of the disk.

Revision history for this message
Danny Wood (danwood76) wrote :

If you have created a correct GPT then kpartx should find them.

Does dmraid detect the correct RAID layout?
Ie stride size, count, etc.

You need to investigate the partitioning on the disk. Make sure your data is backed up, as you are likely to lose the partitioning here.

Dump the current GPT to a file (the first 17kB of the array in total, I think), then recreate the GPT using gparted or gdisk with the same partition layout and dump it again.

Take a look at the files and try to analyse the GPT; also post both files here.

Revision history for this message
Phillip Susi (psusi) wrote :

According to the .offset files in your metadata, it was found at offset 0, i.e. the start of the disk. Are you sure that is not where it is?

Revision history for this message
HenryC (henryc) wrote :

Sorry for the late response, I haven't had access to my computer over the weekend.

I dumped the first 17kB of the array, both with the formatting from Windows and after formatting it with gparted. It seems the partition table from Windows is offset further into the disk than the one created by gparted. I am guessing the partition tables start at 0x200 for gparted and at 0x800 for the table created in Windows (I am not familiar with the GPT format). Both dumps are attached.

The metadata is on sectors 3907029105 to 3907029109.

Revision history for this message
Danny Wood (danwood76) wrote :

Does the gparted version work in Ubuntu?
It doesn't appear to have a protective MBR as required by the GPT spec, but this may not be an issue.

It appears that Windows believes the logical sector size of the drive is 2048 (0x800) bytes, whereas Ubuntu thinks it is 512 bytes (0x200), since the GPT header is located at LBA 1.

I am unsure where the LBA size comes from.
Phillip, is it read from the metadata?

Revision history for this message
Phillip Susi (psusi) wrote :

That is really strange. I did not think Windows could handle devices with non-512-byte sectors. There does not appear to be any known field in the pdc header that specifies the sector size. It could be that it just uses 2k for anything over 2TB. Actually, I wonder if it uses whatever sector size would be required for MBR to address the whole thing? So maybe it goes to 1k for 2-4 TB, then 2k for 4-8 TB?

Henry, can you dump the first few sectors of the individual disks?

Revision history for this message
HenryC (henryc) wrote :

I dumped the first 6 sectors of each individual disk, both with Windows formatting and dmraid formatting. I can't make much out of the data, but hopefully it's helpful...

Revision history for this message
Phillip Susi (psusi) wrote :

That confirms that the metadata is not at the start of the disk. It looks like the problem is just the sector size. Could you try recreating the array such that the total size is around 3 TB and see if that gives a sector size of 1k?

Revision history for this message
HenryC (henryc) wrote :

I created a 3TB array, and it does indeed use a sector size of 1024 bytes. I also tried a 4TB and a 5TB array to verify your theory, and it seems to be correct. The 4TB array is still using a sector size of 1024 bytes, while the 5TB array used 2048.

Revision history for this message
Danny Wood (danwood76) wrote :

That is interesting.
I have been doing various searches online and can't find any other references to Windows doing this.

Are you using 64-bit Windows?

I am just setting up a virtual machine with a rather large virtual drive to see if I can replicate this.

Revision history for this message
HenryC (henryc) wrote :

64-bit Windows 7, yes.

Revision history for this message
Danny Wood (danwood76) wrote :

Ok,

After some testing I think I can confirm that the sector size is coming from the pdc driver and not from Windows.
All the drives I created of various sizes with Windows and gparted show up in both operating systems and always have a sector size of 512.

So we need to change the sector size advertised by dmraid to accommodate this; what is odd is that the metadata sector count is still in 512-byte sectors, just to confuse things.

Revision history for this message
Danny Wood (danwood76) wrote :

I can't see where dmraid advertises its sector size!
Phillip, do you have any idea?

I did find a thread where someone described the same symptoms of large arrays on the Promise RAID controller and the sector counts:
http://ubuntuforums.org/showthread.php?t=1768724
(Phillip, you commented on this thread; in the end they created 2 x 2TB arrays instead of 1 x 4TB)

Revision history for this message
Phillip Susi (psusi) wrote :

You contradicted yourself there, Danny. If they always have a sector size of 512 bytes then we wouldn't have anything to fix. You must have meant that the larger arrays have a larger sector size.

And yeah, I can't see where you set the sector size, so I posted a question to the ataraid mailing list yesterday about it.

Revision history for this message
Danny Wood (danwood76) wrote :

Sorry, Phillip, if I wasn't clear. What I meant to say was that with virtual drives in both VirtualBox and QEMU, Windows 7 created a GPT with 512 bytes per sector no matter the drive size.

So I concluded that it must be the Promise RAID driver itself that creates the larger sector size which Windows uses, as opposed to Windows doing this itself. So whatever changes are made to dmraid would have to be specific to the pdc driver.

However, I do not have a Promise RAID chipset to test larger arrays on real hardware, but the evidence from Henry and the other thread I found indicates that this is the Promise RAID driver's behaviour.

Revision history for this message
Phillip Susi (psusi) wrote :

Oh yes, of course... I thought it was a given that this is pdc-specific behavior.

Revision history for this message
Greg Turner (gmt) wrote :

This bug is ancient, and perhaps nobody cares anymore, but I've figured out a bit more about where we are left with respect to this.

dmraid userland always assumes that the sector size is 512. It is a hard-coded constant value.

Meanwhile, in kernel land, dm devices always map their sector sizes, both logical and physical, to the logical sector size of their underlying devices.

Perhaps in order to deal with this discrepancy, there is code in dmraid userland to ignore any drive whose sector size is not 512. That code doesn't get triggered here, as in this case the problem is that Promise wants to virtualize the sector size, as they do in their SCSI miniport driver for Windows.

Check out this:

https://www2.ati.com/relnotes/AMD_RAIDXpert_User_v2.1.pdf, (p. 107)

If that's right, we might be able to work around this whole mess, having our dual-boot cake and eating it, too, by creating multiple volumes of size less than 2TB, keeping MBR on them (as linux does not grok GPT-partitioned dynamic disks) and using LDM to piece them together.

For my part, looking at the state the dmraid code and Promise metadata are in, I'm disinclined to rely on it at all; I'm just going to give up on fully functional dual-boot, use md-raid, and an emulated NAS if I need access to my other-system data from Windows.

That stated, I guess, to solve the problem fundamentally, in linux, we'd either need to extend dmraid to support emulated, metadata-based sector sizes, both in the kernel and the userland code-bases, or to implement some hack to change the logical geometry of the physical devices before setting up these arrays (but see https://bugzilla.redhat.com/show_bug.cgi?id=624335 which suggests this might not work, anyhow).

It's hard to see anyone putting that kind of effort into the increasingly marginalized dm-raid framework so I wouldn't hold my breath...

Revision history for this message
Phillip Susi (psusi) wrote :

Linux understands GPT just fine, but LDM *is* "dynamic disks", so if you tried to use that to glue them back together, then Linux would not understand it.

Revision history for this message
Vertago1 (vertago1) wrote :

I believe I am affected by this bug, but I wanted to check to see if I am having the same issue.

I have an AMD 990X chipset which uses the SB950; according to http://www.redhat.com/archives/ataraid-list/2012-March/msg00001.html it is probably a Promise controller.

I have two 2TB disks in RAID0 which windows was able to see and partition with GPT.
sudo dmraid -r:
/dev/sdb: pdc, "pdc_ejdejgej", stripe, ok, 1758766336 sectors, data@ 0
/dev/sda: pdc, "pdc_ejdejgej", stripe, ok, 1758766336 sectors, data@ 0

Ubuntu doesn't see the correct volume size.
sudo gdisk /dev/mapper/pdc_ejdejgej:
GPT fdisk (gdisk) version 0.8.8

Warning! Disk size is smaller than the main header indicates! Loading
secondary header from the last sector of the disk! You should use 'v' to
verify disk integrity, and perhaps options on the experts' menu to repair
the disk.
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.

Warning! One or more CRCs don't match. You should repair the disk!

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: damaged

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
***************************************************************************

Revision history for this message
Phillip Susi (psusi) wrote :

If you have a pdc volume that is over 2TiB, then yes.

Revision history for this message
Vertago1 (vertago1) wrote :

I have set up a build environment for dmraid and will start looking through it to get an idea of whether or not I could contribute a patch. Any advice on where to start, or on what documentation would be useful, would be appreciated.

Revision history for this message
Phillip Susi (psusi) wrote :

I'm not sure why you can't build it, but the part of the source of most interest is pdc.c. The problem is that Promise has never provided specifications for the format, so it was reverse engineered. The other problem is that it looks like the Windows driver pretends the disk has a larger sector size when you go over 2 TiB, and the kernel device-mapper driver does not have a way to change the sector size, so the kernel would need to be patched.

Your best bet is to simply avoid using volumes over 2 TiB.

Revision history for this message
Vertago1 (vertago1) wrote :

Well, I figure it might be useful to start collecting samples of metadata from different arrays using the pdc part of dmraid. I have two machines with different chipsets: one has a 1.7TB striped volume, the other a 3.7TB striped volume.
I created these dumps by running:

sudo dmraid -rD /dev/sda
cd dmraid.pdc
sudo cat sda.dat | hexdump > /tmp/result.hex

Revision history for this message
Vertago1 (vertago1) wrote :
Revision history for this message
Vertago1 (vertago1) wrote :

I was able to build the dmraid packages with Danny's patch: https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/599255/+attachment/3428773/+files/26_pdc-large-array-support.patch

After installing them I am able to see my NTFS volumes. I mounted the largest read-only and was able to read the files OK. The largest partition is under 2TB though.

GParted gives an error saying "invalid argument during seek" on /dev/sda. If I tell it to cancel, it seems to work OK after that.

Is there a problem with this patch that prevents us from submitting it to upstream?

I am working on getting a grub2 entry to work for chainloading windows.

Revision history for this message
Danny Wood (danwood76) wrote :

Hi Vertago1,

Yes, the patch appeared to work; we merged it into the Ubuntu dev packages and it worked for some people.
The sector size was still an issue in some setups, as Windows appeared to use both 512 and 1024 byte sector sizes.

However, once we hit the release we had quite a few people reporting non-functioning RAID setups, as the additional bytes I chose were obviously used for something else.

Upstream dmraid doesn't accept patches. It seems that most people who start off booting using dmraid eventually migrate to a pure Linux mdadm setup. Add to that mdadm being more feature-complete and supporting Intel Matrix RAID metadata, and dmraid is not really required any more except for a few odd chipsets.

Revision history for this message
David Burrows (snadge) wrote :

It's been 2 years, 8 months and 20 days since Danny Wood last posted in this thread. Just quickly: I really appreciate your efforts attempting to fix this problem without even having the hardware. That's dedication.

I've just set up a 2x4TB RAID1 mirror in Windows, which of course leads me to this thread. Good news: with a patch on top of Danny's patch, my RAID mirror is detected and appears to be working. My pre-existing 1TB RAID1 continues to function as it did before.

I will re-upload the patch (with a different patch index number to avoid confusion with the original), which includes my one-line fix that allows the 4TB mirror to be detected, activated and used as expected.

- unsigned pdc_sectors_max = di->sectors - div_up(sizeof(*ret), 512);
+ uint64_t pdc_sectors_max = di->sectors - div_up(sizeof(*ret), 512);

pdc_sectors_max was 32-bit and overflowing, which caused the pdc_read_metadata function to fail to find the metadata at its offset from the end of the disk.
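
A self-contained illustration of that overflow, assuming a 32-bit unsigned int as on the affected builds; the 4TB sector count is typical and the 4-sector metadata size is an assumption for illustration:

    /*
     * Why the old `unsigned' overflows: a 4TB member disk has about
     * 7.8 billion 512-byte sectors, which does not fit in 32 bits, so
     * the end-of-disk metadata search started from a wrapped-around
     * sector number and never reached the real metadata location.
     */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t di_sectors = 7814037168ULL;  /* ~4TB disk, 512-byte sectors */
            uint64_t meta_size  = 4;              /* metadata size in sectors, assumed */

            unsigned wrapped = (unsigned)(di_sectors - meta_size);  /* old: truncated to 32 bits */
            uint64_t correct = di_sectors - meta_size;              /* new: full 64-bit value */

            printf("wrapped pdc_sectors_max = %u\n", wrapped);      /* 3519069868: far too small */
            printf("correct pdc_sectors_max = %llu\n", (unsigned long long)correct);
            return 0;
    }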

I thought I might also use the opportunity to clear up some confusion regarding people having difficulty finding a partition table or failing to mount their existing RAID setups.

AMD RAIDXpert (pdc format) allows you to choose a logical sector size: 512, 1024, 2048 or 4096 bytes. In Windows, this configures the drive's logical sector size to match what you chose at the RAID's creation time. This is presumably contained within the metadata.

Page 106 of the user manual alludes to why you might want to choose a non-default sector size, as it affects the maximum LD migration size. Linked for convenience:
https://www2.ati.com/relnotes/AMD_RAIDXpert_User_v2.1.pdf#G8.1017955

dmraid seems to support only 512-byte logical sectors. If we could read the logical sector size from the metadata, couldn't we then just set the logical sector size at the device-mapper node's creation time? That way the partition table should line up when you use fdisk/gdisk/gparted etc.

In the meantime, just make sure you choose the default 512-byte logical sectors if you want to share RAID arrays between Windows and Linux.

Revision history for this message
Phillip Susi (psusi) wrote :

You should bear in mind that fakeraid puts your data at risk. In the event of a crash or power failure, some data can be written to one disk and not the other. When the system comes back up, a proper RAID system will copy everything from the primary to the secondary disk, or at least the parts that were dirty at the time of the crash (if you have a write-intent bitmap), and will only allow reads from the primary disk until that is complete. Fakeraid does neither of these, so which disk services a read request is a toss-up: the system might read the old data on one disk or the new data on the other, and this can flip-flop back and forth on a sector-by-sector basis, causing all sorts of filesystem corruption.

Revision history for this message
TJ (tj) wrote :

Because this issue was brought up in #ubuntu on IRC, I did some background research to try to confirm Danny's theory about sector size.

So far the best resource I've found in the Promise Knowledge base (kb.promise.com) is:

https://kb.promise.com/thread/how-do-i-create-an-array-larger-than-2tb-for-windows-xp-or-windows-2000-32-bit-operating-systems/

This page contains the following table:

For this logical drive size    Select this sector size

Up to 16 TB                    4096 bytes (4 KB)
Up to 8 TB                     2048 bytes (2 KB)
Up to 4 TB                     1024 bytes (1 KB)
Up to 2 TB                     512 bytes (512 B)

The page intro says:

"This application note deals with a specific application for VTrak M-Class and E-Class in a Windows 2000/WinXP 32-bit OS environment."

From fragments in other Promise KB articles I do think this is the formula the Promise FastTrak Windows drivers follow, so it might be a basis for a permanent and reliable fix.
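
If dmraid (or the dm layer) ever grows support for this, the rule in that table boils down to something like the sketch below. pdc_logical_sector_size is a made-up name rather than an existing dmraid symbol, and whether the thresholds are decimal TB or binary TiB is not confirmed:

    /*
     * Sketch of the logical-sector-size rule from the Promise KB table
     * above: 512 B up to 2 TB, 1 KB up to 4 TB, 2 KB up to 8 TB, 4 KB up
     * to 16 TB.  Input is the array size in 512-byte sectors as stored
     * in the metadata.  Hypothetical helper, not an existing dmraid symbol.
     */
    #include <stdint.h>

    static unsigned pdc_logical_sector_size(uint64_t total_sectors_512)
    {
            const uint64_t TB = 1000ULL * 1000 * 1000 * 1000;  /* decimal TB assumed */
            uint64_t bytes = total_sectors_512 * 512ULL;

            if (bytes <= 2 * TB)
                    return 512;
            if (bytes <= 4 * TB)
                    return 1024;
            if (bytes <= 8 * TB)
                    return 2048;
            return 4096;    /* the KB table tops out at 16 TB */
    }

This also matches what Henry reported earlier in the thread: 1024-byte sectors for the 3TB and 4TB arrays, 2048 for the 5TB and 8TB arrays.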

Revision history for this message
TJ (tj) wrote :

Ghah! After pressing "Post Comment" I also found this firmer confirmation of (part of) the algorithm:

"Solution: From 0-2 TB the sector size is 512k. From 2-4 TB the sector size is 1028k. Then from 4 + it changes the sector size to 2048k thats why the information is displayed in as unallocated. Following this parameters when expanding should make expanding the array work."

https://kb.promise.com/thread/why-windows-does-not-see-parition-after-expanding-array/

Revision history for this message
Martina N. (tyglik78) wrote :

Can I help with solving this?
I have this problem now: I would like to create a 4TB fakeraid RAID10 (dual boot).
