Intel RAID controller doesn't work

Bug #57860 reported by Kibbled_bits
Affects: Ubuntu | Status: New | Importance: Undecided | Assigned to: Unassigned

Bug Description

Intel's ICH5R onboard RAID controller doesn't work with Ubuntu 6.06 out of the box. The motherboard is an ASUS P4C800-E Deluxe. The onboard Promise controller also does not appear to work.

This is apparently because both controllers are software RAID and require software to perform the mirroring/striping. A software package called dmraid (in universe, I believe) apparently supports these software RAID controllers.

For instance, when I install dmraid while booted from the LiveCD and configure it, it can see the striped array, but Gnome Partition Editor recognizes nothing beyond the size and type of the drive, so I'm unable to resize this partition.
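A minimal sketch of those LiveCD steps, assuming the universe repository is enabled; the RAID set name shown under /dev/mapper is hypothetical and varies per system:

```shell
# Sketch only: exposing a BIOS ("fake") RAID set from the LiveCD.
sudo apt-get update
sudo apt-get install dmraid

sudo dmraid -s    # list the RAID sets described by the on-disk metadata
sudo dmraid -ay   # activate all sets via device-mapper

# The array and its partitions then appear under /dev/mapper, e.g.
# isw_xxxxxxxx_Volume0, isw_xxxxxxxx_Volume01, ... (names vary):
ls /dev/mapper/
```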

I looked at the available boot options and did not see any that were applicable. Basically, what I'm trying to do is boot into Ubuntu and use Gnome Partition Editor to resize my NTFS partition in order to dual-boot both OSes off the mirror.

I recommend integrating this into the distribution, or finding a way to work it into the kernel. Many power users are buying these motherboards nowadays for their simple RAID capabilities, and supporting them in Linux is imperative. I'm willing to help test or work on any of this if necessary, as I have a setup that exhibits the problem.

Thanks,
Scott

Revision history for this message
Kibbled_bits (scott-w-white) wrote :

Followup:

Presently there is no out-of-the-box support for BIOS-enabled software RAID, such as the variants that come with NVidia, Intel, or Highpoint chipsets. This can be a turn-off for power users wishing to dual-boot Windows XP and Ubuntu.

Dual booting is easily performed, and the team has done an incredible job across a variety of systems, but I still feel this area is severely lacking.

To even get Ubuntu to see the drive you have to install dmraid, which is in the universe repository (once you enable it).

Subsequently, after installing ntfs-tools, I was able to successfully resize the partition using a combination of ntfsresize and fdisk. The Ubuntu installer can now see the free space, but the installation fails: although it successfully creates the partitions, it fails 15% of the way through "Installing System" with "Failed to create a file system": "Detecting Filesystems".
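The ntfsresize + fdisk combination mentioned above can be sketched roughly as follows; the device path and the 20G target size are hypothetical, and the filesystem shrink must be followed by shrinking the partition entry to match:

```shell
# Sketch, assuming the NTFS partition is the hypothetical device
# /dev/mapper/isw_raid_Volume01. Always dry-run ntfsresize first.
sudo ntfsresize --no-action --size 20G /dev/mapper/isw_raid_Volume01
sudo ntfsresize --size 20G /dev/mapper/isw_raid_Volume01

# In fdisk: delete the partition, recreate it starting at the same
# sector with the new smaller end, keep type 7 (NTFS), then write.
sudo fdisk /dev/mapper/isw_raid_Volume0

# Re-run dmraid so the new partition layout is mapped again.
sudo dmraid -ay
```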

Revision history for this message
Tormod Volden (tormodvolden) wrote :
Revision history for this message
Kibbled_bits (scott-w-white) wrote : Re: [Bug 57860] Re: Intel RAID controller doesn't work

Do you think I would have any better luck with Edgy? Do we know if DMRaid
is included or if there is better compatibility with Edgy?

I'm willing to help test/confirm any hacks or packages for this as I think
it's important to get working with Ubuntu.

Thanks,
Scott

On 9/4/06, Tormod Volden <email address hidden> wrote:
>
> See also bug #22107. And https://help.ubuntu.com/community/FakeRaidHowto
>
> --
> Intel RAID controller doesn't work
> https://launchpad.net/bugs/57860
>

Revision history for this message
Phillip Susi (psusi) wrote :

With edgy you should be able to boot from the livecd and install the dmraid package to gain access to the raid volume, then install ubuntu to it. Then chroot into the raid volume and install the dmraid package in there before rebooting.
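Those steps might look like the following sketch; the /dev/mapper volume names and the /target mount point are assumptions, not something this bug confirms:

```shell
# Sketch of the Edgy LiveCD procedure above (hypothetical device names).
sudo apt-get install dmraid
sudo dmraid -ay                                    # expose the RAID volume
# ... run the installer against the /dev/mapper partitions, then:
sudo mount /dev/mapper/isw_raid_Volume05 /target   # the new root partition
sudo mount --bind /dev  /target/dev
sudo mount --bind /proc /target/proc
sudo chroot /target apt-get install dmraid         # so the initramfs can find root
```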

Note that in edgy the lvm boot script will hang for 3 minutes during boot if you are booting from the dmraid volume. To work around this, delete /usr/share/initramfs-tools/scripts/local-top/lvm and run update-initramfs.
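That workaround amounts to the two commands below; note (as pointed out later in this thread) that the script will be restored by the next package upgrade, so this is a stopgap, not a fix:

```shell
# Workaround sketch for the 3-minute lvm wait when root is on dmraid.
sudo rm /usr/share/initramfs-tools/scripts/local-top/lvm
sudo update-initramfs -u   # regenerate the initramfs without the lvm script
```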

Revision history for this message
Matt Zimmerman (mdz) wrote :

On Mon, Oct 30, 2006 at 05:12:12PM -0000, Phillip Susi wrote:
> *** This bug is a duplicate of bug 22107 ***
>
> With edgy you should be able to boot from the livecd and install the
> dmraid package to gain access to the raid volume, then install ubuntu to
> it. Then chroot into the raid volume and install the dmraid package in
> there before rebooting.
>
> Note that in edgy the lvm boot script will hang up for 3 minutes during
> boot if you are booting from the dmraid.

If this is true, then it is probably related to your hardware, and ought to
be reported as a bug.

> To fix this, delete /usr/share
> /initramfs-tools/scripts/local-top/lvm and run update-initramfs.

This will not be satisfactory, as the script will be restored on upgrade and
the initramfs regenerated. Please help us fix the root cause of the bug
instead.

--
 - mdz

Revision history for this message
Gaurav Mishra (gauravtechie) wrote :

On 10/31/06, Matt Zimmerman <email address hidden> wrote:

>
> This will not be satisfactory, as the script will be restored on upgrade and
> the initramfs regenerated. Please help us fix the root cause of the bug
> instead.
>
This problem exists in all the new VIA VT8237 chipsets sold in India. There is no real RAID controller; it is just software emulation. A single ATA hard disk does not work either, and there is no way to switch off the RAID option.


Revision history for this message
Phillip Susi (psusi) wrote :

Matt Zimmerman wrote:
>
> If this is true, then it is probably related to your hardware, and ought to
> be reported as a bug.
>

Negative, it is a defect in the lvm script where it spins for up to 3
minutes waiting for the vg to appear, which never does since we are
using dmraid, not lvm. The lvm script does this because the boot path
starts with /dev/mapper, so it assumes it is referring to an LVM volume.
I have discussed it on IRC with Fabio ( who made the change to the lvm
script since dapper ) and we could not see any other way to resolve this
at this time.

>
> This will not be satisfactory, as the script will be restored on upgrade and
> the initramfs regenerated. Please help us fix the root cause of the bug
> instead.
>

Since edgy has already been released it can not be fixed until the next
release now, and during this development cycle all of the
lvm/dmraid/mdraid/evms boot scripts will be removed and replaced with
udev callouts. This should fix the problem.

Revision history for this message
Kibbled_bits (scott-w-white) wrote :

Put me down as a tester for this; I already have the rig to test it on. Also, I think the LiveCD should support dmraid more seamlessly, like Fedora does.

Thanks,
Scott

Revision history for this message
Matt Zimmerman (mdz) wrote :

On Tue, Oct 31, 2006 at 04:08:38PM -0000, Phillip Susi wrote:
> *** This bug is a duplicate of bug 22107 ***
>
> Matt Zimmerman wrote:
> >
> > If this is true, then it is probably related to your hardware, and ought to
> > be reported as a bug.
>
> Negative, it is a defect in the lvm script where it spins for up to 3
> minutes waiting for the vg to appear, which never does since we are
> using dmraid, not lvm. The lvm script does this because the boot path
> starts with /dev/mapper, so it assumes it is referring to an LVM volume.
> I have discussed it on IRC with Fabio ( who made the change to the lvm
> script since dapper ) and we could not see any other way to resolve this
> at this time.

The lvm script shouldn't do any waiting; that step should be in common code
for all root device types. That way, all activation takes place, then a
single wait for the device to appear.

And as I said above, this should be reported as a bug. Positive.

> > This will not be satisfactory, as the script will be restored on upgrade and
> > the initramfs regenerated. Please help us fix the root cause of the bug
> > instead.
>
> Since edgy has already been released it can not be fixed until the next
> release now, and during this development cycle all of the
> lvm/dmraid/mdraid/evms boot scripts will be removed and replaced with
> udev callouts. This should fix the problem.

We have a procedure for post-release updates.

--
 - mdz

Revision history for this message
Phillip Susi (psusi) wrote :

Matt Zimmerman wrote:
> The lvm script shouldn't do any waiting; that step should be in common code
> for all root device types. That way, all activation takes place, then a
> single wait for the device to appear.
>

It did not wait in dapper, but it does in edgy. Fabio said this was because sometimes the underlying hardware has not been detected by the time the script runs, so it keeps trying to activate the volume until it succeeds (or 3 minutes elapse), hoping that the devices will show up and it will be able to activate the volume.

> And as I said above, this should be reported as a bug. Positive.
>

What exactly is the malfunction, and what should it be doing instead?
And would it be fixed in edgy? Based on my talk with Fabio on IRC, the
wait is not a malfunction but is required in order to activate the LVM
volume on slowly detected hardware. That the lvm script assumes you are
dealing with an lvm volume if the boot path starts with /dev/mapper
might be a defect, but Fabio and I could not see a workaround.

> We have a procedure for post-release updates.

It did not seem to me that this would classify as a post-release update
candidate since it is not security related, and only causes an annoying
delay in conjunction with a rare use case of a package from Universe.

If you feel this can and should be fixed in edgy, please go ahead and
file the bug against lvm, otherwise it should be resolved during the
udev rework in this development cycle.

Revision history for this message
Kibbled_bits (scott-w-white) wrote :

FYI: I tried

Installing dmraid on the LiveCD: the installation aborts with an error that it could not partition the hard drive.

I've confirmed that DMRaid loaded properly and I started it with

sudo dmraid -s

Thereafter I could see the partitions with fdisk, but Gparted and the installer (manual edit) did not see the mapped devices.

The installer did see the mirrored HDD, however, and upon telling it to install in the available free space, it subsequently failed.

According to fdisk it did create the ext3 & swap partitions.

Any IDEAS?!

Revision history for this message
Phillip Susi (psusi) wrote :

The installer and (g/qt)parted will not be able to repartition the
drive, or at least, will not be able to refresh the partition table
after modifying it. Use fdisk to partition the disk, then re-run dmraid
to detect the changes. Then tell the installer to use the existing
partitions.
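In concrete terms, the suggested workflow is roughly the following; the RAID set name is hypothetical:

```shell
# Partition with fdisk directly on the array device, not with (g/qt)parted:
sudo fdisk /dev/mapper/isw_raid_Volume0

# dmraid must be re-run so device-mapper picks up the new partition table:
sudo dmraid -an   # deactivate the old mappings
sudo dmraid -ay   # re-activate; the partition nodes now reflect the new layout

# Then point the installer's manual partitioner at the existing
# /dev/mapper partitions without letting it re-partition the disk.
```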

Kibbled_bits wrote:
> *** This bug is a duplicate of bug 22107 ***
>
> FYI: I tried
>
> Installing DMRaid on LiveCD. Installation aborts giving error that it
> could not partition hard drive and then aborts.
>
> I've confirmed that DMRaid loaded properly and I started it with
>
> sudo dmraid -s
>
> Thereafter I saw the partitions available with FDisk, Gparted &
> Installer (manual edit) both did not see the devices mapped.
>
> Installer did see the mirrored HDD however and upon telling it to
> install in available free space (which there was available) it
> subsequently failed.
>
> According to fdisk it did create the ext3 & swap partitions.
>
> Any IDEAS?!
>

Revision history for this message
Kibbled_bits (scott-w-white) wrote :

Okay, this fails too. The installer forces you to format the HDD even once it's partitioned properly. It fails at 5% ("Creating ext3 file system for / in partition #1 of LVM VG...").

Then it asks me "Do you want to resume partitioning?" If I click Continue or Go Back, nothing happens.
This happens even if I try to manually format it using mkfs.ext3.

??
Thanks,
Scott

On 10/31/06, Phillip Susi <email address hidden> wrote:
>
> *** This bug is a duplicate of bug 22107 ***
>
> The installer and (g/qt)parted will not be able to repartition the
> drive, or at least, will not be able to refresh the partition table
> after modifying it. Use fdisk to partition the disk, then re-run dmraid
> to detect the changes. Then tell the installer to use the existing
> partitions.
>
> Kibbled_bits wrote:
> > *** This bug is a duplicate of bug 22107 ***
> >
> > FYI: I tried
> >
> > Installing DMRaid on LiveCD. Installation aborts giving error that it
> > could not partition hard drive and then aborts.
> >
> > I've confirmed that DMRaid loaded properly and I started it with
> >
> > sudo dmraid -s
> >
> > Thereafter I saw the partitions available with FDisk, Gparted &
> > Installer (manual edit) both did not see the devices mapped.
> >
> > Installer did see the mirrored HDD however and upon telling it to
> > install in available free space (which there was available) it
> > subsequently failed.
> >
> > According to fdisk it did create the ext3 & swap partitions.
> >
> > Any IDEAS?!
> >
>

Revision history for this message
Tormod Volden (tormodvolden) wrote :

Heh, the unusual situation of a developer who wants to get something fixed and a user who wants to defer it ;) Anyway, I for one would love to see a backport or edgy-update.

Bug #69217 has a little fix for lvm which "checks that the device is actually a lvm device". Maybe this would do the trick here also?

Revision history for this message
Phillip Susi (psusi) wrote :

Kibbled_bits wrote:
>
> Okay this fails too. The installer forces your to format the HDD even onces
> it's partitioned properly. At 5% (Creating ext3 file system for / in
> partition #1 of LVW VG..."
>
> Then it asks me "Do you want to resume partitioning? If I click Continue or
> Go Back nothing happens.
> This happens even if I try to manually format it using mkfs.ext3

Could you be more specific? How do you have it partitioned? Please
show the output of fdisk -l on the raid device, as well as the mkfs.ext3
command line you used. That should definitely work properly.

I believe the installer should work fine if only used to format the
partition, not change the partition table, but if you can not manually
format the partition either, that would indicate a more serious
underlying problem.

Also, you did not forget to run dmraid again after changing the partition table, did you? Changes to the partition table do not take effect until you run dmraid again, so you cannot access the new partitions until then.

Revision history for this message
Phillip Susi (psusi) wrote :

Tormod Volden wrote:
> Heh, the unusual situation of a developer who wants to get something
> fixed and a user who wants to defer it ;) Anyway, I for one would love
> to see a backport or edgy-update.
>

I'm acting as a developer on this btw, not just a user ;)

> Bug #69217 has a little fix for lvm which "checks that the device is
> actually a lvm device". Maybe this would do the trick here also?

No, that isn't really a fix ( see my comments on that bug ).

I suppose it would be possible to patch dmraid to NOT create the dev
nodes in /dev/mapper but somewhere else instead... then the lvm script
would not think the boot path refers to an lvm volume.

I'll see if I can hack that out tonight and attach the new package to the parent bug. I suppose it could at least be cleared for a -backports upload.
