
Installer hides the master device for dmraid 10 (1+0) configurations

Reported by Pekka Hämäläinen on 2010-04-11
This bug report is a duplicate of:  Bug #311179: libparted hides fakeraid raid10 disks.
This bug affects 2 people
Affects: parted (Ubuntu)
Importance: High
Assigned to: Phillip Susi

Bug Description

When using dmraid 0+1, several devices appear in /dev/mapper to describe the stripes that form the legs of the mirror, and the mirror itself. The installer only gives the choice of using the two stripes that form the legs, and not the whole mirror. For example, in /dev/mapper:

nvidia_bfcdciea-0 and nvidia_bfcdciea-1 are the stripes, which are then mirrored to create nvidia_bfcdciea. The partitions are then detected on nvidia_bfcdciea as nvidia_bfcdciea[1234]. See the screen shot in comment #24 for what Ubiquity shows during the partitioning stage.
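To make the layout concrete, the naming scheme described above can be sketched as a small classifier. `nvidia_role` is a hypothetical helper written for this report, not part of dmraid or the installer, and the suffix rules are inferred from the device names quoted here:

```shell
# nvidia_role NAME
# Hypothetical classifier for the nvidia fakeraid names quoted above.
# Suffix rules inferred from this report, not from dmraid documentation.
nvidia_role() {
    case "$1" in
        nvidia_*-[0-9])     echo "stripe (mirror leg)" ;;   # e.g. nvidia_bfcdciea-0
        nvidia_*[a-z][0-9]) echo "partition" ;;             # e.g. nvidia_bfcdciea3
        nvidia_*)           echo "mirror (whole array)" ;;  # e.g. nvidia_bfcdciea
    esac
}
```

Under this reading, only the "mirror (whole array)" device is a valid install target; the installer was offering the legs instead.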

Workarounds
===========

Build a new libparted during the install:
 - boot the live cd
 - open a terminal and do:
$ sudo apt-get build-dep parted
$ cd /tmp
$ apt-get source parted
edit debian/patches/dmraid.patch

look for the line
+ if (_is_dmraid_major(buf)) {
(it's about 20 lines into the patch)
and change it to
+ if (1) {

Then run
$ dpkg-buildpackage -rfakeroot -us -uc
and finally
$ sudo dpkg -i ../libparted0debian1_*

To verify the workaround is in place, run
$ sudo parted_devices

This should now show (too many) devices but include the root one (e.g. /dev/mapper/isw-someuuid_NAME0).

Run ubiquity and the install will now let you choose the right device.
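The manual patch edit above can also be scripted. `force_dmraid_patch` is a hypothetical helper (not part of the parted source) that assumes the guard line appears in the patch exactly as quoted:

```shell
# force_dmraid_patch FILE
# Hypothetical helper: perform the manual edit described in the workaround,
# replacing the _is_dmraid_major() guard with an always-true condition.
# Assumes the guard line appears exactly as quoted above.
force_dmraid_patch() {
    sed -i 's/if (_is_dmraid_major(buf))/if (1)/' "$1"
}
```

Run it against a copy of debian/patches/dmraid.patch first to confirm the substitution took effect before rebuilding.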

Phillip Susi (psusi) wrote :

This is probably a duplicate of bug #534743. Can you try booting with the nosplash break nodmraid options, then when you hit the busybox prompt, run dmraid -ay then exit. If the system boots up normally at that point then it's that bug and I will mark this as a duplicate.

Changed in dmraid (Ubuntu):
status: New → Incomplete

Hello

I can confirm that with the 2.6.32-20-generic kernel, "dmraid -ay" made
the raid volumes active. I did not try to continue booting, though.
However, before I managed to get the initramfs to boot I needed to edit
grub's "menu.lst" by hand. The maintainer's version was faulty, i.e.
root (hd0,0) should have been (hd0,2), and the initrd row was missing
completely.

I also tried the old kernel version (from 8.04 LTS) (grub's backup
option), but there dmraid -ay did not work.

Finally, I could not get cleanly out of the initramfs and continue
booting. A normal "exit" did not work; initramfs complained that the raid
partition was missing ...

Br Pekka


Lets activate this one.

Today I tried the upgrade from 8.04 to 10.04 once again.
The upgrade was "dirty", i.e. several warnings and errors were issued, but none of them were related to dmraid. Finally the upgrade went through.

After reboot the system started but failed to boot properly and dropped to the initramfs. As with earlier trials, the dmraids were not active but were successfully activated with the command "dmraid -ay". Clearly either dmraid or update-manager fails to set the raids in the active state.

Please advise on how to continue testing here.

Br Pekka

Phillip Susi (psusi) wrote :

So were you able to run dmraid -ay then exit in the initramfs busybox and continue to boot normally after that?

Yep, correct.
So far the installation is stable. Raid seems to be up and structures
behind /dev/disk are in place (problem reported in #378429).

Br Pekka

> Bug description:
> Binary package hint: dmraid
>
> I tried to upgrade from 8.04 LTS (up to date as of 10-04-11) to 10.04 (Beta).
> The upgrade fails as the system could not read the root partition.
> My 8.04 configuration uses dmraid in a 0+1 configuration with an
> nVidia chip.
> Apparently the upgrade procedure didn't include support for dmraid:
> after the upgrade + reboot the system failed and fell back to the
> initramfs prompt. I didn't try further, but I guess I would have managed
> to install dmraid manually later.
>
> ProblemType: Bug
> Architecture: amd64
> Date: Sun Apr 11 17:55:45 2010
> DistroRelease: Ubuntu 8.04
> NonfreeKernelModules: fglrx ath_hal
> Package: dmraid 1.0.0.rc14-0ubuntu3.1
> PackageArchitecture: amd64
> ProcEnviron:
> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
> LANG=en_US.UTF-8
> SHELL=/bin/bash
> SourcePackage: dmraid
> Uname: Linux 2.6.24-27-generic x86_64
>

On 5/17/2010 3:25 PM, Pekka Hämäläinen wrote:
> Yep, correct.
> So far the installation is stable. Raid seems to be up and structures
> behind /dev/disk are in place (problem reported in #378429).

I don't think you understood my question. Every time you boot, do you
get dropped to the busybox shell and have to run dmraid -ay then exit to
continue? If so, are you sure you have to run dmraid -ay, or have you
tried to just wait a moment, then exit without running dmraid and see if
the system boots at that point?

I waited at the initramfs prompt for 5 to 10 minutes without giving any
commands, then I tried to exit.
This fails, basically stating that the root device is not there. Which is
of course true, as it should come up with dmraid.

So, by explicitly giving the command dmraid -ay in the initramfs, the
system is able to continue booting and seems to be ok from the dmraid
viewpoint. If no commands are given, the boot is stuck at the initramfs prompt.

Br Pekka


Hello

I tried the upgrade from 8.04 to 10.04 again (with the latest fixes), without luck. The problems discovered:
- dmraid doesn't activate the raids after the upgrade but drops to the initramfs prompt. I can wait forever ...
- giving dmraid -ay and exiting the initramfs gets the boot continuing

However, the upgrade was not clean either. I encountered both fglrx and libpam-runtime problems, and as a result it is not easy to say which is the root cause or whether these are unrelated events.

I am a bit frustrated, as the upgrade is still not working after 6-7 months of trials and it seems that there is little interest ...

Danny Wood (danwood76) wrote :

Packages like fglrx won't work after an upgrade if you have a card that is not supported by the latest version of ATI's driver.
Running unsupported packages will give you errors.

I have a feeling your initramfs isn't being created properly.

If you are able to boot can you give us the version info of dmraid?
sudo dmraid --version

Also you can try updating the initramfs and updating grub.
sudo update-initramfs -u -k all
sudo update-grub

Ok.

I have loaded the 8.04 backup again and updated it with the latest
corrections, so the "from" state is 8.04 as of today. I will remove fglrx
completely to avoid those problems during the upgrade. The ATI chip is
oldish (2600 series), but support in 8.04 is ok. There is a separate bug
thread on this fglrx problem (bug 642518), so I suppose there is nothing
special in what I have encountered here. dmraid in the "from" state is as
follows:

dmraid version: 1.0.0.rc14 (2006.11.08)
dmraid library version: 1.0.0.rc14 (2006.11.08)
device-mapper version: 4.12.0

I will now try the upgrade again.

Br Pekka


Ok, some data about the upgrade:
- fglrx removed; only libpam-runtime gave a permanent error (bug 642591)
- ran update-initramfs and update-grub as suggested
- dmraid version in the "to" state:
 dmraid version: 1.0.0.rc16 (2009.09.16) shared
 dmraid library version: 1.0.0.rc16 (2009.09.16)
 device-mapper version: 4.15.0
- at boot dmraid fails to start automatically; "dmraid -ay" at the
initramfs prompt helps

- just an idea: what are the right configuration points (files) I should
look at for dmraid and initramfs in 10.04? These have been subject to
quite some changes since I started to use dmraid on this system

Br Pekka


What version of the dmraid package do you have installed? Check with apt-cache policy dmraid.

Unfortunately the system is again in an "unbootable" state, i.e. I don't
get a login prompt at all. However, I recorded the following while I
could still log on:

- dmraid version in the "to" state:
 dmraid version: 1.0.0.rc16 (2009.09.16) shared
 dmraid library version: 1.0.0.rc16 (2009.09.16)
 device-mapper version: 4.15.0
- the above should be the latest from 10.04.1 as of yesterday. Does this help at all?


Are you able to do a fresh install and have it boot correctly?
(To distinguish if this is an upgrade issue or a dmraid bug)

I haven't had time to do that so far. The reason for trying to get this
upgrade working is that I have a server with a configuration very similar
to the desktop PC I am working on now. I know for sure that doing a fresh
install on the server would kill me. On this desktop a fresh install is
of course an option and shouldn't be too heavy.


Yes, it would be good to know if it is only an upgrade issue or not. Also you should be able to check the package version by booting from a livecd and chrooting into the hard disk. My question is whether you got the version in 10.04 or 10.04.1.

Ubuntu is 10.04.1, that's the latest available.

I am working on burning the livecd (or usb) to see whether it's the
upgrade or dmraid. Probably doable during the weekend.


here is the data:
dmraid:
  Installed: 1.0.0.rc16-3ubuntu2
  Candidate: 1.0.0.rc16-3ubuntu2
  Version table:
 *** 1.0.0.rc16-3ubuntu2 0
        500 http://www.nic.funet.fi/pub/mirrors/archive.ubuntu.com/ lucid/main Packages
        100 /var/lib/dpkg/status


Ok

I created installation media for 10.04.1. When I boot with it and try to "install", I can't see any disk partitions. If I open a terminal, give the direct command "dmraid -ay", and then try to "install", all disk partitions are visible for selection.

So, I conclude that dmraid is not able to correctly detect the NVIDIA nForce 430 and activate the raids defined there.

Let me know if any traces are needed.

Phillip Susi (psusi) wrote :

I'm not sure why you came to that conclusion. If dmraid -ay said it found and activated an array, and then you were able to see it in the installer, then it DID activate the array. The question is, why didn't it do so automatically? You aren't booting with the nodmraid option, are you?

Hello,

Yep, I was probably too hasty to draw conclusions. This is the long
version of what happened.
I took the normal amd64 desktop distro from the Ubuntu site and made a
usb startup disk out of it. Then I booted without any options. At the
point when "install" was offered I chose it and found my way to where
partman was supposed to show the available partitions: well, nothing
there. Then I booted a second time and opted to boot the system without
installing. All ok, and when the system was up I opened a terminal and
gave "dmraid -ay", which activated the raids. As a third step I tried
"install" from the desktop. I found my way to the disk partitions, which
were visible now; however, the installer didn't allow me to install the
system to the wanted existing partition, nor did it offer swap correctly.
So I could not install the system that way either. BTW, deleting the
raids is not a good option, as my system is dual-boot and I am running
win7 there.

Currently the linux partitions are in an unusable state. I can boot the
usb startup and do the mount as described. Then I have the option to
untar the 8.04 backup again and get back to the starting point.

Please let me know what info you need; I am now pretty well prepared to
dig into the system.

Br Pekka


Phillip Susi (psusi) wrote :

On 9/27/2010 11:54 AM, Pekka Hämäläinen wrote:
> desktop - I found my way to disc partitions which were visible now -
> however installer didn't allow me to install the system to wanted
> existing partition nor it offered correctly swap. So - I could not

I don't understand this part. What do you mean it didn't allow you to
install the system? If it shows the partitions, click on the one you
want to install to and set its mount point, fs, and tick the format box.

Ok,

Now I have tried the installation again, and I can confirm that I can't proceed with the installation because the disks are detected wrongly. The attachment Fail100929.zip contains two screenshots. 1.png shows the disk configuration, which in this case is raid 0+1. I can set the target "/" partition as ...ciea-03, and swap should go to ...ciea-05. However, as can be seen in the second screenshot, 2.png, the swap is about to be installed on the raid roots, e.g. ...ciea-0 and ...ciea-1 respectively. I will not go forward and wipe out the whole raid system.

Phillip Susi (psusi) wrote :

What is your raid configuration? I'm trying to figure out why it shows a -0 and -1 disk that both seem to have the same partition layout.

It's raid "0+1". I have 4 identical disks. There is a stripe across the
disks (raid 0) which is then mirrored (raid 1). That's the normal
configuration for that "old" nvidia chip and is called raid 0+1 to my
knowledge. As far as I understand, the -0 and -1 are the heads of those stripes.


Phillip Susi (psusi) wrote :

On 9/30/2010 3:55 PM, Pekka Hämäläinen wrote:
> It's raid "0 + 1". I've 4 identical discs. There is stripe across the
> disks (raid 0) which is then mirrored (raid1). That's normal
> configuration for that "old" nvidia chip and called as raid 0+1 to my
> knowledge. As far I understand the -0 and -1 are the heads of those stripes.

Ahh, then I think we have hit on the real nature of the problem. It
seems that the raid 0+1 is not working properly. Can you post the
output of dmraid -n and ls /dev/mapper?

Here are the views, see files_101001.zip

Phillip Susi (psusi) wrote :

I can't tell from the screen shot if you could scroll down in the disk selection in the installer. See if you can scroll down and see just the nvidia_bfcdciea or nvidia_bfcdciea[1234] disks. Those are what you want to use, not the ones with -0 or -1 in them.

I think this boils down to Ubiquity being confused by dmraid disks.

I think these are complete as such, i.e. I took "dmraid -n >
pla_pla.txt" and "ls -la /dev/mapper > pla_pla2.txt", which are the
files in the attachment.

I am not familiar with the internal structures of dmraid and ubiquity.
However, please let me know any detailed areas and commands; I am
prepared to spend time and dig out the needed data for you. E.g., is
there anything in the /proc, /sys, or uuid structures that would help?


BTW, did you mean that I should adjust the selection in the install screens? Well, that option is not available in ubiquity either. I could choose to set root to nvidia_bfcdciea3 without problems, but the swap partitions didn't follow the partition type setting. Swap partitions are shown correctly on nvidia_bfcdciea5, but when the installer tries to continue, the next screen shows that it is about to format nvidia_bfcdciea-0 and ...-1, i.e. the stripe heads, which is horribly wrong.

Phillip Susi (psusi) wrote :

Since the proper device (nvidia_bfcdciea) shows up in /dev/mapper it seems that dmraid is working fine. If it isn't shown as a choice in the installer, then it looks like a bug in Ubiquity, so I'm reassigning the package.

affects: dmraid (Ubuntu) → ubiquity (Ubuntu)
Changed in ubiquity (Ubuntu):
importance: Undecided → Medium
status: Incomplete → Triaged
summary: - Upgrade fails from 8.04 LTS to 10.04 LTS (beta)
+ Installer does not give correct choice of device to install to when
+ using dmraid 0+1
Phillip Susi (psusi) on 2010-10-02
description: updated

I agree, let's take these problems down one by one. However, I remind you that there was the original problem that the raids were not set active in the first place. So before I could approach the installer I had to manually activate them with "dmraid -ay". And that was also the problem with the 8.04 -> 10.04 upgrade, i.e. the boot sequence dropped to the initramfs, where a manual command recovered the disks.

Continuing a bit from #33: if we assume that dmraid is working correctly, then the question is why the initramfs is created wrongly during the upgrade, i.e. why dmraid does not activate automatically. Or, in the case of the live cd, why doesn't dmraid get activated, so that a manual command is needed to activate the raids?

Still, I think we have the strongest leads on ubiquity right now, so I agree with Phillip: let's try to nail this down from this corner first.

BTW, it seems that the UUID (bug 378429) structure is ok in 10.04.1, so most probably we can close that other bug in the course of nailing down this current one.

Ok, it seems that this is a dead end. Too rare a configuration to generate interest in being fixed. As a result, though, Ubuntu 10.04.1 is completely un-installable on nvidia 430 raid0+1 systems, and it is also a clear downgrade, as this configuration works in 8.04. I feel frustrated and need to start investigating other OSes and HW ...

Phillip Susi (psusi) wrote :

Pekka, can you post the output of sudo blkid?

Hello Phillip,

let's discuss whether it makes sense:

I finally made the decision to roll over my dual-boot (win7/ubuntu)
system, and I installed raid 5 instead of raid 0+1. This time I could
install ubuntu 10.04 without problems.

So, I am certain that somewhere between 8.04 and 10.04 the raid 0+1
support deteriorated. At what exact point, that I can't say.

Please remember, this is the nvidia 430 chip, so the raid 0+1 is nvidia
specific.

Is there still something that you need me to look at in my working 10.04
raid 5 system?

Br Pekka


No, what I need is a raid0+1 system to test with. Maybe I'll have to set up a virtual machine.

Phillip Susi (psusi) wrote :

The bug is actually in parted, specifically in dmraid.patch. This Ubuntu patch tries to identify dmraid partitions and remove them from the list of disks that parted generates. It incorrectly identifies the raid10 device as a partition and so removes it from the list.

I have started a discussion on the dm-devel mailing list about a proper way to differentiate between the whole disk device and partition devices.
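A minimal sketch of one way such a heuristic can misfire. This is a hypothetical simplification, not the actual patch logic (which checks device majors): suppose any device-mapper device that is stacked on other dm devices is treated as a partition and hidden. A raid10 mirror assembled from two dm stripe sets then matches the test, even though it is the whole-disk device the installer should offer:

```shell
# is_hidden_as_partition DEV [SYSFS_BLOCK_DIR]
# Hypothetical simplification of the misfiring check: a dm device whose
# sysfs slaves are themselves dm devices is assumed to be a partition and
# hidden. A raid10 mirror built on two dm stripe sets matches this test,
# so the installer never offers it. SYSFS_BLOCK_DIR defaults to /sys/block
# and is a parameter only so the logic can be exercised on a fake tree.
is_hidden_as_partition() {
    base="${2:-/sys/block}"
    for slave in "$base/$1/slaves"/*; do
        [ -e "$slave" ] || continue
        case "${slave##*/}" in
            dm-*) return 0 ;;   # stacked on another dm device: hidden
        esac
    done
    return 1                    # built directly on raw disks: shown
}
```

The stripe legs sit directly on raw disks and pass the test, which would explain why the legs are offered while the mirror is hidden.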

affects: ubiquity (Ubuntu) → parted (Ubuntu)
Phillip Susi (psusi) on 2011-05-02
Changed in parted (Ubuntu):
assignee: nobody → Phillip Susi (psusi)
importance: Medium → High
status: Triaged → In Progress
summary: - Installer does not give correct choice of device to install to when
- using dmraid 0+1
+ Installer hides the master device for dmraid 10 (1+0) configurations
description: updated
Robert Collins (lifeless) wrote :

Where is this mailing list?

For ICH*R controllers: they generate /dev/mapper/isw_$UUID$NAME$index
as the root node for arrays, append -$index for subordinate nodes,
and p$index for partitions.
and p$index for partitions

so
/dev/mapper/isw_iuiewfDEMO0
is a root

/dev/mapper/isw_iuiewfDEMO0p1
is not
/dev/mapper/isw_iuiewfDEMO0-0
is not

/dev/mapper/isw_iuiewfDEMO1
is

etc.
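The rules above can be sketched as a predicate. `is_isw_root` is a hypothetical helper for illustration only; note it would misclassify a root whose $NAME itself contains a "p" followed by a digit:

```shell
# is_isw_root NAME
# Hypothetical predicate encoding the ICH*R naming rules above: succeed
# only for array root nodes. Caveat: a root whose $NAME contains "p"
# followed by a digit would be misclassified by this sketch.
is_isw_root() {
    name="${1##*/}"                  # strip any /dev/mapper/ prefix
    case "$name" in
        isw_*-[0-9]*) return 1 ;;    # subordinate node, e.g. ...DEMO0-0
        isw_*p[0-9]*) return 1 ;;    # partition node,   e.g. ...DEMO0p1
        isw_*)        return 0 ;;    # root node,        e.g. ...DEMO0
        *)            return 1 ;;    # not an isw device at all
    esac
}
```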

Robert Collins (lifeless) wrote :

(Oh, and though this bug is about nvidia implementations, the intel one appears to suffer the same bug when using the blocks/.../slaves approach: the actual root is a slave of two mirror sets.)

Robert Collins (lifeless) wrote :

See also bug 803658 - grub2 appears unready for this configuration as well.

Phillip Susi (psusi) wrote :

<email address hidden> is the mailing list. I did not get much feedback, but it sounds like the way to go is to somehow encode the usage of the device into its UUID.
