root on lvm fails to boot.

Bug #29858 reported by Anders Aagaard
This bug affects 10 people
Affects: initramfs-tools (Ubuntu)
Status: Expired
Importance: High
Assigned to: Unassigned
Nominated for Lucid by Lenin

Bug Description

Tried a clean install of Dapper Flight 3 in VMware, selected "Erase all my drives and use LVM", and it fails to boot.

Error:
ALERT! /dev/mapper/Ubuntu-root does not exist. Dropping to a shell!

Two things here: first of all, it doesn't actually drop me to a shell. And second, I've noticed that some LVM scripts exist in /usr/share/initramfs-tools/scripts/*, but there is no hook for them, and unpacking the initrd I find no vgscan/vgchange or the like.

Revision history for this message
Jeremy Thornhill (jeremy-thornhill) wrote :

I tried dist-upgrading from Breezy with an LVM root and received similar errors, with an unbootable system. After stumbling around for a bit I decided to just do a clean install from Dapper media, which worked fine for me (again with an LVM root).

Revision history for this message
Swen Thümmler (swen-thuemmler) wrote :

I had the same problem with a system dist-upgraded from Breezy. There is
no /dev/mapper directory, and vgchange -a y says "No volume groups found".
Fortunately, I could still boot with an older kernel, and
dpkg-reconfigure linux-image-2.6.15-14-686 fixed it.

Revision history for this message
Allcolor (quentin-anciaux) wrote :

I had this problem just after an apt-get dist-upgrade from breezy to dapper.

The problem seems to come from an incompatibility between Breezy's 2.6.12 kernel and the new udev (or something related to the creation of device nodes).

Anyway, here is how I resolved it:

- Boot from the Ubuntu install disk and proceed with the installation until the partitioning screen, then press ESC.
- Choose the option to open a shell from the menu.
- Type:
modprobe dm-mod
vgchange -a y
mkdir /t
mount /dev/mapper/Ubuntu-root /t
chroot /t
mount /dev/hda1 /boot
apt-get install linux-image

This will install a 2.6.15 kernel from Dapper. Reboot into this kernel. Voilà, it works.

Revision history for this message
condor33 (condor33) wrote :

I had this problem upgrading from Breezy to Dapper: at boot I get the message "/dev/hda5 does not exist". I think it is caused by the installation of a new kernel image with the new Dapper udev; the generated initramfs lacks the right /dev/*** links. What can I do now? My PC is unbootable...

Matt Zimmerman (mdz)
Changed in initramfs-tools:
assignee: nobody → keybuk
Revision history for this message
Scott James Remnant (Canonical) (canonical-scott) wrote :

These all sound like partial/failed initramfs images

Changed in initramfs-tools:
assignee: keybuk → adconrad
Revision history for this message
Alexandre Otto Strube (surak) wrote :

There's an option now in update-manager to perform an upgrade from Breezy to Dapper. Does this system handle this bug correctly? I don't have LVM, so I can't confirm.

Loic Pefferkorn (loic)
Changed in initramfs-tools:
status: Unconfirmed → Confirmed
Revision history for this message
Matt Zimmerman (mdz) wrote :

Adam, do you believe this bug to be part of the class which should be fixed by this change?

initramfs-tools (0.40ubuntu29) dapper; urgency=low

  * Make "update-initramfs -u" try to find the running kernel *after* it
    attempts to search the symbolic link list and its own sha1 list.
    Using this as a fallback, rather than the default, should solve most
    upgrade issues, where people found their initramfs was half-baked.

 -- Adam Conrad <email address hidden> Wed, 19 Apr 2006 13:51:35 +1000

If not, please gather details from the bug submitters and find out what happened.

Revision history for this message
Adam Conrad (adconrad) wrote :

Well, this class of bug (half-baked initrd) manifests in two ways. The first is that when upgrading a system to a new kernel *and* new udev, depending on the order in which the packages were unpacked, you may end up with a new kernel but an old udev in the initramfs. The above change fixed that, so dapper kernels should always work now after a breezy->dapper upgrade.

The second way this fails, though, is that all the packages calling "update-initramfs -u" during upgrade will upgrade the OLD kernel's initrd if the new kernel hasn't been installed yet, so the old kernel can become unbootable (thanks to udev not being backward-compatible). Other than backing out all the update-initramfs magic, or getting dpkg hooks, I'm not sure how best to solve this case.

We can probably get the upgrade tool to try to intelligently order upgrades to work around this, but that won't solve the classic "apt-get dist-upgrade" case.

Revision history for this message
Matt Zimmerman (mdz) wrote : Re: [Bug 29858] Re: root on lvm fails to boot.

On Thu, May 04, 2006 at 01:07:46AM -0000, Adam Conrad wrote:
> Well, this class (half-baked initrd) of bug manifests in two ways. The
> first way is that when doing an upgrade of a system to a new kernel *and*
> new udev, depending on the order the packages were unpacked, you may have
> end up with a new kernel but an old udev in the initramfs. The above
> change fixed that, so dapper kernels should always work now after a
> breezy->dapper upgrade.
>
> The second way this fails, though, is that all the packages calling
> "update-initramfs -u" during upgrade will upgrade the OLD kernel's initrd
> if the new kernel hasn't been installed yet, so the old kernel can become
> unbootable (thanks to udev not being backward-compatible). Other than
> backing out all the update-initramfs magic, or getting dpkg hooks, I'm not
> sure how best to solve this case.

Could we have udev's initramfs hook bail out somehow, leaving the old initrd
in place, if the kernel version isn't compatible? It's only udev which
should break it, right?

update-initramfs seems like the simplest place to fix this, though. We
should be able to detect the situation where the running kernel (its
upstream version number, anyway) doesn't match the "current" one (where the
symlink points) and continue successfully with a warning. How about that?

--
 - mdz
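The check mdz proposes could be sketched roughly as follows. This is an illustrative helper, not the actual update-initramfs code; the function name and the initrd.img symlink convention are assumptions made for the example. The idea is simply to compare the running kernel's version with the version the "current" initrd symlink points at, and warn instead of rebuilding when they differ:

```shell
# Hypothetical helper (not real update-initramfs code): decide whether
# it is safe to rebuild the initramfs that the "current" symlink points
# at, given the running kernel version.
#
# check_initramfs_target RUNNING_VERSION SYMLINK_TARGET
# Prints "ok" when the versions match, "skip" (plus a warning on
# stderr) when they do not.
check_initramfs_target() {
    running=$1
    # Strip the "initrd.img-" prefix, e.g. "initrd.img-2.6.15-20-386"
    # becomes "2.6.15-20-386".
    current=${2#initrd.img-}
    if [ "$current" = "$running" ]; then
        echo "ok"
    else
        echo "W: running kernel ($running) does not match current initrd ($current)" >&2
        echo "skip"
    fi
}

# Typical invocation (paths illustrative):
#   check_initramfs_target "$(uname -r)" "$(readlink /boot/initrd.img)"
```

On "skip", update-initramfs would leave the old initrd untouched and continue successfully with the warning, as suggested above.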

Revision history for this message
bodinux (lbtemp) wrote : disk order changed, evms_activate fix didn't work on breezy upgrade

I have a pci IDE card in my computer.
Before the upgrade from Breezy to Dapper my disks would be numbered :
mother board ide 0 : hda
mother board ide 1 : hdb
pci ide 0 : hde

With the new kernel :
pci ide 0 : hda
mother board ide 0 : hde
mother board ide 1 : hdf

Even after using the 'evms_activate' fix, the pc wouldn't find the correct root filesystem due to the disk order change.

My fixes :
Bad fix : use the older kernel
Good fix : forget the upgrade, install again from the live cd

I would rather have the old order of my disks because now I can't remove the pci ide card without disturbing the whole system.

--
Bodinux

Revision history for this message
Stan (sklein-cpcug) wrote :

I'm trying to boot Ubuntu from the boot partition set up by my primary Fedora Core 5 installation, where Ubuntu is installed (via the alternate install iso) on an LVM partition as a secondary Linux distro.

I had tried multiple things to get Ubuntu to boot on my system and finally arrived at copying the Ubuntu kernel and initrd to the same /boot set up by FC5. Ubuntu is installed on VolGroup00/LogVol03. My grub kernel root statement is root=/dev/mapper/VolGroup00-LogVol03/. The boot kept hanging at "Waiting for root file system". I followed the advice of threads in the Ubuntu forums and waited. I then got a message that /dev/mapper/VolGroup00-LogVol03/ does not exist, and the initrd brought up a shell with BusyBox.

Just before hanging there was a message that four partitions were now active in VolGroup00.

I looked at the /dev of the initrd, and there were /dev/mapper/VolGroup00-LogVol03 and /dev/VolGroup00/LogVol03. So the root filesystem is really there! I then created a mount point /mnt/ubuntu and tried to mount the filesystem. It kept telling me the filesystem didn't exist until I tried "mount -t ext3 /dev/mapper/VolGroup00-LogVol03 /mnt/ubuntu" and it mounted. So it isn't detecting the filesystem type, needs to be told what it is, and is giving a misleading message that the root filesystem simply doesn't exist.

Now the issue is how to tell the kernel that the filesystem is ext3, just like I did with the mount from the initrd. I have tried to add various statements such as rootfstype=ext3 to the kernel statement in grub, without success. I tried it in various places on the line (e.g., before and after the root= statement) with no success. I tried a statement rootflags="-t ext3" but that didn't work either. I also tried rootfstype statements with ext3fs and ext2 without success.

Apparently the boot process is correctly recognizing and setting up the lvm partition, but is somehow failing to recognize and mount the file system there.
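For reference, the kind of grub-legacy boot entry being described would look roughly like this (kernel version and device names are placeholders, and as noted above, the rootfstype option did not resolve this particular case):

```
title  Ubuntu (LVM root)
root   (hd0,0)
kernel /vmlinuz-2.6.15-20-386 root=/dev/mapper/VolGroup00-LogVol03 rootfstype=ext3 ro
initrd /initrd.img-2.6.15-20-386
```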

Revision history for this message
Joey Stanford (joey) wrote :

I have also experienced this problem (as Stan has described) but with a fresh install of Dapper alt install. Everything works according to plan (and according to the wiki entry) but then I fall into the same pit: "It kept telling me the filesystem didn't exist until I tried "mount -t ext3 /dev/mapper/VolGroup00-LogVol03 /mnt/ubuntu" and it mounted. So it isn't detecting the filesystem type, needs to be told what it is, and is giving a misleading message that the root filesystem simply doesn't exist."

Revision history for this message
Michael Flaig (mflaig) wrote :

This could be a problem with your initrd. Are you seeing /dev/mapper devices within your initramfs? Maybe initramfs-tools just forgot to include lvm support in your initrd?
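One way to answer that question is to list the initramfs contents and look for the LVM userspace tools. A minimal sketch, assuming a gzipped-cpio initramfs (the format initramfs-tools produced in this era); the helper name is made up for illustration:

```shell
# has_lvm_support: read an initramfs file listing on stdin and report,
# via exit status, whether LVM userspace tools appear to be present.
has_lvm_support() {
    grep -Eq '(^|/)(lvm|vgchange|vgscan)$'
}

# Typical use (path illustrative):
#   zcat /boot/initrd.img-$(uname -r) | cpio -t | has_lvm_support \
#       && echo "LVM tools present" || echo "LVM tools missing"
```

If the tools are missing, regenerating the image (e.g. with update-initramfs -u after making sure lvm2 is installed) is the obvious next step.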

Revision history for this message
Henrik Hjelte (henrik-evahjelte) wrote :

Still there on Feisty.
I just had this problem (Stan's, except I only have a boot partition and an Ubuntu partition, no Fedora) when upgrading to Feisty using apt-get dist-upgrade. After rebooting, Feisty doesn't see my volume group /dev/mapper/vg1, stalling around the line "device-mapper: 4?-ioctl initialized".
I have finally managed to boot into an old kernel, 2.6.15-26. This was actually a surprise: when booting into the old kernel the boot procedure stalled for about five minutes (something about waiting for RAID), but now I finally have a root terminal open. Sorry I can't be more specific now, but I am afraid to reboot again until I have solved the problem. Now at least I have some clues from the answers above, thanks.

If apt-get dist-upgrade is an obsolete and dangerous method of upgrading, maybe one could add a clear warning message when someone tries it? I read afterwards on a forum that there is a newer, safer method of upgrading; I should probably have tried that instead. It would be nice if apt-get had told me too. Maybe a script that filters out dist-upgrade and shows warnings before calling the real apt-get.

Adam Conrad (adconrad)
Changed in initramfs-tools:
assignee: adconrad → nobody
Revision history for this message
Björn Lindqvist (bjourne) wrote :

I filed a similar bug here https://bugs.launchpad.net/ubuntu/+source/linux/+bug/255477 for hardy. Might be a dupe? The problem is still present though.

Revision history for this message
Djamu (djamu) wrote :

Is there a resolution available for this?

I upgraded a server from 8.04.1 > 8.10 and am using an LVM root with snapshots.
Unfortunately the new 2.6.27-7-server kernel fails to mount the LVM root (halts after initramfs).
The older kernel (2.6.24-21-server) does work...

Revision history for this message
Lenin (gagarin) wrote :

This bug affects me with 2.6.32-19.28 kernel on lucid. Kernel 2.6.31-17 works though...

Revision history for this message
Ruben Verweij (ruben-verweij) wrote :

This bug affects me with 2.6.32.21.22 on lucid.
With kernel 2.6.32.19 it does work for me: plymouth asks for the passphrase and I can successfully log in.

Revision history for this message
Hark (cab902) wrote :

On lucid server x86-64, after "aptitude upgrade" the system was left unbootable. mount in the initramfs said "device not found" for /dev/mapper/ubuntu-root. I fixed it by adding "rootfstype=ext4" in /boot/grub/grub.cfg.
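The workaround described here amounts to appending rootfstype=ext4 to the kernel command line. In a lucid-era /boot/grub/grub.cfg the relevant line would look roughly like this (kernel version and device name are placeholders; note that grub.cfg is regenerated by update-grub, so the durable place for the option is GRUB_CMDLINE_LINUX in /etc/default/grub):

```
linux /vmlinuz-2.6.32-21-server root=/dev/mapper/ubuntu-root ro rootfstype=ext4
```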

Revision history for this message
Henning Eggers (henninge) wrote :

I marked bug 255477 as a duplicate of this. If anyone disagrees, please change it back.

This bug has been hanging around for a long time, and for those who get bitten by it, it means server downtime at the console (which is always a pain) and the need to repair the system with alternate boot media. Here is what happened to me and how I fixed it:

1. Installed a fresh Ubuntu 10.04.1 (i386) over the network, because the box has no optical drive and the BIOS is too old for USB booting. I installed it with an LVM root.
2. A few weeks later I install updates (apt-get update; apt-get upgrade) which pull in a new kernel. I admit, I did not watch the update closely.
3. As requested by the update, I eventually reboot.
4. The box remains as silent as a brick on the network.
5. I pull it out, attach a console, boot it and see the "ALERT! /dev/mapper/...-root does not exist." and drop to a shell in the initrd.
6. In the initrd I see no sign of the LVM tools. I also don't see the /boot partition, but that's beside the point, I think.
7. I reboot and go to the grub menu - I don't have an old kernel to choose from!!!
8. I start the installation (through the network) again and enter rescue mode.
9. Eventually I end up on the rescue shell with my root partition mounted.
10. dpkg-query tells me that lvm2 is not installed. What?
11. I apt-get install lvm2 which automatically runs update-initramfs ... That looks promising.
12. I reboot and all is well! Woohoo!
13. The box goes back into its corner, headless.

I described this in so much detail to make it clear that this little bug means a lot of work for somebody running an Ubuntu server and is a big annoyance. I don't think something like this should happen and something needs to be figured out. It is not a bug in initramfs-tools AFAICT but more in the dependencies as somebody on bug 255477 already mentioned. Can somebody please add the right project or package that this needs to be fixed in, then mark the bug invalid for initramfs-tools?

I will set this bug to critical in Ubuntu because I think it really is and also to maybe draw some attention to it. Whoever downgrades it should please be so kind to explain why it is not. Thank you. ;-)

Revision history for this message
Henning Eggers (henninge) wrote :

Rats, cannot do that. ;)

Revision history for this message
Vince (tucsonclimber) wrote :

I was affected by this bug on a Lucid (10.04) two-step upgrade from 9.04 (Jaunty to Karmic to Lucid) with an active root snapshot. I found that the 2.6.31-22 kernel worked consistently and all later kernels attempted (2.6.32-22 and 2.6.32-25) failed in one of two ways:

The most common error was a drop to the shell after reporting a bad block on the root filesystem mount. I was never able to recover from this, even with manual mounts (always successful), mount moves and chroot.

The less common error was the "cannot find /dev/mapper/rootvg-rootlv" message from the wait-for-root procedure. In these cases, I was always able to just type exit and it would retry (successfully) the failed mount.

In both cases, the wait-for-root call would take a full 30 seconds (or longer with rootdelay) - it was NOT detecting the volume before the timeout in any case.

After much experimentation with rootfstype, rootdelay, etc. I finally decided to remove the snapshot of the root volume that I had allocated prior to the upgrade and have booted successfully since then.

Please note - there was a snapshot of the root LV that I had created prior to the upgrade (for safety), but I was mounting the base LV, not the snapshot.

I have since recreated a snapshot of the root volume again with no problems booting.

I would conclude from this that there is a timing problem between the registration of the volumes with DM and when the volumes are actually usable. In some cases, the /dev/mapper links would not be created within the timeout, and in others the links would be created, but the read of the superblock would return garbage. I assume the differences between the kernel versions are related to this timing, or perhaps addition of additional threading that created a race condition. Further, the fact that the 25% full snapshot (the original) failed while the mostly empty snapshot succeeded would indicate the timing problem is related to the number of changed pages in the snapshot. The fact that manipulating the rootdelay never affected the problem (except to increase the time it took to present) indicates that the race-condition is somehow related to a lock held by the wait-for-root.

I did not encounter any problems with the upgrade of the LVM configurations or packages between Jaunty and Lucid as described by others - the initramfs configurations were all correct (with the possible exception of wait-for-root NEVER completing before the timeout).

Revision history for this message
Milko Krachounov (exabyte) wrote :

Djamu and Vince (tucsonclimber): I think you're experiencing a bug that's different from this one (bug #360237). I don't have the bug described here, but I was bitten by #360237.

Revision history for this message
Vince (tucsonclimber) wrote :

Milko - you are correct - bug #360237 exactly describes my problem.

Revision history for this message
Phillip Susi (psusi) wrote :

Is this still happening for anyone?

Changed in initramfs-tools (Ubuntu):
status: Confirmed → Incomplete
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for initramfs-tools (Ubuntu) because there has been no activity for 60 days.]

Changed in initramfs-tools (Ubuntu):
status: Incomplete → Expired