Noble installer's partitioner does not recognise existing storage containers and data structures

Bug #2065236 reported by Mike Ferreira
This bug affects 22 people
Affects                   Status  Importance  Assigned to  Milestone
subiquity                 New     Undecided   Unassigned
ubuntu-desktop-provision  New     Undecided   Unassigned

Bug Description

Summary: The Desktop Installer's new partitioner, in Manual partitioning mode, does not recognize pre-existing storage containers and data structures. That goes for LVM2, LUKS, and mdadm.

RE: https://bugs.launchpad.net/ubuntu-desktop-installer/+bug/2014977

This started back in Lunar, continued through Mantic, and carries on into Noble.

The above bug report was mostly about LVM2 storage containers not being recognized by the new partition manager chosen for the newer Flutter-based installer that Ubuntu switched to starting with Lunar.

In the announcement of the (then) new installer on Ubuntu Discourse, the reason given for choosing the new partitioner was that it recognized BitLocker file structures... presumably because they thought that would make it easier for users to install Ubuntu alongside Windows. I will get back to that later. Also in that announcement, Canonical said they were working to improve the new partitioner. I can verify that neither I nor my colleagues have seen any improvements, changes, or fixes to the new partitioner, at least not for manual partitioning or for whether it recognizes storage containers. That is still broken through Noble, which is now released.

The above bug report carried over into Mantic. When we were testing DEV-Noble during that dev cycle, we referred to this bug as still being open, so we didn't file a new one for it.

During late DEV-Mantic and early DEV-Noble, I asked a Launchpad Question about what the newly used partitioner actually "was". The question was asked again, by many users in many forms, as issues on the Canonical GitHub ubuntu-desktop-installer repository. Those all went unanswered.

Also in the Noble dev cycle, the Canonical ubuntu-desktop-installer team had a major changing of the guard; when the new team took over, they changed the name from ubuntu-desktop-installer to ubuntu-desktop-provision. It is the same installer with the same partitioner.

It's too late for Lunar; that is no longer in support. Mantic is still in support for a little over two months, but the last bug report was closed on Launchpad just before the release, saying it would not be fixed for Mantic, and that Noble no longer used ubuntu-desktop-installer but rather ubuntu-desktop-provision. Remember that Mantic is still in official support at this moment.

???

It's my understanding that they are the same snap, with the same underlying code. Just changing the buzzword of the name did not make it a completely different installer. AND it still has this same problem. It didn't go away with the name change, and neither am I going away.

Fine. Here is a "new" Bug Report filed against current Ubuntu Noble 24.04 LTS.

I will document it fully. I will recruit the many users on the forum who consider this a major bug with the installer.

Note that some long-time, dedicated users are saying that if this is not fixed, then they, and those they support, will not go to Ubuntu 24.04, but rather change back to Debian, whose installer still recognizes these structures... Not fixing this will cost us dearly.

Revision history for this message
Mike Ferreira (mafoelffen) wrote :

>> apport-collect 2065236
does not want to submit info. The popup says "no information collected"...

Revision history for this message
Mike Ferreira (mafoelffen) wrote :

This is the test case:

mafoelffen@ubuntu-desktop-provision:~$ lsblk -e7 -o name,label,size,fstype,mountpoint
NAME                        LABEL                           SIZE FSTYPE            MOUNTPOINT
sda                                                          80G
├─sda1                      EFI                            1000M vfat
├─sda2                                                      2.9G ext4              /boot
└─sda3                                                     76.1G LVM2_member
  ├─vg--root--disk-lv--swap                                   8G swap              [SWAP]
  └─vg--root--disk-lv--root                                54.5G btrfs             /
sdb                                                          40G
└─sdb1                                                       40G LVM2_member
  └─vg--home-lv--home                                        32G btrfs             /home
sdc                                                          10G
└─sdc1                      ubuntu-server:0                  10G linux_raid_member
  └─md0                                                      20G
    └─md0p1                                                  20G btrfs             /data
sdd                                                          10G
└─sdd1                      ubuntu-server:0                  10G linux_raid_member
  └─md0                                                      20G
    └─md0p1                                                  20G btrfs             /data
sde                                                          10G
└─sde1                      ubuntu-server:0                  10G linux_raid_member
  └─md0                                                      20G
    └─md0p1                                                  20G btrfs             /data
sdf                                                           8G
└─sdf1                                                      538M vfat              /boot/efi
sr0                         Ubuntu-Server 24.04 LTS amd64   2.6G iso9660
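
For reference, a storage layout like the one above can be reproduced from a live session before launching the installer; a rough sketch (the VG/LV names match the lsblk output above, but the RAID level and exact sizes are my assumptions):
>>>
# LVM2: one VG on the root disk (sda3), one on the home disk (sdb1)
sudo pvcreate /dev/sda3 /dev/sdb1
sudo vgcreate vg-root-disk /dev/sda3
sudo vgcreate vg-home /dev/sdb1
sudo lvcreate -L 8G -n lv-swap vg-root-disk
sudo lvcreate -L 54.5G -n lv-root vg-root-disk
sudo lvcreate -L 32G -n lv-home vg-home
# mdadm: three 10G members yielding a 20G array, consistent with RAID5
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sde1
>>>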

Revision history for this message
Mike Ferreira (mafoelffen) wrote :

With Ubuntu Server 24.04 amd64 ISO:

Revision history for this message
Mike Ferreira (mafoelffen) wrote :

Next, on a reinstall, and forever after:

Revision history for this message
Mike Ferreira (mafoelffen) wrote :

With Desktop, you can see that the system of the Live Environment sees all:
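
The same detection can be confirmed from a terminal in the live session with the standard tools; a minimal sketch (nothing here is installer-specific):
>>>
sudo pvs                      # LVM2 physical volumes
sudo vgs                      # volume groups
sudo lvs                      # logical volumes
sudo mdadm --detail /dev/md0  # the RAID array
lsblk -f                      # filesystems and signatures on all block devices
>>>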

Revision history for this message
Mike Ferreira (mafoelffen) wrote :

But the partitioner of the Desktop Installer sees nothing:

Revision history for this message
Allen James (aljames) wrote :

I am a newer user of Ubuntu, < 5 yrs. I value an installer that can see my LVs. I don't need the installer to do the creative partitioning for me; it just needs to see the LVs and allow me to install into them by choosing where the system mountpoints should live, according to my needs. I'm using 22.04 for now, until the 24.04 installer is less rigid. Fingers crossed.

Revision history for this message
Mike Ferreira (mafoelffen) wrote (last edit ):

This includes part of this failure: https://bugs.launchpad.net/ubuntu-desktop-provision/+bug/2058511

But this bug report is not a duplicate of that one, as that report deals only with a problem in LVM2 alone, not the bigger problem, where much more is wrong and much less is recognized than thought.

This could possibly be merged with that bug report... if the bigger problem is included within it by expanding its scope.

Revision history for this message
Mr Steven Rogers (gezzer) wrote :

I have found that the 24.04 installer will not even do a clean install on or over a LUKS-encrypted partition. The encrypted partition has to be completely wiped/formatted before a new install of 24.04 will complete and be usable. Steve Rogers, Ubuntu User.
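
For anyone hitting this, the wipe can be done from the live session; a hedged sketch (/dev/sdX2 is a placeholder for the actual LUKS partition, and this permanently destroys its contents):
>>>
sudo cryptsetup erase /dev/sdX2   # destroy the LUKS keyslots
sudo wipefs -a /dev/sdX2          # remove the remaining LUKS signature
>>>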

Revision history for this message
Rick S (1fallen) wrote :

I found the same as gezzer, but on ZFS: it had to be wiped clean before a successful install. https://ubuntuforums.org/showthread.php?t=2497029

Kind of a show-stopper for my production machines :(
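
For ZFS specifically, clearing the old pool labels is often enough to stop tools from tripping over the pool; a minimal sketch (device names are placeholders, and this destroys the pool):
>>>
sudo zpool labelclear -f /dev/sdX1   # clear ZFS labels from each old pool member
sudo wipefs -a /dev/sdX              # remove any remaining signatures on the disk
>>>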

Revision history for this message
Mike Ferreira (mafoelffen) wrote (last edit ):

For #10: I had submitted a bug report in DEV-Mantic (https://bugs.launchpad.net/subiquity/+bug/2038856), where I found that with pre-existing ZFS, the code for erasing/cleaning the disk wasn't working as desired. The part of the code that fails and throws an error is "delete_zpool".

I don't see that they made a patch for that, but if the installer doesn't recognize the pool, then that code won't be triggered (right)? So that seems to fall under this same bug, maybe.

That is where I came up with the workaround/recommendation to zero the disk manually, which should probably be added as a patch to the code.
>>>
# Placeholder: set to the target disk, e.g. a /dev/disk/by-id/ path
DISK=<unique_disk_name>
# Discard the entire device (-f forces it even if existing signatures are detected)
sudo blkdiscard -f "$DISK"
# Erase all filesystem, RAID, and LVM signatures
sudo wipefs -a "$DISK"
# Destroy the GPT and MBR partition tables
sudo sgdisk --zap-all "$DISK"
>>>
You might want to add yourself to that report and note that it is still a problem with the Noble installer. That would bump that report into "Confirmed" status and get it going again.

Revision history for this message
Mike Ferreira (mafoelffen) wrote (last edit ):

Sidenote: You will note that on the 24.04 Server Edition ISO, I had to add another disk to get around another issue (what I consider a bug) in that installer/partitioner. It sees the storage containers/file structures, BUT...

If you have a pre-existing EFI partition, it will not let you choose that disk as the boot disk unless the EFI partition is on a disk other than the root disk. And once selected/set, it will not allow you to change your choice; it is locked in.

That has been a long-standing bug, but I need to file it separately, as that is a different installer and partitioner.

Revision history for this message
Excalibur (dev-arthurk) wrote :

As a System76/Pop_OS user, this is a particularly frustrating use case: Pop_OS is still based on 22.04, and their new "Cosmic Desktop" is taking its time to be released. So I want to dual-boot with a modern Ubuntu install, especially as a KDE user, given how much KDE has evolved since 22.04.

Pop_OS was OEM-installed with LVM and LUKS, so what should be a very simple two-distro dual boot into a second VG is looking to be a very time-consuming endeavor.

Revision history for this message
DiagonalArg (diagonalarg) wrote :

The case of not being able to install over LUKS containers has been reported:

https://bugs.launchpad.net/ubuntu-desktop-provision/+bug/2064193
https://bugs.launchpad.net/ubuntu-desktop-provision/+bug/2060678

Also, Debian now allows encryption of /boot out of the box. As that's not available with Ubuntu, I have to prepare my disks in advance, which is where I hit this bug.
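
For context, "preparing the disks in advance" here typically means creating the LUKS container and the LVM stack manually before launching the installer; a rough sketch (the device, mapping, and VG/LV names are placeholders, and LUKS1 is chosen on the assumption that GRUB must unlock /boot):
>>>
sudo cryptsetup luksFormat --type luks1 /dev/sdX2   # LUKS1 so GRUB can unlock it
sudo cryptsetup open /dev/sdX2 cryptlvm             # map the container
sudo pvcreate /dev/mapper/cryptlvm                  # build LVM inside the container
sudo vgcreate vg-system /dev/mapper/cryptlvm
sudo lvcreate -L 8G -n lv-swap vg-system
sudo lvcreate -l 100%FREE -n lv-root vg-system
>>>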

Revision history for this message
Damiön la Bagh (kat-amsterdam) wrote :

Another test case, plus a screenshot of the LVs missing in the installer:

ubuntu@ubuntu:~$ lsblk -e7 -o name,label,size,fstype,mountpoint
NAME                     LABEL                     SIZE FSTYPE       MOUNTPOINT
sda                                              465.8G
└─sda1                                           465.7G LVM2_member
  └─vgvirtual-vm                                   350G ext4
sdb                                              447.1G
└─sdb1                                           447.1G LVM2_member
sdc                                              894.3G
└─sdc1                                           894.2G LVM2_member
  ├─vgrecover-recover                              3.9T ext4
  └─vgrecover-swap                                  36G swap
sdd                                              894.3G
└─sdd1                                           894.2G LVM2_member
  ├─vgrecover-recover                              3.9T ext4
  └─vgrecover-swap                                  36G swap
sde                                              894.3G
└─sde1                                           894.2G LVM2_member
  ├─vgrecover-recover                              3.9T ext4
  ├─vgrecover-swap                                  36G swap
  └─vgrecover-spotcache  spotcache                97.7G ext4
sdf                                              894.3G
└─sdf1                                           894.2G LVM2_member
  ├─vgrecover-recover                              3.9T ext4
  ├─vgrecover-swap                                  36G swap
  └─vgrecover-spotcache  spotcache                97.7G ext4
sdg                                              894.3G
└─sdg1                                           893.6G LVM2_member
  ├─vgrecover-recover                              3.9T ext4
  └─vgrecover-swap                                  36G swap
sdh                      Ubuntu 24.04 LTS amd64   28.9G iso9660
├─sdh1                   Ubuntu 24.04 LTS amd64    5.7G iso9660      /cdrom
├─sdh2                   ESP                         5M vfat
├─sdh3                                             300K
└─sdh4                   writable                 23.2G ext4         /var/crash
nvme0n1                                          238.5G
├─nvme0n1p1                                         64M
├─nvme0n1p2              efi                       256M vfat
└─nvme0n1p3                                      237.9G LVM2_member
  ├─vgdev-root                                      83G ext4
  └─vgdev-home                                     150G ext4

Revision history for this message
Damiön la Bagh (kat-amsterdam) wrote :

See the screenshot.

You can see that the vgdev-home LV is missing, so you can't do a fresh install without erasing all of the user's personal data, or restoring for hours on end from a backup.

Revision history for this message
Damiön la Bagh (kat-amsterdam) wrote :

Please see the screenshot for reference.

The previous LTS version, Ubuntu 22.04.4, shows the correct LVM volumes/partitions installed on the machine.

Revision history for this message
Damiön la Bagh (kat-amsterdam) wrote :

Curious how people who use the LVM install option from the Advanced Options menu in the 24.04 installer are supposed to reinstall their machines without a complete wipe of their data.

Revision history for this message
Cliff (klfjoat) wrote :

My Ubuntu upgrade from 23.10 to 24.04 got hosed. So I'll just do what I've done half a dozen times before on Ubuntu and install the new version over the borked install, right?

Wrong. This YEARS-LONG bug is preventing me from doing that. So I need to do a rescue backup (to supplement my normal backups), wipe, and reinstall.

Might as well just move to a different distro while I'm at it.
