Noble installer's partitioner does not recognise existing storage containers and data structures

Bug #2065236 reported by Mike Ferreira
This bug affects 54 people
Affects                   Status  Importance  Assigned to  Milestone
subiquity                 New     Undecided   Unassigned
ubuntu-desktop-bootstrap  New     Undecided   Unassigned
ubuntu-desktop-provision  New     Undecided   Unassigned

Bug Description

Summary: The Desktop Installer's new partitioner, in Manual partitioning mode, does not recognize pre-existing storage containers and data structures. That goes for LVM2, LUKS, and mdadm.

RE: https://bugs.launchpad.net/ubuntu-desktop-installer/+bug/2014977

This started back with Lunar, continued through Mantic, and carries on into Noble.

The above bug report was mostly about LVM2 storage containers not being recognized by the new partition manager chosen for the Flutter-based installer that Ubuntu switched to in Lunar.

In the announcement of the (then) new installer on Ubuntu Discourse, the reason given for choosing the new partitioner was that it recognized BitLocker file structures... Guessing they thought that would make it easier for Users to install Ubuntu alongside Windows. I will get back to that later. Also in that announcement, Canonical said they were working to improve the new partitioner. I can verify that neither I nor my colleagues have seen any improvements, changes, or fixes to the new partitioner, at least not for manual partitioning, nor for whether it recognizes storage containers. That is still broken through Noble, which is now released.

The above bug report carried on into Mantic. When we were testing DEV-Noble during that DEV cycle, we referred to this bug as still being open, so we didn't file a new one on it.

During late DEV-Mantic and early DEV-Noble, I asked a Launchpad Question asking if anyone knew what the newly used partitioner actually "was". It was asked again, in many forms, by many users, as issues at the Canonical GitHub ubuntu-desktop-installer repository. Those all went unanswered.

Also in the Noble DEV cycle, the Canonical ubuntu-desktop-installer team had a major changing of the guard; when the new team took over, they changed the name from ubuntu-desktop-installer to ubuntu-desktop-provision. It is the same installer with the same partitioner.

It's too late for Lunar; that is no longer in support. Mantic is still in support for a little over two months, but Launchpad closed the last bug report just before the release, saying they would not fix it for Mantic, because Noble no longer used ubuntu-desktop-installer but rather ubuntu-desktop-provision. Remember that Mantic is still in official support at this moment.

???

It's my understanding that they are the same snap, with the same underlying code. Just changing the buzzword in the name did not make it completely different. AND it still has this same problem. It didn't go away in the name change. It didn't go away, and nor am I going away.

Fine. Here is a "new" Bug Report filed against current Ubuntu Noble 24.04 LTS.

I will document it fully. I will recruit the many Users on the Forum who consider this a major bug in the installer.

Note that some long-time, dedicated Users are saying that if this is not fixed, then they, and those they support, will not go to Ubuntu 24.04, but will instead change back to Debian, whose installer still recognizes those structures... Not fixing this will cost us dearly.

Revision history for this message
Mike Ferreira (mafoelffen) wrote :

>> apport-collect 2065236
does not want to submit info. The popup says "no information collected"...

Revision history for this message
Mike Ferreira (mafoelffen) wrote :

This is the test case:

mafoelffen@ubuntu-desktop-provision:~$ lsblk -e7 -o name,label,size,fstype,mountpoint
NAME LABEL SIZE FSTYPE MOUNTPOINT
sda 80G
├─sda1 EFI 1000M vfat
├─sda2 2.9G ext4 /boot
└─sda3 76.1G LVM2_member
  ├─vg--root--disk-lv--swap 8G swap [SWAP]
  └─vg--root--disk-lv--root 54.5G btrfs /
sdb 40G
└─sdb1 40G LVM2_member
  └─vg--home-lv--home 32G btrfs /home
sdc 10G
└─sdc1 ubuntu-server:0 10G linux_raid_member
  └─md0 20G
    └─md0p1 20G btrfs /data
sdd 10G
└─sdd1 ubuntu-server:0 10G linux_raid_member
  └─md0 20G
    └─md0p1 20G btrfs /data
sde 10G
└─sde1 ubuntu-server:0 10G linux_raid_member
  └─md0 20G
    └─md0p1 20G btrfs /data
sdf 8G
└─sdf1 538M vfat /boot/efi
sr0 Ubuntu-Server 24.04 LTS amd64 2.6G iso9660
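
For anyone who wants to recreate a similar layout on scratch disks before booting the installer, here is a rough sketch of the setup commands (device names and sizes are examples, not my exact history):

# LVM: root VG with swap and root LVs (sda3 above)
sudo pvcreate /dev/sda3
sudo vgcreate vg-root-disk /dev/sda3
sudo lvcreate -L 8G -n lv-swap vg-root-disk
sudo lvcreate -L 54G -n lv-root vg-root-disk
sudo mkswap /dev/vg-root-disk/lv-swap
sudo mkfs.btrfs /dev/vg-root-disk/lv-root

# LVM: separate home VG (sdb1 above)
sudo pvcreate /dev/sdb1
sudo vgcreate vg-home /dev/sdb1
sudo lvcreate -L 32G -n lv-home vg-home
sudo mkfs.btrfs /dev/vg-home/lv-home

# mdadm: 3-disk RAID5 for /data (3 x 10G -> ~20G usable; I partitioned
# md0 afterwards, but formatting it directly also reproduces the bug)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sde1
sudo mkfs.btrfs /dev/md0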

Revision history for this message
Mike Ferreira (mafoelffen) wrote :

With Ubuntu Server 24.04 amd64 ISO:

Revision history for this message
Mike Ferreira (mafoelffen) wrote :

Next on a reinstall and forever after:

Revision history for this message
Mike Ferreira (mafoelffen) wrote :

With Desktop, you can see that the system of the Live Environment sees all:

Revision history for this message
Mike Ferreira (mafoelffen) wrote :

But the partitioner of the Desktop Installer sees nothing:

Revision history for this message
Allen James (aljames) wrote :

I am a newer user of Ubuntu, < 5yrs. I value an installer that can see my LVs. I don’t have to have the installer do the creative partitioning for me, it just needs to see the LVs and allow me to install into them, by choosing where system mountpoints should live, according to my needs. Using 22.04 for now and until the 24.04 installer is less rigid. Fingers crossed.

Revision history for this message
Mike Ferreira (mafoelffen) wrote (last edit ):

This includes part of this failure: https://bugs.launchpad.net/ubuntu-desktop-provision/+bug/2058511

But this bug report is not a duplicate of that one, as that report deals only with a problem in LVM2 alone, not the bigger problem, where much more is wrong and much less is recognized than assumed.

This could possibly be merged with that bug report, if the bigger problem were brought within its scope.

Revision history for this message
Mr Steven Rogers (gezzer) wrote :

I have found that the 24.04 installer will not even do a clean install on or over a LUKS-encrypted partition. The encrypted Linux partition has to be completely wiped/formatted before a new install of 24.04 will complete and be usable. Steve Rogers, Ubuntu User.

Revision history for this message
Rick S (1fallen) wrote :

I found the same as gezzer, but with ZFS: I had to wipe it clean before a successful install. https://ubuntuforums.org/showthread.php?t=2497029

Kind of a show stopper for my production machines :(

Revision history for this message
Mike Ferreira (mafoelffen) wrote (last edit ):

For #10, I submitted a bug report in DEV-Mantic (https://bugs.launchpad.net/subiquity/+bug/2038856), where I found that with pre-existing ZFS, the code for erasing/cleaning wasn't working as desired. The part of the code that fails and throws an error is "delete_zpool".

I don't see that they made a patch for that, but if the installer doesn't recognize the pool, then that code won't be triggered (right)? That seems to fall under this same problem, maybe.

That is where I came up with the workaround/recommendation to zero the disk manually, and suggested that this should probably be added as a patch to the code:
>>>
DISK=<unique_disk_name>       # e.g. /dev/sdX -- set this to the disk you intend to wipe
sudo blkdiscard -f $DISK      # discard every block on the device
sudo wipefs -a $DISK          # erase all filesystem, RAID, and LVM signatures
sudo sgdisk --zap-all $DISK   # destroy the GPT and MBR partition tables
>>>
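
To avoid zeroing the wrong device, double-check the target first. A minimal sanity check (with /dev/sdX as a stand-in for the real device):
>>>
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sdX   # confirm this is the intended disk
sudo wipefs -n /dev/sdX                         # dry run: list signatures without erasing anything
>>>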
You might want to add yourself to that report and note that it is still a problem with the Noble installer. That would bump the report into "Confirmed" status, and get it going again.

Revision history for this message
Mike Ferreira (mafoelffen) wrote (last edit ):

Sidenote: You will note that with the 24.04 Server Edition ISO, I had to add another disk to get around another issue (what I consider a bug) in that installer/partitioner. It sees the storage containers/file structures, BUT...

If you have a pre-existing EFI partition, it will not let you choose that disk as the boot disk unless the EFI partition is on a disk other than the root disk. And once selected/set, it will not allow you to change your choice; it is locked in.

That has been a long-standing bug, but I need to file it separately, as that is a different installer and partitioner.

Revision history for this message
Excalibur (dev-arthurk) wrote :

As a System76/Pop_OS user, this is a particularly frustrating use case: Pop_OS is still based on 22.04, and its new "Cosmic Desktop" is taking its time to be released. So I want to dual-boot with a modern Ubuntu install, especially as a KDE user, given how much KDE has evolved since 22.04.

Pop_OS was OEM-installed with LVM and LUKS, so what should be a very simple two-distro dual boot into a second VG is looking to be a very time-consuming endeavor.

Revision history for this message
DiagonalArg (diagonalarg) wrote :

The case of not being able to install over LUKS containers has been reported:

https://bugs.launchpad.net/ubuntu-desktop-provision/+bug/2064193
https://bugs.launchpad.net/ubuntu-desktop-provision/+bug/2060678

Also, Debian now allows encryption of /boot out of the box. As that's not available with Ubuntu, I have to prepare my disks in advance, which is where I hit this bug.
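
For context, the advance preparation is roughly the following. A minimal LUKS-on-LVM sketch with illustrative device and VG names (LUKS1, since GRUB can unlock that format for an encrypted /boot):

sudo cryptsetup luksFormat --type luks1 /dev/sda2   # LUKS1 so GRUB can open it at boot
sudo cryptsetup open /dev/sda2 cryptroot            # map the unlocked container
sudo pvcreate /dev/mapper/cryptroot                 # LVM on top of LUKS
sudo vgcreate vg0 /dev/mapper/cryptroot
sudo lvcreate -L 30G -n root vg0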

Revision history for this message
Damiön la Bagh (kat-amsterdam) wrote :

Another test case, plus a screenshot of the LVs missing in the installer:

ubuntu@ubuntu:~$ lsblk -e7 -o name,label,size,fstype,mountpoint
NAME LABEL SIZE FSTYPE MOUNTPOINT
sda 465.8G
└─sda1 465.7G LVM2_member
  └─vgvirtual-vm 350G ext4
sdb 447.1G
└─sdb1 447.1G LVM2_member
sdc 894.3G
└─sdc1 894.2G LVM2_member
  ├─vgrecover-recover 3.9T ext4
  └─vgrecover-swap 36G swap
sdd 894.3G
└─sdd1 894.2G LVM2_member
  ├─vgrecover-recover 3.9T ext4
  └─vgrecover-swap 36G swap
sde 894.3G
└─sde1 894.2G LVM2_member
  ├─vgrecover-recover 3.9T ext4
  ├─vgrecover-swap 36G swap
  └─vgrecover-spotcache spotcache 97.7G ext4
sdf 894.3G
└─sdf1 894.2G LVM2_member
  ├─vgrecover-recover 3.9T ext4
  ├─vgrecover-swap 36G swap
  └─vgrecover-spotcache spotcache 97.7G ext4
sdg 894.3G
└─sdg1 893.6G LVM2_member
  ├─vgrecover-recover 3.9T ext4
  └─vgrecover-swap 36G swap
sdh Ubuntu 24.04 LTS amd64 28.9G iso9660
├─sdh1 Ubuntu 24.04 LTS amd64 5.7G iso9660 /cdrom
├─sdh2 ESP 5M vfat
├─sdh3 300K
└─sdh4 writable 23.2G ext4 /var/crash
nvme0n1 238.5G
├─nvme0n1p1 64M
├─nvme0n1p2 efi 256M vfat
└─nvme0n1p3 237.9G LVM2_member
  ├─vgdev-root 83G ext4
  └─vgdev-home 150G ext4

Revision history for this message
Damiön la Bagh (kat-amsterdam) wrote :

See the screenshot.

You can see that the vgdev-home LV is missing, so I can't do a fresh install without erasing all of the user's personal data, or restoring for hours on end from a backup.
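
For anyone gathering evidence on an affected machine: the live session itself can confirm the LVM metadata is intact, even though the installer shows nothing. Standard LVM2 commands, nothing specific to this bug:

sudo vgscan                                   # scan all block devices for volume groups
sudo vgchange -ay                             # activate every logical volume found
sudo lvs -o lv_name,vg_name,lv_size,lv_path   # list the LVs the live kernel can see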

Revision history for this message
Damiön la Bagh (kat-amsterdam) wrote :

Please see the screenshot for reference.

The previous LTS version, Ubuntu 22.04.4, shows the correct LVM volumes/partitions installed on the machine.

Revision history for this message
Damiön la Bagh (kat-amsterdam) wrote :

Curious as to how people who use the LVM install option from the Advanced Options menu in the 24.04 installer are supposed to reinstall their machines without a complete wipe of their data?

Revision history for this message
Cliff (klfjoat) wrote :

My Ubuntu upgrade from 23.10 to 24.04 got hosed. So I'll just do what I've done half a dozen times before on Ubuntu and install the new version over the borked install, right?

Wrong. This YEARS-LONG bug is preventing me. So I need to do a rescue backup (to supplement my normal backups), wipe, and reinstall.

Might as well just move to a different distro while I'm at it.

Revision history for this message
Will (mccompunerd) wrote :

I was hoping to install Ubuntu 24.04 inside an existing Linux Mint LVM, but lo and behold, I ran into this issue. For crying out loud, can we please get the ability to see existing (and mounted) partitions inside the installer?

Revision history for this message
Tim Bowden (tbowden) wrote :

This bug has seriously complicated the process of upgrading multiple systems for me. I'm using LVM on a number of older desktop systems and I've had to push back the upgrade as a result.

Revision history for this message
Seb Bonnard (sebma) wrote (last edit ):

I have tried both the Ubuntu 24.04 and 23.10.1 GUI installers, and they do not see my LVM structures on /dev/sda2:

ubuntu@ubuntu:~$ sudo pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  VG_ALL lvm2 a--  237.47g 112.47g
ubuntu@ubuntu:~$ sudo lvs
  LV     VG     Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  home   VG_ALL -wi-a----- 100.00g
  rootFS VG_ALL -wi-a-----  25.00g
ubuntu@ubuntu:~$ xprop _NET_WM_PID
_NET_WM_PID(CARDINAL) = 13799
ubuntu@ubuntu:~$ ps -fp 13799
UID      PID   PPID  C STIME TTY  TIME     CMD
ubuntu   13799 13763 34 14:41 ?   00:00:04 /snap/ubuntu-desktop-installer/1267/bin/ubuntu_desktop_installer
ubuntu@ubuntu:~$

What is the Ubuntu Minotaur legacy installer, and how do I launch it?

Related : https://bugs.launchpad.net/ubuntu-desktop-provision/+bug/2058511

Revision history for this message
Michael Lippert (mlippert255) wrote :

I was hoping that this was fixed for 24.04.1.
I just tried to install Kubuntu 24.04.1 on a system that is currently running 22.04 as I wanted to dual boot w/ the old install and the new fresh install of 24.04.

I booted the live 24.04, and from "Try Kubuntu" I used cryptsetup to decrypt the existing encrypted LVM partition. I created new LVs for the new 24.04 root install, and a new unencrypted partition for /boot, distinct from the one used for 22.04.
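
Roughly the preparation steps, with illustrative device, mapper, and VG names rather than my exact commands:

sudo cryptsetup open /dev/nvme0n1p3 cryptlvm   # unlock the existing LUKS container
sudo vgchange -ay                              # activate the LVs inside it
sudo lvcreate -L 40G -n noble-root vg0         # new LV for the 24.04 root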

I selected manual partitioning and specified the current /boot/efi partition, the new /boot partition, and the new LV / partition.

The install failed with
> The bootloader could not be installed. The installation command <pre>grub-mkconfig -o /boot/grub/grub.cfg</pre> returned error code 1.

When I returned to the live 24.04, the encrypted partition was no longer decrypted and I had to run cryptsetup to decrypt it again.

When I mounted the new 24.04 root partition, it was empty.

So the install completely failed.

Is there any other installer that can accomplish getting Kubuntu 24.04 dual-booted in this scenario? Text-based or legacy GUI would be fine if it works.

Revision history for this message
Daniel Tang (daniel-z-tg) wrote :

Is this fixed for 24.10? If not, I hope this gets fixed for 25.04.

My use case is that I have a new 1TB SSD. I had formatted 800GB as a compile partition for use by the existing installation. Now I want the 200GB to be a quickly-booting Ubuntu installation, but the installer threatens to either erase my existing partition or refuse to set up encryption.

Revision history for this message
William Paul Liggett (junktext-0) wrote :

I ran into this bug report trying to set up 24.04.1 with LUKS/LVM and Windows 11 with BitLocker as I, too, had issues. To respond to Daniel, I'm not sure how this affects 24.10. However, some potential good news may be coming with the future Ubuntu 25.04, according to this Discourse thread about its roadmap:

https://discourse.ubuntu.com/t/ubuntu-desktop-25-04-part-1-the-plucky-roadmap/49621#p-123134-further-refinements-to-the-desktop-installer

Quoted from the page:

Further refinements to the desktop installer

For this release, we’re returning our focus to the desktop installer to improve the user journey with a specific emphasis around the dual-boot experience.

Our goal is to provide additional messaging around the presence of other OS’s on your device and refine the scenarios around ‘Install into free space’ and ‘erase and replace an existing Ubuntu installation’.

We also want to provide out-of-the-box encryption options for dual-boot instals as well as more graceful handling of Windows BitLocker encryption scenarios. Currently we prompt users to turn off BitLocker encryption on Windows in all dual-boot scenarios, however there are circumstances where a user might have a non-encrypted drive or unencrypted partition that does not require this.

Revision history for this message
PJSingh5000 (pjsingh5000) wrote :

I am encountering this issue with ubuntu-desktop-bootstrap (version 0+git.3dcb29d80, rev 366) in Ubuntu 25.04.

My situation is much simpler than above, because my disk only has four regular partitions.
I am NOT using LVM, and I still experience this issue.

I am unable to assign mount points for two of them. In the partition table shown below, the device /dev/nvme0n1p3 is where I want to install Ubuntu, and I am unable to select "/" as the mount point.

I have attached the error log from the console. (See Error Log Ubuntu 25.04 (2025-06-07-01).txt)
I have also attached a screenshot of the Manual partition page from the installer, showing the partitions. (See Screenshot From 2025-06-08 02-58-30.png)

Here is my partition table...

$ sudo fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 953.87 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: THNSN51T02DUK TOSHIBA
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: BD0CCC72-6828-461F-89F6-A66F65B08807

Device              Start         End     Sectors    Size  Type
/dev/nvme0n1p1       2048      405503      403456    197M  EFI System
/dev/nvme0n1p2     405504    47282175    46876672   22.4G  Linux filesystem
/dev/nvme0n1p3   47282176   242595839   195313664   93.1G  Linux filesystem
/dev/nvme0n1p4  242595840  2000409230  1757813391  838.2G  Linux filesystem

Revision history for this message
PJSingh5000 (pjsingh5000) wrote :

I have attached the error log from the console (see Comment #26).

Revision history for this message
PJSingh5000 (pjsingh5000) wrote :

Screenshot of the Manual partition page from the installer, showing the partitions (see Comment #26).

Revision history for this message
PJSingh5000 (pjsingh5000) wrote :

Subiquity Server Debug Log from Ubuntu 25.04 (See comment #26)

Revision history for this message
PJSingh5000 (pjsingh5000) wrote :

Subiquity Server Info Log from Ubuntu 25.04 (See comment #26)

Revision history for this message
PJSingh5000 (pjsingh5000) wrote :

Subiquity Traceback from Ubuntu 25.04 (See comment #26)

Revision history for this message
PJSingh5000 (pjsingh5000) wrote :

Telemetry from Ubuntu 25.04 (See comment #26)

Revision history for this message
PJSingh5000 (pjsingh5000) wrote :

Here is the relevant section of the log, showing the error.

The installer thinks we are trying to change the name of the partition, when all we are trying to do is assign a mount point to it.

2025-06-08 04:13:38,138 DEBUG subiquity.server.controllers.filesystem:1498 ModifyPartitionV2(disk_id='disk-nvme0n1', partition=Partition(size=24000856064, number=2, preserve=True, wipe=None, annotations=['existing', 'already formatted as ext4', 'not mounted'], mount='/mnt/install', format=None, grub_device=False, boot=False, os=None, offset=207618048, estimated_min_size=8565817344, resize=None, path='/dev/nvme0n1p2', name='Linux filesystem', is_in_use=False, effective_mount=None, effective_format='ext4', effectively_encrypted=False))
2025-06-08 04:13:38,139 DEBUG root:38 finish: subiquity/Filesystem/v2_edit_partition_POST: SUCCESS: 500 Traceback (most recent call last):
  File "/snap/ubuntu-desktop-bootstrap/366...
2025-06-08 04:13:38,139 DEBUG subiquity.server.server:573 request to /storage/v2/edit_partition failed with status 500: "edit_partition does not support changing partition name"
Traceback (most recent call last):
  File "/snap/ubuntu-desktop-bootstrap/366/bin/subiquity/subiquity/common/api/server.py", line 165, in handler
    result = await implementation(**args)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/snap/ubuntu-desktop-bootstrap/366/bin/subiquity/subiquity/server/controllers/filesystem.py", line 1529, in v2_edit_partition_POST
    raise ValueError(
ValueError: edit_partition does not support changing partition name
2025-06-08 04:13:38,139 DEBUG subiquity.common.errorreport:386 generating crash report
2025-06-08 04:13:38,140 INFO subiquity.common.errorreport:412 saving crash report 'request to /storage/v2/edit_partition crashed with ValueError' to /var/crash/1749356018.139873981.server_request_fail.crash
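
If the 500 error above is indeed tripped by the pre-existing GPT partition name ("Linux filesystem"), one workaround that might be worth testing (an unverified assumption on my part, not a confirmed fix) is blanking the GPT name before launching the installer:

sudo sgdisk --change-name=3:"" /dev/nvme0n1   # clear the GPT partition name on partition 3
sudo partprobe /dev/nvme0n1                   # have the kernel re-read the partition table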

Revision history for this message
Paddy Landau (paddy-landau) wrote :

I believe (please correct me if I'm wrong) that these four bugs are all actually about the same error, and should be considered duplicates:

Bug #2014977
Bug #2058511
Bug #2060678
Bug #2065236 (this bug)

This bug is the best worded of the four, as it applies to the overarching problem rather than specifically LVM and LUKS.
