request to /storage/v2/guided crashed with AttributeError ('FilesystemController' object has no attribute 'delete_zpool')

Bug #2038856 reported by Mike Ferreira
This bug affects 2 people
Affects: subiquity
Status: Triaged
Importance: Undecided
Assigned to: Unassigned

Bug Description

This may be multi-faceted. It happened after, in the "Try" Desktop session, Text Editor would not open / display files... and Xorg crashed. That crash is valid, but for some reason will not submit. That crash would not open apport, then the installer crashed on Advanced: ZFS... so I do not know if this is really valid or not. The previous chain of events left the system unstable.

But since I am testing the dailies, I submitted this subsequent bug anyway.

ProblemType: Bug
DistroRelease: Ubuntu 23.10
Package: subiquity (unknown)
ProcVersionSignature: Ubuntu 6.5.0-5.5-generic 6.5.0
Uname: Linux 6.5.0-5-generic x86_64
NonfreeKernelModules: zfs
ApportVersion: 2.27.0-0ubuntu4
Architecture: amd64
CasperMD5CheckResult: pass
CasperVersion: 1.486
CloudArchitecture: x86_64
CloudID: nocloud
CloudName: unknown
CloudPlatform: nocloud
CloudSubPlatform: seed-dir (/var/lib/cloud/seed/nocloud)
Date: Mon Oct 9 17:48:31 2023
DesktopInstallerRev: 1247
ExecutablePath: /snap/ubuntu-desktop-installer/1247/bin/subiquity/subiquity/cmd/server.py
InterpreterPath: /snap/ubuntu-desktop-installer/1247/usr/bin/python3.10
LiveMediaBuild: Ubuntu 23.10 "Mantic Minotaur" - Daily amd64 (20231004)
Lsusb:
 Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
 Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd QEMU USB Tablet
 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Lsusb-t:
 /: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 5000M
 /: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 480M
     |__ Port 1: Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 480M
MachineType: QEMU Standard PC (Q35 + ICH9, 2009)
ProcAttrCurrent: snap.hostname-desktop-installer.subiquity-server (complain)
ProcCmdline: /snap/hostname-desktop-installer/1247/usr/bin/python3.10 -m subiquity.cmd.server --use-os-prober --storage-version=2 --postinst-hooks-dir=/snap/hostname-desktop-installer/1247/etc/subiquity/postinst.d
ProcEnviron:
 LANG=en_US.UTF-8
 LD_LIBRARY_PATH=<set>
 PATH=(custom, no user)
ProcKernelCmdLine: BOOT_IMAGE=/casper/vmlinuz layerfs-path=minimal.standard.live.squashfs --- quiet splash
Python3Details: /usr/bin/python3.11, Python 3.11.5, python3-minimal, 3.11.4-5
PythonDetails: N/A
SnapChannel: latest/stable
SnapRevision: 1247
SnapUpdated: False
SnapVersion: 0+git.4e5302e1
SourcePackage: subiquity
Title: request to /storage/v2/guided crashed with AttributeError
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.date: 02/06/2015
dmi.bios.release: 0.0
dmi.bios.vendor: EFI Development Kit II / OVMF
dmi.bios.version: 0.0.0
dmi.chassis.type: 1
dmi.chassis.vendor: QEMU
dmi.chassis.version: pc-q35-6.2
dmi.modalias: dmi:bvnEFIDevelopmentKitII/OVMF:bvr0.0.0:bd02/06/2015:br0.0:svnQEMU:pnStandardPC(Q35+ICH9,2009):pvrpc-q35-6.2:cvnQEMU:ct1:cvrpc-q35-6.2:sku:
dmi.product.name: Standard PC (Q35 + ICH9, 2009)
dmi.product.version: pc-q35-6.2
dmi.sys.vendor: QEMU

Dan Bungert (dbungert)
summary: request to /storage/v2/guided crashed with AttributeError
+ (FilesystemController' object has no attribute 'delete_zpool)
Revision history for this message
Mike Ferreira (mafoelffen) wrote (last edit ):

Oh well. This is valid. Restarted the installer. Upgraded to the latest git, 0+git.bb3cfa22. Chose ZFS. On proceeding, got the same crash error.

If I reboot the image, start the installer, and choose the same options, skipping the step of updating the installer to the newest package, then it proceeds successfully. That was 0+git.9403be42.

So the crash is associated with the new 0+git.bb3cfa22. The installer version on the ISO is good. The current git is broken.

Revision history for this message
Dan Bungert (dbungert) wrote :

Hi Mike, thanks for the report.

I'm investigating this bug today. I haven't been able to make it happen yet after a few tries with various configurations, including using your listed version. From the logs it has something to do with multiple installs, as there is an existing zpool that is being wiped out for the new install.

If you have further knowledge you can share about the exact filesystem choices, that would help. I would also appreciate a raw copy of the logs - some of the final info that I want got cut off at the time of the report. If you were using a USB key to run the installer, the logs may already be present on the 4th partition.

Changed in subiquity:
status: New → Incomplete
Revision history for this message
Dan Bungert (dbungert) wrote :

To clarify the "raw copy" comment: the logs here look like they cut off some of the info, so a tarball of them, either from /var/log/installer or the existing copy or copies that might be on your USB key, would be beneficial.
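
A minimal sketch of gathering such a tarball, assuming the logs are still at /var/log/installer in the running session and that the USB key shows up as /dev/sdX (both placeholders; the exact device and layout on the key may differ):
>>>
# from the running live/installed session
sudo tar -czf ~/installer-logs.tar.gz /var/log/installer

# or locate the copies on the key's writable partition (hypothetical device name)
sudo mount /dev/sdX4 /mnt
sudo find /mnt -type d -path '*var/log/installer'
>>>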

Revision history for this message
Mike Ferreira (mafoelffen) wrote (last edit ):

I'll recreate it after I get some coffee. Just woke up. The cat is demanding when he thinks it's time for him to be fed!

I had a successful install test after that, so I will delete what is on there and zap the disk before recreating this.

***
What would you like in what form? I recreated it and have the VM paused.

Yes, you were right. I was researching where ubuntu-desktop-installer keeps track of the information, vars, and assertions... Where it fails, and how to recreate it:
Started the installer. Upgraded to the new git, which restarted the installer. Selected "Erase All: Advanced-- Experimental ZFS" > Next > Confirmed the partitioning / Install.

Interrupted the installer, by closing it, during the Locale/TZ panel. By that time, the scripts had already partitioned the drive from "wherever" and created the pools and datasets on the disk.

Note: That "where from" is what I am looking for, which I still do not know, nor has anyone been able to tell me which partitioner Flutter is actually using to do that... I have questions on both of those, at the ubuntu-desktop-installer GitHub and here at Launchpad... I want to be able to intercept and manually change / tweak the sizes of the EFI & boot/bpool partitions during a ZFS install. (Users are wanting that ability.)

Continuing-- I restarted the installer from the desktop... Selected the same options. At "Erase All" no options were detected yet, so I had to reselect "ZFS", next, confirm... Crash, which translates to: the pools already existed... It crashed where it could not zpool destroy (delete or erase) the existing pools already on the disk.

I guess the thing to do there (logically), in Erase All > Advanced = ZFS, before writing to disk, would be: check if pools already exist on the disk. If they do, zap the disk and add a new partition table. Then write the new layout.
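
Roughly the manual equivalent of that zap-and-repartition step, with a placeholder disk name, would be:
>>>
# destroy any existing GPT/MBR structures, then write a fresh empty GPT
sudo sgdisk --zap-all /dev/sdX
sudo sgdisk -o /dev/sdX
>>>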

The more detailed way would be to add logic to identify the existing pools and use zpool destroy to delete them.
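
Approximately, that corresponds to the following manual steps (the pool names rpool/bpool, the disk name, and the partition numbers are assumptions based on a typical Ubuntu ZFS layout; the installer would do the equivalent through its own storage code):
>>>
# list pools that are importable from the target disk
sudo zpool import -d /dev/sdX

# import each leftover pool without mounting it, then destroy it
sudo zpool import -N -f -d /dev/sdX rpool
sudo zpool destroy rpool
sudo zpool import -N -f -d /dev/sdX bpool
sudo zpool destroy bpool

# clear any remaining ZFS labels on the old pool partitions
sudo zpool labelclear -f /dev/sdX3
sudo zpool labelclear -f /dev/sdX4
>>>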

That is what I would do for that condition. Zapping the whole disk, since "Erase All" implies starting over, seems to me the easiest way to get around that. But I do not maintain or contribute to that code.

In the Blueprints, we are told that that logic is going to be added to the "Manual Partitioning" option, inside the new partitioner, to be able to manually tweak for ZFS... I don't see that existing yet. I was hoping it would be there in time for the 24.04 dev cycle to start testing on it... I am committed to making that work for Ubuntu. I have stagnant bugs filed on that since 23.04.

Dan Bungert (dbungert)
Changed in subiquity:
status: Incomplete → Triaged
Revision history for this message
Mike Ferreira (mafoelffen) wrote (last edit ):

I just came back to this to check on it. I see it is stalled, but the problem still exists... This is still broken in the Noble LTS ISO... What do you need to get this going again?

My workaround/recommendation to people is to manually zero the disk:
>>>
# substitute the target disk, e.g. /dev/sdX or /dev/nvme0nX
DISK=<unique_disk_name>
# discard every block on the device (works on devices that support discard, e.g. SSD/NVMe)
sudo blkdiscard -f $DISK
# erase all filesystem, RAID, and ZFS signatures
sudo wipefs -a $DISK
# destroy the GPT and MBR partition table structures
sudo sgdisk --zap-all $DISK
>>>
That works for pre-existing ZFS, LVM, and mdadm.
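
To confirm the disk actually came back clean, something like the following should report nothing left over (same $DISK placeholder as above):
>>>
# should print no remaining signatures
sudo wipefs $DISK
# should report no pools available to import from this device
sudo zpool import -d $DISK
# should show only the bare disk, with no partitions
lsblk $DISK
>>>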

information type: Private → Public
Revision history for this message
Rick S (1fallen) wrote :

As Mike has stated, the only way a 24.04 Noble ZFS-root install would complete without a crash is to wipe the drive, with no partition format. Just a blank drive.

Revision history for this message
Ubuntu QA Website (ubuntuqa) wrote :

This bug has been reported on the Ubuntu ISO testing tracker.

A list of all reports related to this bug can be found here:
http://iso.qa.ubuntu.com/qatracker/reports/bugs/2038856

tags: added: iso-testing