Update of zfs-linux fails
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
zfs-linux (Ubuntu) | Expired | Undecided | Unassigned |
Bug Description
I use VirtualBox for Ubuntu 19.10. I have an installation with two disks: one to boot from ZFS and one to boot from ext4. The latter also has ZFS installed, but the update of ZFS failed with many "directory not empty" messages. I'm 74, but still learning, and a day ago I learned that initramfs could take care of the mount, so I changed all canmount=on to canmount=noauto. Afterwards the update of ZFS succeeded :)
When I booted again from the ZFS disk, the boot went OK, but I ended up in a login loop. The login loop disappeared when I set canmount=on again for the user dataset. But that dataset was one of those named in the error messages when the ZFS update failed :(
So I see two issues to be solved for 19.10 and 20.04:
- for rpool/ROOT: use canmount=noauto instead of canmount=on
- for rpool/USERDATA: good luck, since canmount=on is needed for the ZFS system while canmount=off or noauto is needed for the other system.
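The canmount change described above can be sketched as a dry run that only prints the `zfs set` commands it would issue, so they can be reviewed before anything is applied. The dataset names here are hypothetical examples, not taken from this report:

```shell
#!/bin/sh
# Print (do not run) the `zfs set` commands that would switch the
# given datasets from canmount=on to canmount=noauto.
# Review the output, then pipe it to `sh` if it looks right.
print_canmount_noauto() {
    for ds in "$@"; do
        echo "zfs set canmount=noauto $ds"
    done
}

# Hypothetical dataset names -- adjust to your own pool layout:
print_canmount_noauto rpool/ROOT rpool/ROOT/ubuntu
```

Printing instead of executing keeps the sketch safe to run on any machine, with or without ZFS installed.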
Some time ago I created another user on that ZFS system, a user without a dataset of their own, and that user did not suffer from a login loop. That user can be found in /home/user2, user2 being a normal folder; I hope that can be used.
ProblemType: Bug
DistroRelease: Ubuntu 19.10
Package: zfsutils-linux 0.8.1-1ubuntu14.1
ProcVersionSign
Uname: Linux 5.3.0-23-generic x86_64
NonfreeKernelMo
ApportVersion: 2.20.11-0ubuntu8.2
Architecture: amd64
CurrentDesktop: ubuntu:GNOME
Date: Sat Nov 16 09:58:03 2019
InstallationDate: Installed on 2019-04-26 (203 days ago)
InstallationMedia: Ubuntu 19.10 "Eoan EANIMAL" - Alpha amd64 (20190425)
SourcePackage: zfs-linux
UpgradeStatus: No upgrade log present (probably fresh install)
tags: added: zfs
Changed in zfs-linux (Ubuntu):
status: New → Incomplete
Which specific filesystems are failing to mount?
Typically, this situation occurs because something is misconfigured: the mount fails, and files end up inside what should otherwise be empty mountpoint directories. Then, even once the original problem is fixed, the non-empty directories prevent ZFS from mounting over them. We already know you had such an underlying issue, so there is a high likelihood that this is what is happening here.
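One way to check for that condition is to list anything left behind under a mountpoint directory; any output means ZFS will refuse to mount a dataset there with "directory is not empty". This is a minimal sketch in plain shell (no zfs calls needed), demonstrated on a temporary directory standing in for a real mountpoint:

```shell
#!/bin/sh
# List leftover entries directly under a supposed-to-be-empty
# mountpoint directory. Non-empty output means a ZFS mount on
# this path would fail with "directory is not empty".
leftovers() {
    find "$1" -mindepth 1 -maxdepth 1
}

# Self-contained demo: a temp dir stands in for a real mountpoint
# such as a user's home dataset path.
d=$(mktemp -d)
touch "$d/stale-file"
leftovers "$d"      # prints the stale entry blocking the mount
rm -rf "$d"
```

On a real system you would point `leftovers` at the dataset's mountpoint (as shown by `zfs get mountpoint`) and move the listed files aside before retrying the mount.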
I'm on mobile now, but try something like:

zfs get -r canmount,mountpoint,mounted POOLNAME