Encrypted swap won't load on 20.04 with zfs root

Bug #1875577 reported by John Gray on 2020-04-28
This bug affects 8 people
Affects: zfs-linux (Ubuntu)
Importance: Low
Assigned to: Colin Ian King

Bug Description

root@eu1:/var/log# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04 LTS
Release: 20.04
Codename: focal

root@eu1:/var/log# apt-cache policy cryptsetup
cryptsetup:
  Installed: (none)
  Candidate: 2:2.2.2-3ubuntu2
  Version table:
     2:2.2.2-3ubuntu2 500
        500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

OTHER BACKGROUND INFO:
======================

1. machine has 2 drives. each drive is partitioned into 2 partitions, zfs and swap

2. Ubuntu 20.04 installed on ZFS root using debootstrap (debootstrap_1.0.118ubuntu1_all)

3. The ZFS root pool is a 2 partition mirror (the first partition of each disk)

4. /etc/crypttab is set up as follows:

swap /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802914-part2 /dev/urandom swap,cipher=aes-xts-plain64,size=256
swap /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802933-part2 /dev/urandom swap,cipher=aes-xts-plain64,size=256

WHAT I EXPECTED
===============

I expected machine would reboot and have encrypted swap that used two devices under /dev/mapper

WHAT HAPPENED INSTEAD
=====================

On reboot, swap setup fails with the following messages in /var/log/syslog:

Apr 28 17:13:01 eu1 kernel: [ 5.360793] systemd[1]: cryptsetup.target: Found ordering cycle on <email address hidden>/start
Apr 28 17:13:01 eu1 kernel: [ 5.360795] systemd[1]: cryptsetup.target: Found dependency on systemd-random-seed.service/start
Apr 28 17:13:01 eu1 kernel: [ 5.360796] systemd[1]: cryptsetup.target: Found dependency on zfs-mount.service/start
Apr 28 17:13:01 eu1 kernel: [ 5.360797] systemd[1]: cryptsetup.target: Found dependency on zfs-load-module.service/start
Apr 28 17:13:01 eu1 kernel: [ 5.360798] systemd[1]: cryptsetup.target: Found dependency on cryptsetup.target/start
Apr 28 17:13:01 eu1 kernel: [ 5.360799] systemd[1]: cryptsetup.target: Job <email address hidden>/start deleted to break ordering cycle starting with cryptsetup.target/start
. . . . . .
Apr 28 17:13:01 eu1 kernel: [ 5.361082] systemd[1]: Unnecessary job for /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802914-part2 was removed

Also, /dev/mapper does not contain any swap devices:

root@eu1:/var/log# ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Apr 28 17:13 control
root@eu1:/var/log#

And top shows no swap:

MiB Swap: 0.0 total, 0.0 free, 0.0 used. 63153.6 avail Mem

Steve Langasek (vorlon) wrote :

I don't know exactly why this manifests as a dependency loop, but your /etc/crypttab is certainly wrong; the first column of /etc/crypttab is the target device name, and you cannot have two separate source encrypted devices map to the same decrypted device name.

If you give the two devices separate names (e.g. swap1, swap2), does this work for you?

If not this should probably be reassigned to systemd, since those systemd units are created by a systemd generator and not by the cryptsetup package.
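
For reference, a corrected /etc/crypttab with distinct target names, along the lines Steve suggests, might look like this (device paths copied from the report; untested sketch):

```
swap1 /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802914-part2 /dev/urandom swap,cipher=aes-xts-plain64,size=256
swap2 /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802933-part2 /dev/urandom swap,cipher=aes-xts-plain64,size=256
```

Each line would then map to its own decrypted device, /dev/mapper/swap1 and /dev/mapper/swap2.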

Changed in cryptsetup (Ubuntu):
status: New → Incomplete
John Gray (johngray) wrote :

Thanks, and sorry about that error. I corrected the crypttab to use swap1 and swap2 as suggested, but still got a similar error message:

Apr 30 18:06:38 eu1 kernel: [ 5.786143] systemd[1]: systemd-random-seed.service: Found ordering cycle on zfs-mount.service/start
Apr 30 18:06:38 eu1 kernel: [ 5.789051] systemd[1]: systemd-random-seed.service: Found dependency on zfs-load-module.service/start
Apr 30 18:06:38 eu1 kernel: [ 5.796988] systemd[1]: systemd-random-seed.service: Found dependency on cryptsetup.target/start
Apr 30 18:06:38 eu1 kernel: [ 5.806521] systemd[1]: systemd-random-seed.service: Found dependency on <email address hidden>/start
Apr 30 18:06:38 eu1 kernel: [ 5.815563] systemd[1]: systemd-random-seed.service: Found dependency on systemd-random-seed.service/start
Apr 30 18:06:38 eu1 kernel: [ 5.824697] systemd[1]: systemd-random-seed.service: Job zfs-mount.service/start deleted to break ordering cycle starting with systemd-random-seed.service/start

affects: cryptsetup (Ubuntu) → systemd (Ubuntu)
Steve Langasek (vorlon) on 2020-05-01
Changed in systemd (Ubuntu):
status: Incomplete → New
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in systemd (Ubuntu):
status: New → Confirmed
Dan Streetman (ddstreet) on 2020-05-05
affects: systemd (Ubuntu) → zfs-linux (Ubuntu)
Dan Streetman (ddstreet) wrote :

This isn't a systemd bug, it's a bug in zfs's zfs-mount.service, added in this upstream commit:
https://github.com/openzfs/zfs/commit/8ae8b2a1445bcccee1bb8ee7d4886f30050f6f53

That orders zfs-mount.service *before* systemd-random-seed.service, while systemd's generator for cryptsetup orders *after* systemd-random-seed.service:
https://github.com/systemd/systemd/blob/master/src/cryptsetup/cryptsetup-generator.c#L193

zfs-mount.service is then caught in a dependency loop:
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found ordering cycle on zfs-import.target/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on zfs-import-cache.service/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on cryptsetup.target/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on <email address hidden>/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on systemd-random-seed.service/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on zfs-mount.service/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Job zfs-import.target/start deleted to break ordering cycle starting with zfs-mount>
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found ordering cycle on zfs-load-module.service/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on cryptsetup.target/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on <email address hidden>/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on systemd-random-seed.service/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on zfs-mount.service/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Job zfs-load-module.service/start deleted to break ordering cycle starting with zfs>

@didrocks, I think you added the zfs change upstream, maybe it needs to be adjusted to handle it some other way? In general, services shouldn't order themselves before systemd-random-seed without very careful consideration.
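
Paraphrasing the ordering constraints involved (directive placement is approximate and exact unit contents vary by version), the cycle looks like:

```
systemd-cryptsetup@swap.service  After=systemd-random-seed.service      # cryptsetup-generator, for /dev/urandom keys
systemd-random-seed.service      After=zfs-mount.service                # implied by the Before= in the OpenZFS commit
zfs-mount.service                After=zfs-load-module.service
zfs-load-module.service          After=cryptsetup.target                # Ubuntu's zfs-load-module patch
cryptsetup.target                After=systemd-cryptsetup@swap.service  # -> back to the start
```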

Richard Laager (rlaager) wrote :

This is a tricky one because all of the dependencies make sense in isolation. Even if we remove the dependency added by that upstream OpenZFS commit, given that modern systems use zfs-mount-generator, systemd-random-seed.service is going to Require= and After= var-lib.mount because of its RequiresMountsFor=/var/lib/systemd/random-seed. The generated var-lib.mount will be After=zfs-import.target because you can't mount a filesystem without importing the pool. And zfs-import.target is After= the two zfs-import-* services. Those are after cryptsetup.target, as you might be running your pool on top of LUKS.

Mostly side note: it does seem weird and unnecessary that zfs-load-module.service has After=cryptsetup.target. We should probably remove that. That is coming from debian/patches/2100-zfs-load-module.patch (which is what provides zfs-load-module.service in its entirety).

One idea here would be to eliminate the After=cryptsetup.target from zfs-import-{cache,scan}.service and require that someone add them via a drop-in if they are running on LUKS. However, in that case, they'll run into the same problem anyway. So that's not really a fix.
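
For what it's worth, such a drop-in for LUKS users would look something like this (hypothetical path and contents):

```
# /etc/systemd/system/zfs-import-cache.service.d/luks.conf
[Unit]
Wants=cryptsetup.target
After=cryptsetup.target
```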

Another option might be to remove the zfs-mount.service Before=systemd-random-seed.service and effectively require the use of the mount generator for Root-on-ZFS setups. That is what the Ubuntu installer does and what the Root-on-ZFS HOWTO will use for 20.04 anyway. (I'm working on it actively right now.) Then, modify zfs-mount-generator to NOT After=zfs-import.target (and likewise for Wants=) if the relevant pool is already imported (and likewise for the zfs-load-key- services). Since the rpool will already be imported by the time zfs-mount-generator runs, that would be omitted.
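
As a rough sketch of that generator change (hypothetical shell, not the actual patch): the generator would probe whether the pool is already imported and emit the import dependencies only when it is not.

```shell
# Sketch only: decide whether a generated mount unit needs zfs-import.target.
# "pool" would come from the dataset the generator is processing (assumption).
pool="rpool"
if zpool list -H -o name "$pool" >/dev/null 2>&1; then
    # Pool already imported at generator time (e.g. the root pool):
    # omit the import ordering, so no cycle through cryptsetup.target forms.
    after=""
    wants=""
else
    after="After=zfs-import.target"
    wants="Wants=zfs-import.target"
fi
printf '%s\n%s\n' "$wants" "$after"
```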

I've attached an *untested* patch to that effect. I hope to test this yet tonight as I test more Root-on-ZFS scenarios, but no promises.

Richard Laager (rlaager) wrote :

John Gray: Everything else aside, you should mirror your swap instead of striping it (which I think is what you're doing). With your current setup, if a disk dies, your system will crash.

The attachment "2150-fix-systemd-dependency-loops.patch" seems to be a patch. If it isn't, please remove the "patch" flag from the attachment, remove the "patch" tag, and if you are a member of the ~ubuntu-reviewers, unsubscribe the team.

[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issues please contact him.]

tags: added: patch
Richard Laager (rlaager) wrote :

I didn't get a chance to test the patch. I'm running into unrelated issues.

Didier Roche (didrocks) wrote :

Your patch makes sense, Richard, and I think it will be a good upstream candidate. Of all the approaches you proposed, this is my preferred one because it is the most flexible, IMHO.

Tell me when you get a chance to test it. And John, maybe you can confirm this fixes it for you?

John Gray (johngray) wrote :

Hi Richard and Didier, thanks - I have set up encrypted swap on mdraid1 instead. It works but is subject to the same cycle issue - sometimes swap doesn't load, sometimes the boot zfs pool won't mount.

I went to apply the patch, but my system doesn't seem to have the two files that are referenced,
etc/systemd/system-generators/zfs-mount-generator.in
and
/etc/systemd/system/zfs-mount.service.in
and so the patch command failed.

I have probably misunderstood something, sorry... I'm way out of my depth here. FWIW here are the contents of the directories for the missing files:

root@eu1 ~ ls -l /etc/systemd
total 94
-rw-r--r-- 1 root root 1042 Apr 22 19:04 journald.conf
-rw-r--r-- 1 root root 1042 Apr 22 19:04 logind.conf
drwxr-xr-x 2 root root 2 Apr 22 19:04 network
-rw-r--r-- 1 root root 584 Apr 2 04:23 networkd.conf
-rw-r--r-- 1 root root 529 Apr 2 04:23 pstore.conf
-rw-r--r-- 1 root root 634 Apr 22 19:04 resolved.conf
-rw-r--r-- 1 root root 790 Apr 2 04:23 sleep.conf
drwxr-xr-x 15 root root 21 May 3 13:08 system
-rw-r--r-- 1 root root 1759 Apr 22 19:04 system.conf
-rw-r--r-- 1 root root 604 Apr 22 19:04 timesyncd.conf
drwxr-xr-x 3 root root 3 May 2 18:24 user
-rw-r--r-- 1 root root 1185 Apr 22 19:04 user.conf

(system-generators is missing)

root@eu1 ~ ls -l /etc/systemd/system/
total 90
lrwxrwxrwx 1 root root 44 Apr 27 17:56 dbus-org.freedesktop.resolve1.service -> /lib/systemd/system/systemd-resolved.service
lrwxrwxrwx 1 root root 36 Apr 27 18:15 dbus-org.freedesktop.thermald.service -> /lib/systemd/system/thermald.service
lrwxrwxrwx 1 root root 45 Apr 27 17:56 dbus-org.freedesktop.timesync1.service -> /lib/systemd/system/systemd-timesyncd.service
drwxr-xr-x 2 root root 3 Apr 27 17:56 default.target.wants
drwxr-xr-x 2 root root 3 Apr 27 18:05 emergency.target.wants
drwxr-xr-x 2 root root 3 Apr 27 17:56 getty.target.wants
drwxr-xr-x 2 root root 4 May 3 13:08 mdmonitor.service.wants
drwxr-xr-x 2 root root 21 Apr 28 19:21 multi-user.target.wants
drwxr-xr-x 2 root root 3 Apr 27 18:05 rescue.target.wants
drwxr-xr-x 2 root root 11 May 3 13:08 sockets.target.wants
lrwxrwxrwx 1 root root 31 Apr 27 18:02 sshd.service -> /lib/systemd/system/ssh.service
drwxr-xr-x 2 root root 10 May 3 13:08 sysinit.target.wants
lrwxrwxrwx 1 root root 35 Apr 27 17:56 syslog.service -> /lib/systemd/system/rsyslog.service
drwxr-xr-x 2 root root 10 May 2 18:29 timers.target.wants
lrwxrwxrwx 1 root root 35 Apr 27 18:05 zed.service -> /lib/systemd/system/zfs-zed.service
drwxr-xr-x 2 root root 3 Apr 27 18:05 zfs-import.target.wants
drwxr-xr-x 2 root root 3 Apr 27 18:05 zfs-mount.service.wants
drwxr-xr-x 2 root root 8 Apr 27 18:05 zfs.target.wants
drwxr-xr-x 2 root root 3 Apr 27 18:05 zfs-volumes.target.wants

(zfs-mount.service.in is missing)

The system was installed with debootstrap:

dpkg --install debootstrap_1.0.118ubuntu1_all.deb
debootstrap --arch=amd64 focal /mnt http://archive.ubuntu.com/ubuntu/

I did not install any zfs packages separately - the system only has what debootstrap gave it:

root@eu1 ~ dpkg -l | grep zfs
ii libzfs2linux 0.8.3-1ubuntu12 amd64 OpenZFS file...


John Gray (johngray) wrote :

...going over my notes again, I actually did install grub-related zfs packages separately from debootstrap:

apt install --yes zfsutils-linux
apt install --yes zfs-initramfs

Didier Roche (didrocks) wrote :

On an installed packaged system, the files are in different directories (and don’t have the .in extension as they have been built with the prefix replacement). Their names and locations are:

/lib/systemd/system/zfs-mount.service
/lib/systemd/system-generators/zfs-mount-generator
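
One way to adapt the patch to those installed locations (hypothetical helper; the path prefixes are taken from the comments above, and this is untested against the real patch file):

```shell
# fix_paths: rewrite the source-tree paths in the patch to the installed
# locations Didier lists above, dropping the .in suffix along the way.
fix_paths() {
    sed -e 's|etc/systemd/system-generators/zfs-mount-generator\.in|lib/systemd/system-generators/zfs-mount-generator|g' \
        -e 's|etc/systemd/system/zfs-mount\.service\.in|lib/systemd/system/zfs-mount.service|g' \
        "$1"
}
# usage: fix_paths 2150-fix-systemd-dependency-loops.patch > fixed.patch
# then apply against the filesystem root (back up the originals first):
#   patch -p1 -d / < fixed.patch
```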

John Gray (johngray) wrote :

I changed the path names in the patch file and it applied. I rebooted and it worked! :-)

May 5 23:06:33 eu1 kernel: [ 6.480412] Adding 135128956k swap on /dev/mapper/md1swap. Priority:-2 extents:1 across:135128956k SSFS

I have all my ZFS filesystems mounted and I have mdraid1 swap. Thanks for all the help!

Didier Roche (didrocks) wrote :

Great to hear John! Thanks for confirming and thanks to Richard for the patch.

I’m happy to SRU it to focal once it’s proposed upstream. (Keep me posted Richard, you can drop a link here and I will monitor)

John Gray (johngray) wrote :

While we are waiting for upstream to include the patch, would this bug report and the attached patch be suitable for me to share on ZFS forums, so other keen Ubuntu zfs-on-root users can have a workaround? Or would that constitute rushing out a fix without review/testing?

Richard Laager (rlaager) wrote :

Can you share a bit more detail about how you have yours set up? What does your partition table look like, what does the MD config look like, what do you have in /etc/fstab for swap, etc.? I'm running into weird issues with this configuration, separate from this bug.

@didrocks: I'll try to get this proposed upstream soon. If you beat me to it, I won't complain. :)

John Gray (johngray) wrote :

>Can you share a bit more details about how you have yours setup?

Sure!

Partitions:

root@eu1 ~ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 9.1T 0 disk
├─sda1 8:1 0 9.1T 0 part
└─sda9 8:9 0 8M 0 part
sdb 8:16 0 9.1T 0 disk
├─sdb1 8:17 0 9.1T 0 part
└─sdb9 8:25 0 8M 0 part
nvme1n1 259:0 0 953.9G 0 disk
├─nvme1n1p1 259:2 0 5G 0 part
├─nvme1n1p2 259:3 0 820G 0 part
└─nvme1n1p3 259:4 0 128.9G 0 part
  └─md1 9:1 0 128.9G 0 raid1
    └─md1swap 253:0 0 128.9G 0 crypt [SWAP]
nvme0n1 259:1 0 953.9G 0 disk
├─nvme0n1p1 259:5 0 5G 0 part
├─nvme0n1p2 259:6 0 820G 0 part
└─nvme0n1p3 259:7 0 128.9G 0 part
  └─md1 9:1 0 128.9G 0 raid1
    └─md1swap 253:0 0 128.9G 0 crypt [SWAP]

(The nvme partitions 1 and 2 are set to type bf (Solaris) using fdisk, and the partition 3s are set to type fd (Linux raid autodetect).)

MD config was set up with the command
DISK1=/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802914
DISK2=/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802933
mdadm --create /dev/md1 --level 1 --raid-disks 2 --metadata 1.0 ${DISK1}-part3 ${DISK2}-part3

Then crypto device was configured:

root@eu1 ~ cat /etc/crypttab
# <name> <device> <password> <options>
md1swap /dev/md1 /dev/urandom swap,cipher=aes-xts-plain64,size=256

And fstab:

root@eu1 ~ cat /etc/fstab
# <filesystem> <dir> <type> <options> <dump> <pass>
/dev/mapper/md1swap none swap defaults 0 0

Then I rebooted to activate those last two.

FWIW, here is how I did the install, according to my notes, in case there is something in there (or missing from there) that makes the difference:

1. It's a Hetzner server, so I did the install from a PXE boot Debian Buster rescue console - basically the same as installing from a live CD command line rather than using the live CD installer

2. set disk variables for use by later commands
DISK1=/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802914
DISK2=/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802933

3. reset and partition disks

    parted --script $DISK1 mklabel msdos mkpart primary 1 5Gib mkpart primary 5GiB 825GiB mkpart primary 825GiB 100% set 1 boot on

    parted --script $DISK2 mklabel msdos mkpart primary 1 5Gib mkpart primary 5GiB 825GiB mkpart primary 825GiB 100% set 1 boot on

    fdisk $DISK1
      t
      1
      bf
      t
      2
      bf
      t
      3
      fd
      w

    fdisk $DISK2
      t
      1
      bf
      t
      2
      bf
      t
      3
      fd
      w

    fdisk -l
      check:
      DiskLabel type msdos
      2048 sector alignment
      bootable flag set on partition 1
      size is correct
      Type = Solaris, Solaris, Linux Raid Auto

4. create ZFS pools and datasets

  root pool
    zpool create -R /mnt -O mountpoint=none -f -o ashift=13 ssd mirror ${DISK1}-part2 ${DISK2}-part2

  root dataset
    zfs create -o acltype=posixacl -o compression=lz4 -o normalization=formD -o relatime=on -o dnodesize=auto -o xattr=sa ...

Richard Laager (rlaager) wrote :

I have submitted this upstream:
https://github.com/openzfs/zfs/pull/10388


Thanks! :)


Colin Ian King (colin-king) wrote :

Looks like the merge request is currently blocked. Can somebody ping me when it's merged? I'll try to get this into Ubuntu 20.10 and SRU'd into 20.04.

Changed in zfs-linux (Ubuntu):
importance: Undecided → Low
assignee: nobody → Colin Ian King (colin-king)
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package zfs-linux - 0.8.4-1ubuntu5

---------------
zfs-linux (0.8.4-1ubuntu5) groovy; urgency=medium

  [ Jean-Baptiste Lallement ]
  [ Didier Roche ]
  * debian/patches/4000-zsys-support.patch:
    - Readd ZSys support erroneously dropped during previous debian merge.
    - Add external Luks store encryption support.
  * debian/patches/git_fix_dependency_loop_encryption{1,2}.patch:
    - Backport upstream patches (PR accepted, not merged yet) fixing a
      dependency loop when encryption via Luks (like swap) is enabled.
      (LP: #1875577)

 -- Didier Roche <email address hidden> Wed, 10 Jun 2020 10:48:19 +0200

Changed in zfs-linux (Ubuntu):
status: Confirmed → Fix Released
Guido Berhoerster (gber) wrote :

Could you please backport this to focal, too?

Didier Roche (didrocks) wrote :

The patch doesn’t fix all instances of the bug (see upstream report linked above). I think we should clarify that before backporting it.

Any updates? I am having this issue and I don't even have root on ZFS.
Also, it does succeed eventually: the log reports the cycle, followed by "Job cryptsetup.target/start deleted to break ordering cycle", and then booting continues and the swap loads later on.
