Encrypted swap won't load on 20.04 with zfs root

Bug #1875577 reported by John Gray
This bug affects 10 people
Affects              Status        Importance  Assigned to                 Milestone
zfs-linux (Ubuntu)   Fix Released  Low         Colin Ian King
Focal                Fix Released  Medium      Heitor Alves de Siqueira

Bug Description

[Impact]
Encrypted swap partitions may not load correctly with ZFS root, due to an ordering cycle on zfs-mount.service.

[Test Plan]
1. Install Ubuntu 20.04 using ZFS-on-root
2. Add encrypted partition to /etc/crypttab:
   swap /dev/nvme0n1p1 /dev/urandom swap,cipher=aes-xts-plain64,size=256
3. Add swap partition to /etc/fstab:
   /dev/mapper/swap none swap sw 0 0
4. Reboot and check whether swap has loaded correctly, and whether the boot logs show an ordering cycle like the following:
[ 6.638228] systemd[1]: systemd-random-seed.service: Found ordering cycle on zfs-mount.service/start
[ 6.639418] systemd[1]: systemd-random-seed.service: Found dependency on zfs-import.target/start
[ 6.640474] systemd[1]: systemd-random-seed.service: Found dependency on zfs-import-cache.service/start
[ 6.641637] systemd[1]: systemd-random-seed.service: Found dependency on cryptsetup.target/start
[ 6.642734] systemd[1]: systemd-random-seed.service: Found dependency on <email address hidden>/start
[ 6.643951] systemd[1]: systemd-random-seed.service: Found dependency on systemd-random-seed.service/start
[ 6.645098] systemd[1]: systemd-random-seed.service: Job zfs-mount.service/start deleted to break ordering cycle starting with systemd-random-seed.service/start
[ SKIP ] Ordering cycle found, skipping Mount ZFS filesystems
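
A quick way to verify step 4 without reading the full boot log (standard commands; the mapping name "swap" comes from the crypttab entry above):

    sudo journalctl -b | grep -i "ordering cycle"   # should print nothing on a fixed system
    swapon --show                                   # the encrypted swap device should be listed
    ls -l /dev/mapper                               # should show the "swap" mapping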

[Where problems could occur]
Since we're changing the zfs-mount-generator service, regressions could show up during the mounting of ZFS partitions. We should thoroughly test different ZFS scenarios, such as ZFS-on-root, separate ZFS partitions, and the presence of swap, to make sure all partitions are mounted correctly and no ordering cycles are present.

Below is a list of suggested test scenarios that we should check for regressions:
1. ZFS-on-root + encrypted swap (see "Test Plan" section above)
2. Encrypted root + separate ZFS partitions
3. ZFS on LUKS
4. ZFS on dm-raid

Although scenario 4 is usually advised against (ZFS itself should handle RAID), it's a good smoke test to validate that mount order is being handled correctly.
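
For scenario 3, a minimal ZFS-on-LUKS setup sketch (the device, mapping, and pool names are illustrative, not taken from this report):

    sudo cryptsetup luksFormat /dev/vdb1
    sudo cryptsetup open /dev/vdb1 zfscrypt
    sudo zpool create zfspool /dev/mapper/zfscrypt

After a reboot (with a matching /etc/crypttab entry), the pool should import and mount with no "ordering cycle" messages in the journal.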

[Other Info]
This has been fixed upstream by the following commits:
* ec41cafee1da Fix a dependency loop [0]
* 62663fb7ec19 Fix another dependency loop [1]

The patches above were introduced in version 2.1.0, with upstream backports to zfs-2.0. In Ubuntu, the fix is present in Groovy and later releases, so it's still needed in Focal.

$ rmadison -a source zfs-linux
 zfs-linux | 0.8.3-1ubuntu12 | focal | source
 zfs-linux | 0.8.3-1ubuntu12.9 | focal-security | source
 zfs-linux | 0.8.3-1ubuntu12.10 | focal-updates | source
 zfs-linux | 0.8.4-1ubuntu11 | groovy | source
 zfs-linux | 0.8.4-1ubuntu11.2 | groovy-updates | source
 zfs-linux | 2.0.2-1ubuntu5 | hirsute | source
 zfs-linux | 2.0.3-8ubuntu5 | impish | source

[0] https://github.com/openzfs/zfs/commit/ec41cafee1da
[1] https://github.com/openzfs/zfs/commit/62663fb7ec19

ORIGINAL DESCRIPTION
====================

root@eu1:/var/log# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04 LTS
Release: 20.04
Codename: focal

root@eu1:/var/log# apt-cache policy cryptsetup
cryptsetup:
  Installed: (none)
  Candidate: 2:2.2.2-3ubuntu2
  Version table:
     2:2.2.2-3ubuntu2 500
        500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

OTHER BACKGROUND INFO:
======================

1. machine has 2 drives. each drive is partitioned into 2 partitions, zfs and swap

2. Ubuntu 20.04 installed on ZFS root using debootstrap (debootstrap_1.0.118ubuntu1_all)

3. The ZFS root pool is a 2 partition mirror (the first partition of each disk)

4. /etc/crypttab is set up as follows:

swap /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802914-part2 /dev/urandom swap,cipher=aes-xts-plain64,size=256
swap /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802933-part2 /dev/urandom swap,cipher=aes-xts-plain64,size=256

WHAT I EXPECTED
===============

I expected machine would reboot and have encrypted swap that used two devices under /dev/mapper

WHAT HAPPENED INSTEAD
=====================

On reboot, swap setup fails with the following messages in /var/log/syslog:

Apr 28 17:13:01 eu1 kernel: [ 5.360793] systemd[1]: cryptsetup.target: Found ordering cycle on <email address hidden>/start
Apr 28 17:13:01 eu1 kernel: [ 5.360795] systemd[1]: cryptsetup.target: Found dependency on systemd-random-seed.service/start
Apr 28 17:13:01 eu1 kernel: [ 5.360796] systemd[1]: cryptsetup.target: Found dependency on zfs-mount.service/start
Apr 28 17:13:01 eu1 kernel: [ 5.360797] systemd[1]: cryptsetup.target: Found dependency on zfs-load-module.service/start
Apr 28 17:13:01 eu1 kernel: [ 5.360798] systemd[1]: cryptsetup.target: Found dependency on cryptsetup.target/start
Apr 28 17:13:01 eu1 kernel: [ 5.360799] systemd[1]: cryptsetup.target: Job <email address hidden>/start deleted to break ordering cycle starting with cryptsetup.target/start
. . . . . .
Apr 28 17:13:01 eu1 kernel: [ 5.361082] systemd[1]: Unnecessary job for /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802914-part2 was removed

Also, /dev/mapper does not contain any swap devices:

root@eu1:/var/log# ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Apr 28 17:13 control
root@eu1:/var/log#

And top shows no swap:

MiB Swap: 0.0 total, 0.0 free, 0.0 used. 63153.6 avail Mem

Revision history for this message
Steve Langasek (vorlon) wrote :

I don't know exactly why this manifests as a dependency loop, but your /etc/crypttab is certainly wrong; the first column of /etc/crypttab is the target device name, and you cannot have two separate source encrypted devices map to the same decrypted device name.

If you give the two devices separate names (e.g. swap1, swap2), does this work for you?
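
With the devices from the report, the corrected /etc/crypttab would read:

    swap1 /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802914-part2 /dev/urandom swap,cipher=aes-xts-plain64,size=256
    swap2 /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802933-part2 /dev/urandom swap,cipher=aes-xts-plain64,size=256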

If not this should probably be reassigned to systemd, since those systemd units are created by a systemd generator and not by the cryptsetup package.

Changed in cryptsetup (Ubuntu):
status: New → Incomplete
Revision history for this message
John Gray (johngray) wrote :

Thanks, and sorry about that error. I corrected the crypttab to swap1 and swap2 as suggested, but still got a similar error message:

Apr 30 18:06:38 eu1 kernel: [ 5.786143] systemd[1]: systemd-random-seed.service: Found ordering cycle on zfs-mount.service/start
Apr 30 18:06:38 eu1 kernel: [ 5.789051] systemd[1]: systemd-random-seed.service: Found dependency on zfs-load-module.service/start
Apr 30 18:06:38 eu1 kernel: [ 5.796988] systemd[1]: systemd-random-seed.service: Found dependency on cryptsetup.target/start
Apr 30 18:06:38 eu1 kernel: [ 5.806521] systemd[1]: systemd-random-seed.service: Found dependency on <email address hidden>/start
Apr 30 18:06:38 eu1 kernel: [ 5.815563] systemd[1]: systemd-random-seed.service: Found dependency on systemd-random-seed.service/start
Apr 30 18:06:38 eu1 kernel: [ 5.824697] systemd[1]: systemd-random-seed.service: Job zfs-mount.service/start deleted to break ordering cycle starting with systemd-random-seed.service/start

affects: cryptsetup (Ubuntu) → systemd (Ubuntu)
Steve Langasek (vorlon)
Changed in systemd (Ubuntu):
status: Incomplete → New
Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in systemd (Ubuntu):
status: New → Confirmed
Dan Streetman (ddstreet)
affects: systemd (Ubuntu) → zfs-linux (Ubuntu)
Revision history for this message
Dan Streetman (ddstreet) wrote :

This isn't a systemd bug, it's a bug in zfs's zfs-mount.service, added in this upstream commit:
https://github.com/openzfs/zfs/commit/8ae8b2a1445bcccee1bb8ee7d4886f30050f6f53

That orders zfs-mount.service *before* systemd-random-seed.service, while systemd's generator for cryptsetup orders *after* systemd-random-seed.service:
https://github.com/systemd/systemd/blob/master/src/cryptsetup/cryptsetup-generator.c#L193
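
In unit terms, the conflict looks roughly like this (paraphrased excerpts, not verbatim from either source; the "<email address hidden>" entries in the logs are the generated systemd-cryptsetup@<name>.service units, which Launchpad redacts because the @ makes them look like email addresses):

    # zfs-mount.service, per the OpenZFS commit above
    [Unit]
    Before=systemd-random-seed.service

    # generated systemd-cryptsetup@<name>.service for urandom-keyed swap
    [Unit]
    After=systemd-random-seed.service
    Before=cryptsetup.target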

zfs-mount.service is then caught in a dependency loop:
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found ordering cycle on zfs-import.target/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on zfs-import-cache.service/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on cryptsetup.target/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on <email address hidden>/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on systemd-random-seed.service/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on zfs-mount.service/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Job zfs-import.target/start deleted to break ordering cycle starting with zfs-mount>
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found ordering cycle on zfs-load-module.service/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on cryptsetup.target/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on <email address hidden>/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on systemd-random-seed.service/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Found dependency on zfs-mount.service/start
May 04 14:37:19 lp1875577-f systemd[1]: zfs-mount.service: Job zfs-load-module.service/start deleted to break ordering cycle starting with zfs>

@didrocks, I think you added the zfs change upstream, maybe it needs to be adjusted to handle it some other way? In general, services shouldn't order themselves before systemd-random-seed without very careful consideration.

Revision history for this message
Richard Laager (rlaager) wrote :

This is a tricky one because all of the dependencies make sense in isolation. Even if we remove the dependency added by that upstream OpenZFS commit, given that modern systems use zfs-mount-generator, systemd-random-seed.service is going to Require= and After= var-lib.mount because of its RequiresMountsFor=/var/lib/systemd/random-seed. The generated var-lib.mount will be After=zfs-import.target because you can't mount a filesystem without importing the pool. And zfs-import.target is After= the two zfs-import-* services. Those are after cryptsetup.target, as you might be running your pool on top of LUKS.
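
For reference, each edge in that chain can be confirmed on a running system with standard systemctl commands, e.g.:

    systemctl show -p Requires,After systemd-random-seed.service
    systemctl show -p After var-lib.mount
    systemctl show -p After zfs-import-cache.service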

Mostly side note: it does seem weird and unnecessary that zfs-load-module.service has After=cryptsetup.target. We should probably remove that. That is coming from debian/patches/2100-zfs-load-module.patch (which is what provides zfs-load-module.service in its entirety).

One idea here would be to eliminate the After=cryptsetup.target from zfs-import-{cache,scan}.service and require that someone add them via a drop-in if they are running on LUKS. However, in that case, they'll run into the same problem anyway. So that's not really a fix.
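
Such a drop-in, for users who do run their pools on LUKS, would look something like this (the path and file name are illustrative; drop-ins can only add ordering, which is exactly what would be needed here):

    # /etc/systemd/system/zfs-import-cache.service.d/luks.conf
    [Unit]
    After=cryptsetup.target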

Another option might be to remove the zfs-mount.service Before=systemd-random-seed.service and effectively require the use of the mount generator for Root-on-ZFS setups. That is what the Ubuntu installer does and what the Root-on-ZFS HOWTO will use for 20.04 anyway. (I'm working on it actively right now.) Then, modify zfs-mount-generator to NOT After=zfs-import.target (and likewise for Wants=) if the relevant pool is already imported (and likewise for the zfs-load-key- services). Since the rpool will already be imported by the time zfs-mount-generator runs, that would be omitted.

I've attached an *untested* patch to that effect. I hope to test this yet tonight as I test more Root-on-ZFS scenarios, but no promises.
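
The core of the generator change, as a rough shell sketch (this is not the attached patch; zfs-mount-generator is a shell script in 0.8.x, and the variable names here are made up):

    pool="${dataset%%/*}"                 # pool name is everything before the first /
    if zpool list -H -o name "${pool}" >/dev/null 2>&1; then
        # Pool is already imported (e.g. the root pool imported from the
        # initramfs), so the generated mount unit needs no import dependency.
        after_import=""
    else
        after_import="After=zfs-import.target"
    fi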

Revision history for this message
Richard Laager (rlaager) wrote :

John Gray: Everything else aside, you should mirror your swap instead of striping it (which I think is what you're doing). With your current setup, if a disk dies, your system will crash.

Revision history for this message
Ubuntu Foundations Team Bug Bot (crichton) wrote :

The attachment "2150-fix-systemd-dependency-loops.patch" seems to be a patch. If it isn't, please remove the "patch" flag from the attachment, remove the "patch" tag, and if you are a member of the ~ubuntu-reviewers, unsubscribe the team.

[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issues please contact him.]

tags: added: patch
Revision history for this message
Richard Laager (rlaager) wrote :

I didn't get a chance to test the patch. I'm running into unrelated issues.

Revision history for this message
Didier Roche-Tolomelli (didrocks) wrote :

Your patch makes sense, Richard, and I think it will be a good upstream candidate. Of all the approaches you proposed, this is my preferred one because it is the most flexible, IMHO.

Tell me when you get a chance to test it and maybe John, you can confirm this fixes it for you?

Revision history for this message
John Gray (johngray) wrote :

Hi Richard and Didier, thanks - I have set up encrypted swap on mdraid1 instead. It works, but is subject to the same cycle issue - sometimes swap doesn't load, sometimes the boot ZFS pool won't mount.

I went to apply the patch, but my system doesn't seem to have the two files it references,
etc/systemd/system-generators/zfs-mount-generator.in
and
/etc/systemd/system/zfs-mount.service.in
so the patch command failed.

I have probably misunderstood something, sorry... I'm way out of my depth here. FWIW here are the contents of the directories for the missing files:

root@eu1 ~ ls -l /etc/systemd
total 94
-rw-r--r-- 1 root root 1042 Apr 22 19:04 journald.conf
-rw-r--r-- 1 root root 1042 Apr 22 19:04 logind.conf
drwxr-xr-x 2 root root 2 Apr 22 19:04 network
-rw-r--r-- 1 root root 584 Apr 2 04:23 networkd.conf
-rw-r--r-- 1 root root 529 Apr 2 04:23 pstore.conf
-rw-r--r-- 1 root root 634 Apr 22 19:04 resolved.conf
-rw-r--r-- 1 root root 790 Apr 2 04:23 sleep.conf
drwxr-xr-x 15 root root 21 May 3 13:08 system
-rw-r--r-- 1 root root 1759 Apr 22 19:04 system.conf
-rw-r--r-- 1 root root 604 Apr 22 19:04 timesyncd.conf
drwxr-xr-x 3 root root 3 May 2 18:24 user
-rw-r--r-- 1 root root 1185 Apr 22 19:04 user.conf

(system-generators is missing)

root@eu1 ~ ls -l /etc/systemd/system/
total 90
lrwxrwxrwx 1 root root 44 Apr 27 17:56 dbus-org.freedesktop.resolve1.service -> /lib/systemd/system/systemd-resolved.service
lrwxrwxrwx 1 root root 36 Apr 27 18:15 dbus-org.freedesktop.thermald.service -> /lib/systemd/system/thermald.service
lrwxrwxrwx 1 root root 45 Apr 27 17:56 dbus-org.freedesktop.timesync1.service -> /lib/systemd/system/systemd-timesyncd.service
drwxr-xr-x 2 root root 3 Apr 27 17:56 default.target.wants
drwxr-xr-x 2 root root 3 Apr 27 18:05 emergency.target.wants
drwxr-xr-x 2 root root 3 Apr 27 17:56 getty.target.wants
drwxr-xr-x 2 root root 4 May 3 13:08 mdmonitor.service.wants
drwxr-xr-x 2 root root 21 Apr 28 19:21 multi-user.target.wants
drwxr-xr-x 2 root root 3 Apr 27 18:05 rescue.target.wants
drwxr-xr-x 2 root root 11 May 3 13:08 sockets.target.wants
lrwxrwxrwx 1 root root 31 Apr 27 18:02 sshd.service -> /lib/systemd/system/ssh.service
drwxr-xr-x 2 root root 10 May 3 13:08 sysinit.target.wants
lrwxrwxrwx 1 root root 35 Apr 27 17:56 syslog.service -> /lib/systemd/system/rsyslog.service
drwxr-xr-x 2 root root 10 May 2 18:29 timers.target.wants
lrwxrwxrwx 1 root root 35 Apr 27 18:05 zed.service -> /lib/systemd/system/zfs-zed.service
drwxr-xr-x 2 root root 3 Apr 27 18:05 zfs-import.target.wants
drwxr-xr-x 2 root root 3 Apr 27 18:05 zfs-mount.service.wants
drwxr-xr-x 2 root root 8 Apr 27 18:05 zfs.target.wants
drwxr-xr-x 2 root root 3 Apr 27 18:05 zfs-volumes.target.wants

(zfs-mount.service.in is missing)

The system was installed with debootstrap:

dpkg --install debootstrap_1.0.118ubuntu1_all.deb
debootstrap --arch=amd64 focal /mnt http://archive.ubuntu.com/ubuntu/

I did not install any zfs packages separately - the system only has what debootstrap gave it:

root@eu1 ~ dpkg -l | grep zfs
ii libzfs2linux 0.8.3-1ubuntu12 amd64 OpenZFS file...


Revision history for this message
John Gray (johngray) wrote :

...going over my notes again, I actually did install the GRUB-related ZFS packages separately from debootstrap:

apt install --yes zfsutils-linux
apt install --yes zfs-initramfs

Revision history for this message
Didier Roche-Tolomelli (didrocks) wrote :

On an installed packaged system, the files are in different directories (and don’t have the .in extension as they have been built with the prefix replacement). Their names and locations are:

/lib/systemd/system/zfs-mount.service
/lib/systemd/system-generators/zfs-mount-generator
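
So, to try the attached patch on an installed system, adjust the paths in the patch to those two files and apply it, e.g. (back up the originals first; the exact patch invocation depends on how the edited headers are written):

    sudo cp /lib/systemd/system/zfs-mount.service{,.orig}
    sudo cp /lib/systemd/system-generators/zfs-mount-generator{,.orig}
    sudo patch -p1 -d / < 2150-fix-systemd-dependency-loops.patch
    sudo systemctl daemon-reload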

Revision history for this message
John Gray (johngray) wrote :

I changed the path names in the patch file and it applied. I rebooted and it worked! :-)

May 5 23:06:33 eu1 kernel: [ 6.480412] Adding 135128956k swap on /dev/mapper/md1swap. Priority:-2 extents:1 across:135128956k SSFS

I have all my ZFS filesystems mounted and I have mdraid1 swap. Thanks for all the help!

Revision history for this message
Didier Roche-Tolomelli (didrocks) wrote :

Great to hear John! Thanks for confirming and thanks to Richard for the patch.

I’m happy to SRU it to focal once it’s proposed upstream. (Keep me posted, Richard; you can drop a link here and I will monitor.)

Revision history for this message
John Gray (johngray) wrote :

While we are waiting for upstream to include the patch, would it be appropriate for me to share this bug report and the attached patch on ZFS forums, so other keen Ubuntu zfs-on-root users can have a workaround? Or would that constitute rushing out a fix without review/testing?

Revision history for this message
Richard Laager (rlaager) wrote :

Can you share a bit more detail about how you have yours set up? What does your partition table look like, what does the MD config look like, what do you have in /etc/fstab for swap, etc.? I'm running into weird issues with this configuration, separate from this bug.

@didrocks: I'll try to get this proposed upstream soon. If you beat me to it, I won't complain. :)

Revision history for this message
John Gray (johngray) wrote :

>Can you share a bit more details about how you have yours setup?

Sure!

Partitions:

root@eu1 ~ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 9.1T 0 disk
├─sda1 8:1 0 9.1T 0 part
└─sda9 8:9 0 8M 0 part
sdb 8:16 0 9.1T 0 disk
├─sdb1 8:17 0 9.1T 0 part
└─sdb9 8:25 0 8M 0 part
nvme1n1 259:0 0 953.9G 0 disk
├─nvme1n1p1 259:2 0 5G 0 part
├─nvme1n1p2 259:3 0 820G 0 part
└─nvme1n1p3 259:4 0 128.9G 0 part
  └─md1 9:1 0 128.9G 0 raid1
    └─md1swap 253:0 0 128.9G 0 crypt [SWAP]
nvme0n1 259:1 0 953.9G 0 disk
├─nvme0n1p1 259:5 0 5G 0 part
├─nvme0n1p2 259:6 0 820G 0 part
└─nvme0n1p3 259:7 0 128.9G 0 part
  └─md1 9:1 0 128.9G 0 raid1
    └─md1swap 253:0 0 128.9G 0 crypt [SWAP]

(The NVMe partitions 1 and 2 are set to type bf (Solaris) using fdisk, and partition 3 on each disk is set to type fd (Linux raid autodetect).)

MD config was set up with the following commands:
DISK1=/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802914
DISK2=/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802933
mdadm --create /dev/md1 --level 1 --raid-disks 2 --metadata 1.0 ${DISK1}-part3 ${DISK2}-part3

Then crypto device was configured:

root@eu1 ~ cat /etc/crypttab
# <name> <device> <password> <options>
md1swap /dev/md1 /dev/urandom swap,cipher=aes-xts-plain64,size=256

And fstab:

root@eu1 ~ cat /etc/fstab
# <filesystem> <dir> <type> <options> <dump> <pass>
/dev/mapper/md1swap none swap defaults 0 0

Then I rebooted to activate those last two.

FWIW, here is how I did the install, according to my notes, in case there is something in there (or missing from there) that makes the difference:

1. It's a Hetzner server, so I did the install from a PXE boot Debian Buster rescue console - basically the same as installing from a live CD command line rather than using the live CD installer

2. set disk variables for use by later commands
DISK1=/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802914
DISK2=/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802933

3. reset and partition disks

    parted --script $DISK1 mklabel msdos mkpart primary 1 5Gib mkpart primary 5GiB 825GiB mkpart primary 825GiB 100% set 1 boot on

    parted --script $DISK2 mklabel msdos mkpart primary 1 5Gib mkpart primary 5GiB 825GiB mkpart primary 825GiB 100% set 1 boot on

    fdisk $DISK1
      t
      1
      bf
      t
      2
      bf
      t
      3
      fd
      w

    fdisk $DISK2
      t
      1
      bf
      t
      2
      bf
      t
      3
      fd
      w

    fdisk -l
      check:
      DiskLabel type msdos
      2048 sector alignment
      bootable flag set on partition 1
      size is correct
      Type = Solaris, Solaris, Linux Raid Auto

4. create ZFS pools and datasets

  root pool
    zpool create -R /mnt -O mountpoint=none -f -o ashift=13 ssd mirror ${DISK1}-part2 ${DISK2}-part2

  root dataset
    zfs create -o acltype=posixacl -o compression=lz4 -o normalization=formD -o relatime=on -o dnodesize=auto -o xattr=sa ...

Revision history for this message
Richard Laager (rlaager) wrote :

I have submitted this upstream:
https://github.com/openzfs/zfs/pull/10388

Revision history for this message
John Gray (johngray-au) wrote : Re: [Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

Thanks! :)

Revision history for this message
Colin Ian King (colin-king) wrote :

Looks like the merge request is currently blocked. Can somebody ping me when it's merged, and I'll try to get this into Ubuntu 20.10 and SRU'd into 20.04.

Changed in zfs-linux (Ubuntu):
importance: Undecided → Low
assignee: nobody → Colin Ian King (colin-king)
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package zfs-linux - 0.8.4-1ubuntu5

---------------
zfs-linux (0.8.4-1ubuntu5) groovy; urgency=medium

  [ Jean-Baptiste Lallement ]
  [ Didier Roche ]
  * debian/patches/4000-zsys-support.patch:
    - Readd ZSys support erroneously dropped during previous debian merge.
    - Add external Luks store encryption support.
  * debian/patches/git_fix_dependency_loop_encryption{1,2}.patch:
    - Backport upstream patches (PR accepted, not merged yet) fixing a
      dependency loop when encryption via Luks (like swap) is enabled.
      (LP: #1875577)

 -- Didier Roche <email address hidden> Wed, 10 Jun 2020 10:48:19 +0200

Changed in zfs-linux (Ubuntu):
status: Confirmed → Fix Released
Revision history for this message
Guido Berhoerster (gber) wrote :

Could you please backport this to focal, too?

Revision history for this message
Didier Roche-Tolomelli (didrocks) wrote :

The patch doesn’t fix all instances of the bug (see upstream report linked above). I think we should clarify that before backporting it.

Revision history for this message
Vasili Pupkin (vasilipupkin765) wrote :

Any updates? I am having this issue, and I don't even have root on ZFS.
It does eventually succeed: the log reports the cycle, followed by "Job cryptsetup.target/start deleted to break ordering cycle", and then boot continues and swap is loaded later on during the boot.

Revision history for this message
Dan Streetman (ddstreet) wrote :

@halves could you take a look at this?

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in zfs-linux (Ubuntu Focal):
status: New → Confirmed
Changed in zfs-linux (Ubuntu Focal):
importance: Undecided → Medium
assignee: nobody → Heitor Alves de Siqueira (halves)
Revision history for this message
Heitor Alves de Siqueira (halves) wrote :
description: updated
Changed in zfs-linux (Ubuntu Focal):
status: Confirmed → In Progress
description: updated
Revision history for this message
Łukasz Zemczak (sil2100) wrote : Please test proposed package

Hello John, or anyone else affected,

Accepted zfs-linux into focal-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/zfs-linux/0.8.3-1ubuntu12.12 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, what testing has been performed on the package and change the tag from verification-needed-focal to verification-done-focal. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-focal. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.
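
For the impatient, enabling -proposed just for this verification looks roughly like the following (the wiki page above describes the full procedure, including pinning so that only selected packages come from -proposed):

    echo "deb http://archive.ubuntu.com/ubuntu focal-proposed main universe" | \
        sudo tee /etc/apt/sources.list.d/focal-proposed.list
    sudo apt update
    sudo apt install -t focal-proposed zfsutils-linux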

Changed in zfs-linux (Ubuntu Focal):
status: In Progress → Fix Committed
Revision history for this message
Heitor Alves de Siqueira (halves) wrote :

Validated with ZFS from focal-proposed, according to the test case from the description:
ubuntu@z-rotomvm34:~$ dpkg -l | grep zfsutils
ii zfsutils-linux 0.8.3-1ubuntu12.12 amd64 command-line tools to manage OpenZFS filesystems
ubuntu@z-rotomvm34:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 2.50G 25.6G 176K /
rpool/ROOT 2.50G 25.6G 176K none
rpool/ROOT/zfsroot 2.50G 25.6G 2.50G /
ubuntu@z-rotomvm34:~$ sudo journalctl -b | grep -i ordering
ubuntu@z-rotomvm34:~$ lsblk -e 7
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 30G 0 disk
├─vda1 252:1 0 512M 0 part /boot/efi
└─vda2 252:2 0 29.5G 0 part
nvme0n1 259:0 0 9.8G 0 disk
└─nvme0n1p1 259:1 0 9.8G 0 part
  └─swap 253:0 0 9.8G 0 crypt [SWAP]

In addition to the test above, I've also tested the configurations suggested in the [Test Plan] section. Besides validating the ordering bug, I've also done basic smoke tests and verified that the ZFS pools are working as expected.

- Encrypted rootfs on LVM + separate ZFS partitions:
ubuntu@ubuntu-focal:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zfspool 492K 4.36G 96K /mnt/zfspool
zfspool/tank 96K 4.36G 96K /mnt/zfspool/tank
ubuntu@ubuntu-focal:~$ dpkg -l | grep zfsutils
ii zfsutils-linux 0.8.3-1ubuntu12.12 amd64 command-line tools to manage OpenZFS filesystems
ubuntu@ubuntu-focal:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zfspool 492K 4.36G 96K /mnt/zfspool
zfspool/tank 96K 4.36G 96K /mnt/zfspool/tank
ubuntu@ubuntu-focal:~$ sudo journalctl -b | grep -i ordering
ubuntu@ubuntu-focal:~$ lsblk -e7
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 30G 0 disk
├─vda1 252:1 0 512M 0 part /boot/efi
├─vda2 252:2 0 1K 0 part
├─vda5 252:5 0 731M 0 part /boot
└─vda6 252:6 0 28.8G 0 part
  └─vda6_crypt 253:0 0 28.8G 0 crypt
    ├─vgubuntu--focal-root 253:1 0 27.8G 0 lvm /
    └─vgubuntu--focal-swap_1 253:2 0 980M 0 lvm [SWAP]
vdb 252:16 0 5G 0 disk
├─vdb1 252:17 0 5G 0 part
└─vdb9 252:25 0 8M 0 part

- ZFS on LUKS
ubuntu@z-rotomvm33:~$ dpkg -l | grep zfsutils
ii zfsutils-linux 0.8.3-1ubuntu12.12 amd64 command-line tools to manage OpenZFS filesystems
ubuntu@z-rotomvm33:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zfspool 612K 9.20G 96K /mnt/zfspool
zfspool/tank 96K 9.20G 96K /mnt/zfspool/tank
ubuntu@z-rotomvm33:~$ sudo journalctl -b | grep -i ordering
ubuntu@z-rotomvm33:~$ lsblk -e7
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 30G 0 disk
├─vda1 252:1 0 512M 0 part /boot/efi
└─vda2 252:2 ...


tags: added: verification-done verification-done-focal
Revision history for this message
Chris Halse Rogers (raof) wrote : Update Released

The verification of the Stable Release Update for zfs-linux has completed successfully and the package is now being released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package zfs-linux - 0.8.3-1ubuntu12.12

---------------
zfs-linux (0.8.3-1ubuntu12.12) focal; urgency=medium

  * Fix dependency loop preventing swap partitions from being mounted
    correctly (LP: #1875577)
    - d/p/4900-Fix-a-dependency-loop.patch
    - d/p/4901-Fix-another-dependency-loop.patch

 -- Heitor Alves de Siqueira <email address hidden> Mon, 12 Jul 2021 14:36:13 +0000

Changed in zfs-linux (Ubuntu Focal):
status: Fix Committed → Fix Released