Cannot unlock encrypted root after upgrading to 22.04 due to use of non-standard ciphers

Bug #1979159 reported by fedorowp
This bug affects 8 people
| Affects | Status | Importance | Assigned to | Milestone |
| cryptsetup (Ubuntu) | Fix Released | Critical | Unassigned | |
| Jammy | Fix Released | Critical | Unassigned | |
| Kinetic | Fix Released | Critical | Unassigned | |

Bug Description

[Impact]

After upgrading to Ubuntu 22.04 with an encrypted root filesystem, the root drive can no longer be unlocked at the "Please unlock disk <diskname>" prompt on boot.

The encrypted root disk can be unlocked fine from the liveCD, but not from the initramfs environment on boot.

The issue is caused by support for various LUKS encryption algorithms now being missing from the initramfs environment due to changes introduced in OpenSSL 3.0. Ubuntu pre-release testing also did not include a test case of upgrading older Ubuntu versions with an encrypted root to the new release.

[Test Plan]

Test a fresh installation:

* Use Ubuntu 22.04 installer
* Prepare the encrypted disk layout (first partition /boot, second for /), then go back one step in the installer
* Change the hash from a terminal:
```
sudo cryptsetup close vda2_crypt
sudo cryptsetup luksFormat --hash=whirlpool /dev/vda2
sudo cryptsetup luksOpen /dev/vda2 vda2_crypt
sudo mkfs.ext4 /dev/mapper/vda2_crypt
```
* Continue and complete installation
* Ensure that /target/etc/crypttab exists (if not, create it and run "update-initramfs -u" in "chroot /target"; a minimal sketch follows this list)
* Reboot
* The system should ask for the password during boot and successfully boot into the desktop
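
If /target/etc/crypttab is missing, creating it from the installer session can look roughly like this (a sketch; the device name vda2 matches the layout above, and the bind mounts may already be in place):
```
# Create a crypttab entry for the encrypted root and rebuild the initramfs in the target
UUID=$(sudo blkid -s UUID -o value /dev/vda2)
echo "vda2_crypt UUID=$UUID none luks" | sudo tee /target/etc/crypttab
for d in dev proc sys; do sudo mount --bind "/$d" "/target/$d"; done
sudo chroot /target update-initramfs -u
```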

Test an upgrade:

* Install Ubuntu 20.04 (similar to above)
* Upgrade to Ubuntu 22.04 (see the upgrade command after this list)
* Reboot
* The system should ask for the password during boot and successfully boot into the desktop
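
The upgrade step itself can be run with the standard release upgrader (note: before the 22.04.1 point release, LTS-to-LTS upgrades additionally required the -d flag):
```
# Start the 20.04 -> 22.04 upgrade from a terminal
sudo do-release-upgrade
```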

[Where problems could occur]

The changed code is called when running "update-initramfs". Therefore, generating a new initramfs could fail (and the user would stay on an old one). Upgrading the package will trigger "update-initramfs", so bugs in the initramfs (or its scripts) can be triggered at that time.

[Workaround]
The issue can be worked around by:
1. Booting from the 22.04 liveCD.
2. chrooting into the target system's root.
       See https://help.ubuntu.com/community/ManualFullSystemEncryption/Troubleshooting
3. Creating a file /etc/initramfs-tools/hooks/custom-add-openssl-compat.conf containing (a fuller hook skeleton is sketched after this list):
---
. /usr/share/initramfs-tools/hook-functions
copy_exec /usr/lib/x86_64-linux-gnu/ossl-modules/legacy.so /usr/lib/x86_64-linux-gnu/ossl-modules/
---
4. Mark the file as executable: chmod +x /etc/initramfs-tools/hooks/custom-add-openssl-compat.conf
5. Regenerating the initramfs, i.e. update-initramfs -k all -u
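
For reference, a slightly fuller version of the hook from step 3, following the standard initramfs-tools hook skeleton (the multiarch path assumes amd64):
```
#!/bin/sh
# Copy the OpenSSL 3 legacy provider into the initramfs so that LUKS headers
# using legacy hashes (e.g. ripemd160, whirlpool) can still be opened at boot.
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac

. /usr/share/initramfs-tools/hook-functions
copy_exec /usr/lib/x86_64-linux-gnu/ossl-modules/legacy.so /usr/lib/x86_64-linux-gnu/ossl-modules/
```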

fedorowp (fedorowp)
description: updated
Steve Langasek (vorlon)
Changed in cryptsetup (Ubuntu):
importance: Undecided → Critical
tags: added: rls-jj-incoming rls-kk-incoming
Simon Chopin (schopin)
tags: added: transition-openssl3-jj
tags: added: fr-2498
Steve Langasek (vorlon)
Changed in cryptsetup (Ubuntu Jammy):
importance: Undecided → Critical
Lukas Märdian (slyon)
tags: removed: rls-jj-incoming rls-kk-incoming
Revision history for this message
Sébastien S (cebeatminalea) wrote :

Would there be any way to work around the issue BEFORE the upgrade to 22.04? Resorting to booting a live CD is far from ideal; in my case I have many machines to upgrade.

Revision history for this message
Sébastien S (cebeatminalea) wrote :

The upgrade from 18.04 to 20.04 was also bumpy by the way. Fortunately there were steps to mitigate the issue after the upgrade and BEFORE the reboot: https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1877473

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in cryptsetup (Ubuntu Jammy):
status: New → Confirmed
Changed in cryptsetup (Ubuntu):
status: New → Confirmed
Revision history for this message
Thomas (survirtual) wrote (last edit ):

Alternative workaround steps (w/o live cd):

1) Access initramfs environment at boot

2) # ls /dev/mapper

3) If you only see something like "isw_bdihdgjaaf_v1" and "control", but no "isw_bdihdgjaaf_v1pX" at all, proceed. Take note of whatever the name is besides control.

4) # cd /sbin

5) # ./kpartx -a /dev/mapper/isw_bdihdgjaaf_v1 <- put your mapped device name here

6) # /scripts/local-top/cryptroot

7) # cryptroot-unlock <- it should properly prompt for your password now

I have only done this remotely; I am unsure how a local console would react to step 6. After running the cryptroot script, it will continue booting if you are doing this remotely. SSH back into the system, then do step 7.

Changed in cryptsetup (Ubuntu Jammy):
milestone: none → ubuntu-22.04.1
Revision history for this message
Jesper Zedlitz (jesper-zedlitz) wrote :

Since my laptop does not boot from a USB drive, I have to use the method described by Thomas. However, that does not work. Command #2 shows only "control". /sbin/ does not contain the kpartx binary. Command #6 asks for my password but never returns. If I try command #7 it says "Try again later".

Do you have any idea how to fix this fatal bug? I suspect that many users with laptops will be affected, since they usually encrypt their hard drives.

Revision history for this message
Steve Langasek (vorlon) wrote :

Thomas's workaround is not a workaround for the bug described here. I do not know what bug he is trying to work around, but it is not this one.

If you cannot boot from USB (why?), you should be able to use the GRUB menu at boot time to temporarily downgrade to the focal kernel until this bug is resolved.

Revision history for this message
Jesse Johnson (holocronweaver) wrote :

> pre-release testing not including a test-case of upgrading older Ubuntu versions with an encrypted root to the new version.

Is this true? If so, is there a separate issue tracking adding this test case for future releases?

I use the Ubuntu installer LVM encryption scheme on all systems - I assumed this was being tested and fully supported since it is not a hidden option.

Revision history for this message
Brian Murray (brian-murray) wrote :

It'd be helpful to get some clarification as to how and with which release people initially installed Ubuntu. Thanks!

To test this I installed an Ubuntu 20.04 LTS desktop using ubiquity and chose LVM and encryption. I then upgraded the system to Ubuntu 22.04 LTS and did not encounter an error unlocking the encrypted root partition.

Revision history for this message
Benjamin Drung (bdrung) wrote :

There are two questions:

1. Did a previous Ubuntu installation use an encryption algorithm that is now only supported by legacy.so? Since Brian failed to reproduce it with Ubuntu 20.04 LTS, can you tell which Ubuntu version was initially installed by pasting the content of /var/log/installer/media-info?

2. Which encryption algorithm does your device use? Can you run "sudo cryptsetup luksDump $your_device" and paste the output? Please remove UUID, Salt, and Digest to avoid leaking sensitive data. The following grep command should do the trick:

sudo cryptsetup luksDump $your_device | grep -Ev $'^\t* *(UUID|Salt|Digest:|[ 0-9a-f]+$)'

Revision history for this message
Benjamin Drung (bdrung) wrote :

Output on Ubuntu 22.04 (from an Ubuntu 21.10 install and upgrade):

$ sudo cryptsetup luksDump /dev/nvme0n1p1 | grep -Ev $'^\t* *(UUID|Salt|Digest:|[ 0-9a-f]+$)'
LUKS header information
Version: 2
Epoch: 3
Metadata area: 16384 [bytes]
Keyslots area: 16744448 [bytes]
Label: (no label)
Subsystem: (no subsystem)
Flags: (no flags)

Data segments:
  0: crypt
 offset: 16777216 [bytes]
 length: (whole device)
 cipher: aes-xts-plain64
 sector: 512 [bytes]

Keyslots:
  0: luks2
 Key: 512 bits
 Priority: normal
 Cipher: aes-xts-plain64
 Cipher key: 512 bits
 PBKDF: argon2i
 Time cost: 9
 Memory: 1048576
 Threads: 4
 AF stripes: 4000
 AF hash: sha256
 Area offset:32768 [bytes]
 Area length:258048 [bytes]
 Digest ID: 0
Tokens:
Digests:
  0: pbkdf2
 Hash: sha256
 Iterations: 336082

Revision history for this message
Benjamin Drung (bdrung) wrote :

Ubuntu 18.04 installation with encryption:

$ sudo cryptsetup luksDump /dev/vda5
LUKS header information for /dev/vda5

Version: 1
Cipher name: aes
Cipher mode: xts-plain64
Hash spec: sha256
Payload offset: 4096
MK bits: 512
MK digest: 55 0b b3 df c2 ba 30 0c 1d b1 80 9d 22 38 af 0d 60 6d de 0f
MK salt: 26 a0 b7 c7 7f 94 d4 2e 70 e7 33 32 19 b9 49 8d
                90 21 c6 4e 2a 80 31 1c c8 5e ef 8e e8 21 2f ed
MK iterations: 143091
UUID: 9f59f808-c690-482c-8e58-0f5551b2af8f

Key Slot 0: ENABLED
 Iterations: 2289466
 Salt: 29 f6 c7 43 d5 0b f1 a1 42 71 2b fb 93 a0 6f 48
                        b6 94 87 c5 ac 39 0e 67 33 02 30 ee 6d 93 2e f0
 Key material offset: 8
 AF stripes: 4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

Revision history for this message
Simon Chopin (schopin) wrote :

I had a quick look at the issue:

LUKS changed its default hashing algorithm from SHA1 to SHA256 back in 1.7.0, which was first shipped in Ubuntu in yakkety (16.10, I believe?). I don't know if we've ever supported other methods for disk encryption, e.g. cryptsetup "plain mode"?

IMHO we should at least try to detect if a mounted partition is using LUKS with sha1 and abort the upgrade with instructions to re-encrypt the partition.

As a subsidiary question, what if they're using non-default ciphers, e.g. blowfish? Do we support that?
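
A rough sketch of the kind of pre-upgrade check suggested above, adjusted to the hashes that later in this thread actually turn out to need the legacy provider (purely hypothetical; nothing like this shipped):
```
# Hypothetical pre-upgrade check: warn if any LUKS header uses a hash that
# OpenSSL 3 only provides via the legacy provider.
for dev in $(lsblk -prno NAME,FSTYPE | awk '$2 == "crypto_LUKS" {print $1}'); do
    if sudo cryptsetup luksDump "$dev" | grep -Eiq '(Hash spec|AF hash|Hash):[[:space:]]*(ripemd160|whirlpool)'; then
        echo "Warning: $dev uses a legacy hash; the initramfs will need OpenSSL's legacy.so to unlock it"
    fi
done
```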

Revision history for this message
Benjamin Drung (bdrung) wrote :

Fresh Ubuntu 16.04.2 desktop installation with encryption:

$ sudo cryptsetup luksDump /dev/vda5
LUKS header information for /dev/vda5

Version: 1
Cipher name: aes
Cipher mode: xts-plain64
Hash spec: sha256
Payload offset: 4096
MK bits: 512
MK digest: [...]
MK salt: [...]
MK iterations: 147375
UUID: [...]

Key Slot 0: ENABLED
    Iterations: 589861
    Salt: [...]
    Key material offset: 8
    AF stripes: 4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

Revision history for this message
Benjamin Drung (bdrung) wrote :

I upgraded this Ubuntu 16.04.2 installation through all LTSes to Ubuntu 22.04. The upgrades went fine and Ubuntu 22.04 booted correctly:

$ lsinitramfs /boot/initrd.img | grep -E 'lib(ssl|crypto)'
usr/lib/x86_64-linux-gnu/libcrypto.so.3

Revision history for this message
Steve Langasek (vorlon) wrote :

Here's a system that was installed with FDE using Lucid:

# cat /var/log/installer/media-info
Ubuntu 10.04.1 LTS "Lucid Lynx" - Release amd64 (20100816.1)
# cryptsetup luksDump /dev/sda2
LUKS header information for /dev/sda2

Version: 1
Cipher name: aes
Cipher mode: cbc-essiv:sha256
Hash spec: sha1
Payload offset: 1032
MK bits: 128
MK digest: [...]
MK salt: [...]
MK iterations: 10
UUID: [...]

Key Slot 0: ENABLED
        Iterations: 198357
        Salt: [...]
        Key material offset: 8
        AF stripes: 4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

Revision history for this message
Steve Langasek (vorlon) wrote :

@fedorowp, it would be helpful if you would post the same output from your system so we have a reproducer we know will *actually* address your issue.

Revision history for this message
Benjamin Drung (bdrung) wrote :

Thanks Steve.

| Key | Ubuntu 10.04 | Ubuntu >= 16.04 |
| Cipher mode | cbc-essiv:sha256 | xts-plain64 |
| Hash spec | sha1 | sha256 |

For testing, we can create a LUKS device that uses cipher mode cbc-essiv:sha256 and/or hash spec sha1. Then we should be able to reproduce the issue.

Suggested solution: Let openssl ship an initramfs-tools hook that does the following (a sketch follows the list):

1. Get the list of encrypted disks:
dmsetup ls --target crypt
2. Map the disk name (e.g. system_crypt) to a dm-X name:
readlink -f /dev/mapper/$name
3. Get slave device (i.e. the underlying disk):
ls -1 /sys/block/dm-X/slaves/
4. For each underlying disk check the cipher mode / hash spec:
cryptsetup luksDump /dev/$disk | grep "^Hash spec: $legacy"
5. If the cipher mode / hash spec is legacy, include /usr/lib/x86_64-linux-gnu/ossl-modules/legacy.so
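
A rough shell sketch of those five steps inside an initramfs-tools hook (hypothetical: the legacy hash list and the amd64 module path are assumptions, and the fix that eventually landed lives in cryptsetup's existing cryptroot hook instead):
```
#!/bin/sh
. /usr/share/initramfs-tools/hook-functions

# Walk all active dm-crypt targets and include legacy.so if any underlying
# LUKS header uses a hash that OpenSSL 3 moved into the legacy provider.
dmsetup ls --target crypt 2>/dev/null | while read -r name _; do
    [ -e "/dev/mapper/$name" ] || continue
    dm=$(basename "$(readlink -f "/dev/mapper/$name")")   # e.g. dm-0
    for slave in /sys/block/"$dm"/slaves/*; do
        dev="/dev/$(basename "$slave")"
        if cryptsetup luksDump "$dev" 2>/dev/null | \
               grep -Eiq '(Hash spec|AF hash|Hash):[[:space:]]*(ripemd160|whirlpool)'; then
            copy_exec /usr/lib/x86_64-linux-gnu/ossl-modules/legacy.so /usr/lib/x86_64-linux-gnu/ossl-modules/
        fi
    done
done
```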

Revision history for this message
Steve Langasek (vorlon) wrote : Re: [Bug 1979159] Re: Cannot unlock encrypted root after upgrading to 22.04

On Mon, Aug 01, 2022 at 06:58:20PM -0000, Benjamin Drung wrote:
> 1. Get the list of encrypted disks:
> dmsetup ls --target crypt
> 2. Map the disk name (e.g. system_crypt) to a dm-X name:
> readlink -f readlink -f /dev/mapper/$name
> 3. Get slave device (i.e. the underlying disk):
> ls -1 /sys/block/dm-X/slaves/
> 4. For each underlying disk check the cipher mode / hash spec:
> cryptsetup luksDump /dev/$disk | grep "^Hash spec: $legacy"
> 5. If the cipher mode / hash spec is legacy, include /usr/lib/x86_64-linux-gnu/ossl-modules/legacy.so

That seems pretty complicated, vs simply including legacy.so
unconditionally. The size increase is trivial compared to the plymouth
graphics stack, so I don't see any reason not to?

Revision history for this message
Benjamin Drung (bdrung) wrote : Re: Cannot unlock encrypted root after upgrading to 22.04

I had a deeper look at the code. cryptsetup-initramfs ships /usr/share/initramfs-tools/hooks/cryptroot, which already analyzes the volumes. populate_CRYPTO_MODULES determines the cipher mode. Adding legacy.so for MODULES=most will be fine, but not for other modes.

Revision history for this message
Benjamin Drung (bdrung) wrote :

I tried to reproduce it in an Ubuntu 22.04 VM by preparing disks with the following options:

```
sudo apt install cryptsetup-bin cryptsetup-initramfs
sudo cryptsetup luksFormat --cipher=aes-cbc-essiv:sha256 --hash=sha1 /dev/vdb
sudo cryptsetup luksFormat --cipher=aes-cbc-essiv:sha256 /dev/vdc
sudo cryptsetup luksFormat --hash=sha1 /dev/vdd
```

Then added break=bottom to the kernel options:

```
sudo vim /etc/default/grub
-> GRUB_CMDLINE_LINUX_DEFAULT="quiet splash break=bottom"
sudo update-grub
```

Rebooted and tried to open the disks in the initramfs:

```
cryptsetup luksOpen /dev/vdb vdb
cryptsetup luksOpen /dev/vdc vdc
cryptsetup luksOpen /dev/vdd vdd
```

All opened fine. I need to investigate further. It does not look like it's the cipher mode or hash spec.

Revision history for this message
Benjamin Drung (bdrung) wrote :

I still fail to reproduce the issue. I installed Ubuntu 10.04 in a VM and ran "sudo cryptsetup luksFormat /dev/vdX". The resulting volume has the same luksDump output as what Steve posted in comment 16. Then I attached that volume to an Ubuntu 22.04 VM. Opening this volume in the initramfs with "cryptsetup luksOpen /dev/vde vde" worked.

To rule out failures in other parts of the initramfs scripts, I formatted the volumes with ext4 and added them to /etc/crypttab:

```
vdb /dev/vdb none luks
vdc /dev/vdc none luks
vdd /dev/vdd none luks
vde /dev/vde none luks
```

and /etc/fstab:

```
/dev/mapper/vdb /mnt/vdb ext4 defaults 0 0
/dev/mapper/vdc /mnt/vdc ext4 defaults 0 0
/dev/mapper/vdd /mnt/vdd ext4 defaults 0 0
/dev/mapper/vde /mnt/vde ext4 defaults 0 0
```

Then I updated the initramfs with "sudo update-initramfs -u" and rebooted. I was asked for the volume passwords and the initramfs successfully mounted them all. The initramfs did not contain legacy.so:

```
$ lsinitramfs /boot/initrd.img | grep legacy
usr/lib/modules/5.15.0-43-generic/kernel/drivers/ata/pata_legacy.ko
```

Revision history for this message
asi (gmazyland) wrote :

LUKS1/LUKS2 never used, in their *default* settings, any algorithm that is now in the legacy OpenSSL 3 provider (the RIPEMD160 and Whirlpool hashes are in legacy now).
Also, cryptsetup (since version 2.4.0) always tries to load both the default and the legacy provider for OpenSSL 3.

That said, you can try luksFormat with --hash ripemd160 (which is only in legacy), but I doubt this is the problem you are seeing here.

If the user had a plain device instead of LUKS (plain mode uses RIPEMD160 by default), then you always need legacy.

The problem can also be in kernel modules (usually a missing kernel crypto module in the initramfs; note that xts also needs the ecb module to work).

Revision history for this message
Benjamin Drung (bdrung) wrote :

Still unable to reproduce after an upgrade test. I prepared a LUKS volume on Ubuntu 10.04, then used this volume to install Ubuntu 20.04 on it, and then upgraded to Ubuntu 22.04. The upgrade went fine and the initramfs correctly unlocked the volume.

Revision history for this message
Benjamin Drung (bdrung) wrote :

/sbin/cryptsetup is linked against /usr/lib/x86_64-linux-gnu/libcrypto.so.3 which is shipped by libssl3. libssl3 also ships /usr/lib/x86_64-linux-gnu/ossl-modules/legacy.so.

Attached a patch that will include /usr/lib/${arch}/ossl-modules/legacy.so when MODULES=most is used. For other `MODULES` settings (e.g. `dep`) it needs to be determined whether the legacy library is needed.

tags: added: patch
Revision history for this message
Benjamin Drung (bdrung) wrote :

Thanks @gmazyland. I created a volume with --hash=ripemd160 and it failed to open in the initramfs (without legacy.so):

```
(initramfs) cryptsetup luksOpen /dev/vdb vdb
Enter passphrase for /dev/vdb:
Keyslot open failed.
```

Since that volume was not the root volume, the boot continued and it was correctly unlocked after the initramfs phase. So only users with a legacy hash algorithm on their root partition will have a problem.

Thanks to this test case, I can develop a better patch now.

Revision history for this message
Benjamin Drung (bdrung) wrote :

I could reproduce the issue with encrypted volumes created as follows:

```
sudo cryptsetup luksFormat --hash=ripemd160 $volume
sudo cryptsetup luksFormat --hash=whirlpool $volume
```

I developed the attached patch to include legacy.so for root or /usr volumes that use the hash ripemd160 or whirlpool. Successfully tested in a VM:

* Use Ubuntu 22.04 installer
* Prepare the encrypted disk layout (first partition /boot, second for /), then go back one step in the installer
* Change the hash from a terminal:
```
sudo cryptsetup close vda2_crypt
sudo cryptsetup luksFormat --hash=whirlpool /dev/vda2
sudo cryptsetup luksOpen /dev/vda2 vda2_crypt
sudo mkfs.ext4 /dev/mapper/vda2_crypt
```
* Continue and complete installation
* Ensure that /target/etc/crypttab exists
* Apply workaround from bug description
* Reboot into installation
* Apply attached patch
* Remove workaround
* Run "update-initramfs -u" and reboot
* System boots correctly (a quick check that the new initramfs contains legacy.so is sketched below)
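
The check referenced in the last step is a one-liner (path assumed for amd64):
```
# After the patched hook runs, the legacy provider should be inside the new initramfs
lsinitramfs /boot/initrd.img | grep ossl-modules/legacy.so
```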

Revision history for this message
Benjamin Drung (bdrung) wrote :
Revision history for this message
Benjamin Drung (bdrung) wrote :

jammy SRU patch attached.

Revision history for this message
Benjamin Drung (bdrung) wrote :

Interesting discussion outcome in https://salsa.debian.org/cryptsetup-team/cryptsetup/-/merge_requests/31:

@guilhem pointed me to https://www.openssl.org/docs/man3.0/man7/OSSL_PROVIDER-legacy.html#Hashing-Algorithms-Message-Digests – OSSL_PROVIDER-legacy does not only provide hashing algorithms, but also ciphers (like blowfish). After that discovery, the code needed to determine whether legacy.so is required would grow to the point of risking some breakage in the future. Therefore I will change the patch to include legacy.so whenever the initramfs needs to open an encrypted volume.

Revision history for this message
Benjamin Drung (bdrung) wrote :

Correction: The ciphers are handled by the kernel, and legacy.so is not needed for ciphers like blowfish. So let's stick to the plan of checking for legacy hashes.

Revision history for this message
Benjamin Drung (bdrung) wrote :

I got a lot of feedback in https://salsa.debian.org/cryptsetup-team/cryptsetup/-/merge_requests/31 that resulted in a bigger rework.

Uploaded cryptsetup 2:2.5.0-1ubuntu1 to kinetic and 2:2.4.3-1ubuntu1.1 (see attached cryptsetup_2.4.3-1ubuntu1.1_v2.patch) to jammy as SRU.

Changed in cryptsetup (Ubuntu Jammy):
status: Confirmed → Fix Committed
Changed in cryptsetup (Ubuntu Kinetic):
status: Confirmed → Fix Committed
description: updated
Revision history for this message
Benjamin Drung (bdrung) wrote :

https://salsa.debian.org/cryptsetup-team/cryptsetup/-/merge_requests/31 got merged with minor changes.

Yesterday I claimed that I had uploaded the jammy SRU, but I hadn't. I have now uploaded the final merged version of the patch (attached cryptsetup_2.4.3-1ubuntu1.1_v3.patch) to Ubuntu 22.04 (jammy).

Revision history for this message
Jesse Johnson (holocronweaver) wrote :

Thanks Ben for all the work! Do you need help testing?

Reiterating a prior comment:

> pre-release testing not including a test-case of upgrading older Ubuntu versions with an encrypted root to the new version.

Is this true? If so, is there a separate issue tracking adding this test case for future releases?

I use the Ubuntu installer LVM encryption scheme on all systems - I assumed it was being tested for all upgrades before release.

Revision history for this message
Łukasz Zemczak (sil2100) wrote :

Hey Ben! Any reason why the jammy upload has the patch without the minor changes from the Debian review? I'll accept it as-is since I don't think those small changes (like echo usage over printf etc.) warrant a re-upload, but I was wondering if there was a reason for that.

tags: added: verification-needed verification-needed-jammy
Revision history for this message
Łukasz Zemczak (sil2100) wrote : Please test proposed package

Hello fedorowp, or anyone else affected,

Accepted cryptsetup into jammy-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/cryptsetup/2:2.4.3-1ubuntu1.1 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, what testing has been performed on the package and change the tag from verification-needed-jammy to verification-done-jammy. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-jammy. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Revision history for this message
Łukasz Zemczak (sil2100) wrote : Re: Cannot unlock encrypted root after upgrading to 22.04

Also, before we actually proceed with releasing this to -updates, can you provide a Regression Potential (aka. [Where problems could occur]) section to the bug description? The test-case is quite good, but it would be also necessary to think of all the regression scenarios this could cause - especially for a package like cryptsetup.

Benjamin Drung (bdrung)
description: updated
description: updated
Revision history for this message
Benjamin Drung (bdrung) wrote :

Hi Łukasz, the patch v3 and the upload to jammy have the changes from the Debian review:
https://launchpadlibrarian.net/616651605/cryptsetup_2%3A2.4.3-1ubuntu1_2%3A2.4.3-1ubuntu1.1.diff.gz

The kinetic version lacks the minor changes from Debian, but the next merge will include them.

I added a regression potential to the description.

Revision history for this message
Benjamin Drung (bdrung) wrote :

How can I test the upgrade? I prepared a VM with Ubuntu 20.04, but upgrading to jammy with proposed enabled is not allowed:

```
$ update-manager -c --proposed --devel-release
The options --devel-release and --proposed are
mutually exclusive. Please use only one of them.
```

Revision history for this message
Brian Murray (brian-murray) wrote :

The automated upgrade testing that the Ubuntu QA team is currently performing does not include the various installation options (encrypted file systems, ZFS, automatic login), so generally what we are testing is the upgrade path of packages in the Ubuntu archive. However, we would like to expand our upgrade testing in the future to cover different installation configurations.

Revision history for this message
Benjamin Drung (bdrung) wrote :

I ran the upgrade test plan from the bug description. Since the fixed cryptsetup is still in jammy-proposed, I modified the steps as follows:

* Install Ubuntu 20.04 (see bug description)
* Upgrade to Ubuntu 22.04 (no reboot)
* Enable jammy-proposed
* sudo apt dist-upgrade
* Reboot

The system rebooted correctly asking for the password and unlocking the root partition that uses the whirlpool hash. So I mark this bug with verification-done-jammy.

@Jesse, the more testers the better. If you want to help, you can test the upgrade on your systems, but before rebooting at the end of the upgrade, enable jammy-proposed and install all updates.
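
For anyone following along, enabling jammy-proposed just for these packages can look roughly like this (a sketch of what the EnableProposed wiki page describes; apt pinning details omitted):
```
# Add the -proposed pocket and pull only the cryptsetup packages from it
echo 'deb http://archive.ubuntu.com/ubuntu jammy-proposed main universe' | \
    sudo tee /etc/apt/sources.list.d/jammy-proposed.list
sudo apt update
sudo apt install -t jammy-proposed cryptsetup cryptsetup-bin cryptsetup-initramfs
```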

tags: added: verification-done verification-done-jammy
removed: verification-needed verification-needed-jammy
Revision history for this message
Jesse Johnson (holocronweaver) wrote (last edit ):

Success!

I upgraded from Ubuntu 20.04 to 22.04 on a circa-2016 ASUS laptop using Ben's instructions in comment #41. After rebooting I was prompted for the password to unlock the device, which worked as expected and led to the login screen and then the desktop. I opened a few apps and browsed the file system; everything seems to be working, and all the files on the encrypted root and home partitions seem available!

IIRC the device had previously been upgraded from 18.04 to 20.04.

This is a laptop I regularly use, not a test device, so this was a 'real' successful upgrade in the wild. =)

Thanks all for the fix!

Hopefully prior to the next LTS release the automated testing can be improved to include encrypted file systems and other scenarios, as Brian mentioned in #40. It could help tighten release dates and increase user confidence (not to mention help Ubuntu devs sleep better at night!). If this issue had affected my non-techy users as part of a regular upgrade, it might have torpedoed their confidence in Ubuntu and Linux by association. (Almost every non-techy person I know who uses Ubuntu had it installed for them on a laptop with file system encryption enabled for privacy and security.)

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package cryptsetup - 2:2.5.0-1ubuntu1

---------------
cryptsetup (2:2.5.0-1ubuntu1) kinetic; urgency=medium

  * Merge from Debian unstable. Remaining changes:
    - debian/control:
      + Recommend plymouth.
      + Depend on busybox-initramfs instead of busybox | busybox-static.
      + Move cryptsetup-initramfs back to cryptsetup's Recommends.
      + Do not build cryptsetup-suspend binary package on i386.
    - Fix cryptroot-unlock for busybox compatibility.
    - Fix warning and error when running on ZFS on root: (LP: #1830110)
      - d/functions: Return an empty devno for ZFS devices as they don't have
        major:minor device numbers.
      - d/initramfs/hooks/cryptroot: Ignore and don't print an error message
        when devices don't have a devno.
    - debian/patches/decrease_memlock_ulimit.patch
      Fixed FTBFS due to a restricted build environment
    - Stop building the udeb on request.
  * d/initramfs/hooks/cryptroot: Include OpenSSL legacy.so for ripemd160 and
    whirlpool hash algorithms (LP: #1979159)
  * Disable failing Debian-tailored cryptroot-* autopkgtests, see bug #1983522

cryptsetup (2:2.5.0-1) unstable; urgency=medium

  * d/copyright: Fix licence for tokens/ssh/cryptsetup-ssh.c.
  * Remove patches applied upstream.
  * Rename 'ssh-plugin-test' to 'ssh-test-plugin'.
  * Add DEP-8 tests for cryptroot unlocking at early boot stage.

cryptsetup (2:2.5.0~rc1-3) experimental; urgency=medium

  * DEP-8: Add 'Features: test-name=' in order to name inline tests.
  * d/t/control: Add 'Restrictions: rw-build-tree' to upstream-testsuite.
  * d/control: Remove cryptsetup-reencrypt from cryptsetup-bin package
    description since the utility was removed upstream in v2.5.0-rc1.
  * d/changelog: Retroactively correct 2:2.4.0~rc0-1+exp1 entry.
  * Update d/patches with what's landed upstream since v2.5.0-rc1.
  * d/patches, d/rules: Pass $(LDFLAGS) when building fake_token_path.so and
    no longer silence blhc(1) for test files.
  * Move SSH token plugin stuff into new binary package 'cryptsetup-ssh'.
    That plugin is arguably not useful for everyone and we can save the
    'Depends: libssh-4' on cryptsetup-bin by moving cryptsetup-ssh(8) and
    libcryptsetup-token-ssh.so to a separate package. Since LUKS2 SSH token
    support was added after the Bullseye release, and since it is still in
    experimental stage, we don't let cryptsetup-bin or cryptsetup depend on
    the new binary package. Users who need that feature will need to install
    it manually.

cryptsetup (2:2.5.0~rc1-2) experimental; urgency=medium

  * localtest: Treat skipped tests as failure for full coverage.
  * d/watch: Add uversionmangle option for release candidates.
  * unit-wipe-test: Skip DIO tests when the file system doesn't support
    O_DIRECT. This is needed on the buildds where the source tree appears to
    be on a tmpfs.

cryptsetup (2:2.5.0~rc1-1) experimental; urgency=low

  * New upstream release candidate 2.5.0. Highlights include:
    + Remove cryptsetup-reencrypt(8) executable, use `cryptsetup reencrypt`
      instead (for both LUKS1 and LUKS2).
    + Split manual pages into per-action pages, for inst...


Changed in cryptsetup (Ubuntu Kinetic):
status: Fix Committed → Fix Released
Revision history for this message
Brian Murray (brian-murray) wrote :

Out of an abundance of caution, I upgraded cryptsetup on my laptop, which uses an encrypted / partition but not an older hash algorithm, and did not encounter any issues.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package cryptsetup - 2:2.4.3-1ubuntu1.1

---------------
cryptsetup (2:2.4.3-1ubuntu1.1) jammy; urgency=medium

  * d/initramfs/hooks/cryptroot: Include OpenSSL legacy.so for ripemd160 and
    whirlpool hash algorithms (LP: #1979159)

 -- Benjamin Drung <email address hidden> Thu, 04 Aug 2022 14:08:01 +0200

Changed in cryptsetup (Ubuntu Jammy):
status: Fix Committed → Fix Released
Revision history for this message
Brian Murray (brian-murray) wrote : Update Released

The verification of the Stable Release Update for cryptsetup has completed successfully and the package is now being released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
Jesse Johnson (holocronweaver) wrote (last edit ): Re: Cannot unlock encrypted root after upgrading to 22.04

I seem to have encountered this bug during an upgrade via the official upgrade pop-up notification. Has this fix in fact been moved to -updates, and would it be included in a regular upgrade?

I upgraded on another Ubuntu 20.04 desktop PC after it received the update notification, but could not unlock my encrypted root after reboot, instead receiving an error "Gave up waiting for root device, ubuntu--vg-root does not exist". The workaround via live boot + chroot did not seem to work, and neither did running update + dist-upgrade, though I was able to unlock manually via live boot. I was pressed for time so did not pursue it further and did a full reinstall since I had a backup.

I did not enable the jammy-proposed for this failed upgrade, which makes me suspect this fix is not being included in `do-release-upgrade`.

Revision history for this message
Jesse Johnson (holocronweaver) wrote :

Created a followup bug to verify the fix has in fact been released and there isn't another encrypted disk issue afoot: https://bugs.launchpad.net/ubuntu/+source/ubuntu-release-upgrader/+bug/1986868

Steve Langasek (vorlon)
summary: - Cannot unlock encrypted root after upgrading to 22.04
+ Cannot unlock encrypted root after upgrading to 22.04 due to use of non-
+ standard ciphers
Revision history for this message
Benjamin Drung (bdrung) wrote :

If you use `do-release-upgrade` or `update-manager` to upgrade your system from 20.04 to 22.04, you will get the fixed cryptsetup.

If you have cryptsetup >= 2:2.4.3-1ubuntu1.1 installed and the workaround in the bug description does not solve the boot issue for you, you experience a different bug. Please file a new bug report in this case.
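
A quick way to check which version is installed (the fixed versions are 2:2.4.3-1ubuntu1.1 on jammy and 2:2.5.0-1ubuntu1 on kinetic):
```
# Show installed and candidate versions of the relevant packages
apt policy cryptsetup cryptsetup-initramfs
```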

Revision history for this message
Thomas (survirtual) wrote :

The workaround I posted earlier is, in fact, a work around for this bug — in my case, it is for a server with an initramfs environ with dropbear to ssh into and no physical access.

Please don’t use such conclusive language that is false. There are all kinds of environments and circumstances, and bugs can have multiple dimensions. What I shared allowed me to access my system around this bug without interruption and surely can help others do the same, who have a similar environment (ubuntu server + remote decryption via ssh).

They then can upgrade the system to get a lasting fix.

Thanks for all the hard work.

Revision history for this message
Steve Langasek (vorlon) wrote : Re: [Bug 1979159] Re: Cannot unlock encrypted root after upgrading to 22.04 due to use of non-standard ciphers

On Sun, Aug 28, 2022 at 08:42:29PM -0000, Thomas wrote:
> The workaround I posted earlier is, in fact, a work around for this bug
> — in my case, it is for a server with an initramfs environ with dropbear
> to ssh into and no physical access.

No. That is a different bug than the one being reported here.

> Please don’t use such conclusive language that is false.

The language is conclusive because you are wrong. There is no way the
workaround you posted can address a problem with missing cipher support in
the openssl included in the initramfs. You should file a different bug
report for your issue, not pile on to a different bug report which is
unrelated to the issue you're experiencing.

Revision history for this message
Benjamin Drung (bdrung) wrote :

Thomas, my comment was about #1979159-47 from Jesse Johnson. Your workaround in #1979159-5 is quite general for this kind of symptom. The symptom described in this bug can be caused by multiple underlying issues. This bug report is only about exactly one underlying issue (missing support for non-standard ciphers in initramfs). If you or someone else have cryptsetup >= 2:2.4.3-1ubuntu1.1 installed and the workaround in the bug description does not solve the boot issue for you, you experience a different underlying issue. Please file a new bug report in this case to find this other underlying issue and fix it.

Revision history for this message
Thomas (survirtual) wrote :

Benjamin, I understand. All my symptoms are identical but it seems I do have cryptsetup >= 2:2.4.3-1ubuntu1.1 and still experience the issue, so it is not this bug. The workaround I posted earlier also requires some weird timing for it to work appropriately. Anyway, since I am not familiar with the standards here and was just looking to help other people access their system w/o a livecd, I'll just leave it at that.

Revision history for this message
Sruli (sruli) wrote :

This issue is now slowly creeping into all my machines, not on do-release-upgrade but with regular updates; my machines are all on 22.04. It started last week when I could not unlock my laptop. I thought I must be getting old and doing something wrong with the password and was going crazy. Two days ago my wife told me she couldn't unlock hers; I maintained she wasn't putting the password in correctly. Yesterday my daughter told me she couldn't unlock either, and it clicked that this must be an issue with a regular update.

Current kernel is 5.15.0.67; trying to boot from 5.15.0.60 did not help.

To resolve the issue, I chrooted from a live CD and added an initramfs hook for `/usr/lib/x86_64-linux-gnu/ossl-modules/legacy.so`.

On this particular machine the first install was 14.04,
`root@ubuntu:/# cat /var/log/installer/media-info
Ubuntu-GNOME 14.04.5 LTS "Trusty Tahr" - Release amd64 (20160803)`

LUKS header:

root@ubuntu:/# cryptsetup luksDump /dev/sda3 | grep -Ev $'^\t* *(UUID|Salt|Digest:|[ 0-9a-f]+$)'
LUKS header information for /dev/sda3

Version: 1
Cipher name: aes
Cipher mode: xts-plain64
Hash spec: sha1
Payload offset: 4096
MK bits: 256
MK digest: [...]
MK salt: [...
                   ...]
MK iterations: 52625

Key Slot 0: DISABLED
Key Slot 1: ENABLED
    Iterations: 2177726
                              [...]
    Key material offset: 264
    AF stripes: 4000
Key Slot 2: ENABLED
    Iterations: 1865792
                              [...]
    Key material offset: 520
    AF stripes: 4000
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

This is a huge issue; letting this creep in through regular system updates is bricking the system for many people.

Revision history for this message
Benjamin Drung (bdrung) wrote :

Sruli, please open a new bug report to analyze your issue. Your problem probably has a different underlying cause.

The output of `cryptsetup luksDump` shows that you use sha1 as the hash spec, and all hashes and ciphers look like the defaults, which should not need the legacy library. Can you check whether regenerating the initramfs from a live system, without adding legacy.so, would also fix the issue?
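
Roughly, regenerating the initramfs from a live session looks like this (all device and volume group names are placeholders; adjust them to the actual layout):
```
# Unlock the root device, mount the installed system, and rebuild its initramfs
sudo cryptsetup luksOpen /dev/sda3 sda3_crypt
sudo vgchange -ay                               # only needed if root is on LVM
sudo mount /dev/mapper/ubuntu--vg-root /mnt
sudo mount /dev/sda2 /mnt/boot                  # separate /boot partition, if any
for d in dev proc sys; do sudo mount --bind "/$d" "/mnt/$d"; done
sudo chroot /mnt update-initramfs -u -k all
```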

Revision history for this message
Matthias Haller (ebby) wrote :

I had the same problem with my focal system. The computer ran for a few days and meanwhile updates were installed. After a reboot, my password for the root partition was no longer accepted.

The solution that worked for me was to convert the LUKS header to version 2.
It worked with these instructions: https://gist.github.com/Edu4rdSHL/8f97eb1bab454fb2b348f1167cee7cd2
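
For reference, a LUKS1-to-LUKS2 header conversion generally boils down to the following (a sketch, not necessarily identical to the linked gist; the device name is a placeholder, and it must be run with the volume closed, e.g. from a live session, after backing up the header):
```
# Convert a LUKS1 header in place to LUKS2 (existing keyslots and data are kept)
sudo cryptsetup luksHeaderBackup /dev/sda3 --header-backup-file /root/sda3-luks1-header.img
sudo cryptsetup convert --type luks2 /dev/sda3
sudo cryptsetup luksDump /dev/sda3 | grep ^Version   # should now report 2
```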
