UC20 stuck at installing system on specific Intel NVME stick

Bug #1878374 reported by Gavin Lin
Affects  Status        Importance  Assigned to  Milestone
Snappy   Fix Released  Critical    Unassigned
snapd    Fix Released  Critical    Unassigned

Bug Description

UC20 gets stuck at installing the system on some Intel NUC systems, both with and without NVMe.
Tried systems with and without Secure Boot; both reproduce this issue.
Tried systems with and without hardware TPM 2.0; both reproduce this issue.

Screen capture:
https://drive.google.com/open?id=1BuyxqyjXucjedk2U6I4E5yqn_NABRmS9

Hardware information:
Intel NUC NUC7i3DNHE
Intel NUC NUC7i5DNHE
Intel NUC NUC7i7DNH

Installation steps:
1. Enable TPM and Secure Boot in BIOS on NUC
2. Boot the NUC to 20.04 live image
3. Clear TPM: `echo 5 | sudo tee /sys/class/tpm/tpm0/ppi/request`
4. dd the image into NVMe: `sudo xzcat ubuntu-core-20-amd64.img.xz | sudo dd of=/dev/nvme0n1 bs=32M ; sync`
5. Reboot the NUC
6. The BIOS will then clear the TPM and boot into the image on the NVMe drive
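
Optional sanity check after step 3, before rebooting (a sketch, assuming the kernel exposes the standard TPM PPI sysfs files; paths may differ on some systems):

cat /sys/class/tpm/tpm0/ppi/request    # should read back 5 while the clear request is pending
cat /sys/class/tpm/tpm0/ppi/version    # PPI interface version reported by the firmware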

Revision history for this message
Ubuntu Foundations Team Bug Bot (crichton) wrote :

Thank you for taking the time to report this bug and helping to make Ubuntu better. It seems that your bug report is not filed about a specific source package though, rather it is just filed against Ubuntu in general. It is important that bug reports be filed about source packages so that people interested in the package can find the bugs about it. You can find some hints about determining what package your bug might be about at https://wiki.ubuntu.com/Bugs/FindRightPackage. You might also ask for help in the #ubuntu-bugs irc channel on Freenode.

To change the source package that this bug is filed about visit https://bugs.launchpad.net/ubuntu/+bug/1878374/+editstatus and add the package name in the text box next to the word Package.

[This is an automated message. I apologize if it reached you inappropriately; please just reply to this message indicating so.]

tags: added: bot-comment
Changed in snapd:
importance: Undecided → Critical
tags: added: core20
Revision history for this message
Dimitri John Ledkov (xnox) wrote :

Which image did you use? Either the build timestamp or the checksum of the compressed .xz image would help.

affects: ubuntu → snappy
Changed in snappy:
status: New → Incomplete
Revision history for this message
Chris Wayne (cwayne) wrote :

This was on the 20200512.3 image AFAIK

Revision history for this message
Claudio Matsuoka (cmatsuoka) wrote :

How strong is the NVMe-failure correlation? Does the same image work on systems without NVMe? Did it work on any system with NVMe?

Revision history for this message
Michael Vogt (mvo) wrote :

Some more questions:
- How exactly is this installed? Is the image flashed to the NVMe and then the system booted?
- It would be great to get logs from the system, but given how early this happens I suspect that will be hard.

Revision history for this message
Michael Vogt (mvo) wrote :

I tried to reproduce this on an AMD system (ASRock B450 board and Ryzen CPU with a WD NVMe) and the install works for me.

I did see the warning "systemd-udevd[1]: nvme0n1p1: Failed to process device, ignoring: File exists" during the install. I also noticed that we do not show kernel messages for a long time on my system: I see the first messages roughly 30-40 seconds after booting and it looks like the system is hanging, but eventually it did boot. Note that this is on a fast CPU and a fast NVMe.

Revision history for this message
Gavin Lin (gavin.lin) wrote :

Here are the SKUs we have in the lab that have been tried with the UC20 20200512.3 image:
NUC7i3DNHE
NUC7i5DNHE
NUC7i7DNH
NUC7PJYH
NUC7CJYH
This happens on the i3/i5/i7 SKUs, which have NVMe; there is no NVMe in the other two SKUs.
The difference is not just NVMe, but the first failing message is about NVMe, so it's the first suspect.

I installed them by booting into a live system and dd'ing the 20200512.3 image onto the NVMe.

The last message in the screenshot is at 4492 seconds, so it has been stuck for more than an hour; this is on the i7 SKU.
I'll try again and wait longer, but if this is the case, we need more output to let the user know it's still running.

Paul Larson (pwlars)
tags: added: uc20
removed: core20
Revision history for this message
Ian Johnson (anonymouse67) wrote :

Hi, Gavin/others in TPE with access to these systems,

Can you flash the image onto the NVME, then do the following to get a debug shell and run some commands for us?

1. Flash the image onto the nvme
2. Boot from the NVMe, pressing the Esc key as soon as the BIOS is done, to open the grub menu (note you may need to hard-reset the device and try again if you don't get the grub menu; there is no danger in hitting Esc multiple times, so I just hit Esc over and over until the grub menu shows up)
3. Hit "e" to edit the kernel cmdline for the "Install ..." option in grub
4. At the end of the line that begins with "chainloader (loop)/kernel.efi ..." add the following:

dangerous systemd.debug-shell=1

5. Hit F10 to boot with these kernel cmdline options
6. Once the image gets to the "Installing the system, ..." message, press the Ctrl + Alt + F9 keys to switch TTY shells and get the debug shell. Note that it might actually be Ctrl + Alt + F8 keys, I don't recall right now.
7. With a debug shell, please run the following commands:

systemctl status snapd
snap changes
snap tasks 1
journalctl -e --no-pager > journald.log
dmesg > dmesg.log

And if possible, upload the dmesg.log and journald.log files to this bug report. If the device has network/internet at this point, you can do something like this:

cat journald.log | nc termbin.com 9999

If there is no networking, you should still be able to plug in a USB flash drive and mount it with something like:

mkdir /tmp/usb
mount /dev/disk/by-label/$NAME_OF_FLASH_DRIVE_LABEL /tmp/usb
cp journald.log /tmp/usb/journald.log

Thanks

Revision history for this message
Claudio Matsuoka (cmatsuoka) wrote :

Re: item 6 above, the key combination to open the debug shell is ALT+F9.

Revision history for this message
Michael Vogt (mvo) wrote :

Once https://github.com/CanonicalLtd/subiquity/pull/771 has landed we should get better failure output.

Michael Vogt (mvo)
Changed in snappy:
importance: Undecided → Critical
Changed in snapd:
status: New → Incomplete
Revision history for this message
Vic Liu (zongminl) wrote :

This is the log captured right after the system started "Installing the system...", following the instructions in comment #8.

I plan to upload another log tarball after the failure "Failed to start Automatically fetch and run repair assertions" shows up later.

Revision history for this message
Vic Liu (zongminl) wrote :

BTW, I used 20200518.2 to capture logs in comment #11

Revision history for this message
Michael Vogt (mvo) wrote :

Hey Vic! Thanks for this! The log of "snap tasks 2" would be amazing.

Revision history for this message
Vic Liu (zongminl) wrote :
Michael Vogt (mvo)
summary: - UC20 stuck at installing system on some Intel NUC systems with nvme
+ UC20 stuck at installing system on some Intel NUC systems
description: updated
Revision history for this message
Vic Liu (zongminl) wrote : Re: UC20 stuck at installing system on some Intel NUC systems
Revision history for this message
Vic Liu (zongminl) wrote :

Logs in comments #11, #14 and #15 were captured on an Intel NUC7i7DNH with an NVMe drive, which has both TPM and Secure Boot enabled.

Revision history for this message
Vic Liu (zongminl) wrote :

This is a log captured from another Intel NUC, NUC7CJYH, which is equipped with an fTPM (Intel PTT) and a SATA SSD. When reinstalling, it shows the same symptom described here.
However, with the TPM cleared, it could finish reinstalling without a problem.

Revision history for this message
Vic Liu (zongminl) wrote :

I tested on another Dell Latitude laptop with TPM enabled and a SATA SSD and got the same result as the Intel NUC NUC7CJYH described in comment #17: UC20 could be installed without a problem the first time, and a TPM clear would be required if reinstalling is desired.
I will continue to try some other devices with NVMe drives.

Revision history for this message
Vic Liu (zongminl) wrote :

This bug does not affect a Dell laptop equipped with a TOSHIBA XG60 NVMe drive and TPM.

Revision history for this message
Claudio Matsuoka (cmatsuoka) wrote :

Thanks for all the information. It is expected that the system doesn't reinstall on a machine with TPM enabled unless the TPM was cleared; however, the filesystem errors on the machines with NVMe are strange.

On a machine with NVMe that has the bug, could you provide the output of the following commands?

# sfdisk -l /dev/nvme0n1
# lsblk -b /dev/nvme0n1

Revision history for this message
Vic Liu (zongminl) wrote :
Revision history for this message
Vic Liu (zongminl) wrote :
Revision history for this message
Vic Liu (zongminl) wrote :

After replacing the NVMe in the Dell laptop described in comment #20 with the one from the Intel NUC, I can now observe this failure on it, so this is quite obviously related to this NVMe drive.

NVMe: Intel SSD 600P Series - SSDPEKKW128G7

Revision history for this message
Ian Johnson (anonymouse67) wrote :

Hi Vic, on a system with this NVMe installed, can you get a shell into the system, run udevadm info --export-db, and send us the results as well? Thank you

Revision history for this message
Vic Liu (zongminl) wrote :
Revision history for this message
Ian Johnson (anonymouse67) wrote :

Vic, with comment #26, was that from the new Dell laptop system or the Intel NUC? Can you insert it into the other system and attach the udevadm output from that one as well?

I ask because in the lsblk output from comment #22, the major/minor numbers for the nvme device do not match with the major/minor numbers from comment #26.

In comment #22, there seemed to be no partition on the nvme disk with major/minor of 259:1 which is very odd, but in comment #26, the bios boot partition has major/minor of 259:1. The output from comment #26 seems to be consistent with our understanding of the disk layout/partitioning/kernel device numbering, but the device layout from comment #22 is very unexpected and does not match our understanding at all.

Revision history for this message
Vic Liu (zongminl) wrote :

Comment #22 was captured on NUC7i7DNH-dawson-003 and comment #26 was captured on NUC7i5DNHE-dawson-002; neither was captured on the Dell laptop in comment #24, sorry for the confusion.

It's strange to me that there was a different result between #22 and #26; I don't recall doing anything special while capturing #26.
I captured logs again today with the latest beta build 20200521.2 on both NUC7i7DNH-dawson-003 and NUC7i5DNHE-dawson-002, and they seem to be the same as comment #22: there is no partition on the NVMe disk with 259:1.

Please refer to the attachments below.

Revision history for this message
Vic Liu (zongminl) wrote :
Revision history for this message
Vic Liu (zongminl) wrote :
Revision history for this message
Vic Liu (zongminl) wrote :

It's worth noting that after an Intel NVMe H10 HBRPEKNX0202A was swapped into NUC7i5DNHE-dawson-002, this bug could no longer be observed.

Vic Liu (zongminl)
description: updated
description: updated
Revision history for this message
Vic Liu (zongminl) wrote :

To clarify:
The Intel SSD 600P Series - SSDPEKKW128G7 in comment #24 is the original drive in both NUC7i5DNHE-dawson-002 and NUC7i5DNHE-dawson-003.

The Intel NVMe H10 HBRPEKNX0202A in comment #32 is a different NVMe I used to check whether this bug is really specific to the Intel SSD 600P Series - SSDPEKKW128G7.

Revision history for this message
Ian Johnson (anonymouse67) wrote :

Vic, when you have a chance, can you upload the udevadm info output for the NUC when it has the Intel NVMe H10 HBRPEKNX0202A installed?

I'm starting to strongly suspect that this bug is due to buggy behavior in the Intel SSD 600P Series SSDPEKKW128G7.

summary: - UC20 stuck at installing system on some Intel NUC systems
+ UC20 stuck at installing system on specific Intel NVME stick
Revision history for this message
Claudio Matsuoka (cmatsuoka) wrote :

I found a bug in Red Hat's Bugzilla mentioning data corruption problems when using the Intel 600P NVMe with XFS: https://bugzilla.redhat.com/show_bug.cgi?id=1402533

In that case updating the firmware to version 121C using issdfut_2.2.2 fixed the issue. From comment #54:

"I applied 121C firmware to my 600p NVMe SSD. BTW, the bootable ISO image does not work with UEFI machines, at least not with mine. I ran the firmware update from Windows 10 instead. The flashing went extremely fast; less than a second. After the firmware update, I performed a fresh install of Fedora 25 with LVM+XFS as I originally did. I have since been able to boot up, reboot, update, reboot several times without the data corruption issue I had before. This is looking pretty positive after 6 months of waiting!!"

Vic, could we check the firmware version on our faulty stick and update it if it's older than 121C?
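
One way to read the firmware revision from a running system (a sketch, assuming the nvme-cli package is installed and the stick is /dev/nvme0; adjust the device node as needed):

sudo nvme list                              # the "FW Rev" column shows the firmware revision
sudo nvme id-ctrl /dev/nvme0 | grep -w fr   # "fr" is the firmware revision field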

Revision history for this message
Claudio Matsuoka (cmatsuoka) wrote :

This is also interesting: https://www.anandtech.com/show/12742/latest-windows-10-version-incompatible-with-intel-ssd-600p

"It is not yet clear what is causing the incompatibility between the Intel 600p and Windows 10 version 1803, but the most likely culprit is a bug in the 600p's controller firmware. Intel has issued two firmware updates for the 600p family, each fixing several bugs including some that had the potential to cause data loss or corruption. The most common theme in the release notes for 600p firmware updates seems to be power management troubles, but those are unlikely to prevent Windows 10 v1803 from even installing to the 600p."

Revision history for this message
Chris Wayne (cwayne) wrote :

Using Testflinger, we were able to verify that both focal desktop and focal server can boot on these systems, which is rather strange. It appears that this is specifically a UC20 problem rather than a generic kernel/20.04 problem.

Revision history for this message
Ian Johnson (anonymouse67) wrote :

With cwayne's help I reserved one of these NUCs with the NVMe and ran focal server, and found that the NVMe there has up-to-date firmware. Specifically, it has firmware version PSF121C (from the nvme tool at https://github.com/linux-nvme/nvme-cli/), and according to this doc, https://www.intel.com/content/www/us/en/support/articles/000017245/memory-and-storage.html?wapkw=ssd+firmware, 121C is the most recent firmware available for the Intel 600p series.

As Claudio pointed out, though, there still appear to be issues with the most recent firmware on this SSD in e.g. Windows 10, so our issue here still feels quite hardware-specific, and the firmware seems to be the most likely culprit in my mind.

Revision history for this message
Claudio Matsuoka (cmatsuoka) wrote :

Our ext4 filesystem creation parameters include -T default -O -metadata_csum -O uninit_bg -L label. I wonder if the uninit_bg feature is having a bad interaction with the 600P, since classic works on the same NVMe stick (my classic system has metadata_csum instead of uninit_bg).

We could test this by installing the stick as a secondary storage device in a different system and trying to create a filesystem with those parameters, to see if we can reproduce the inline-data inode error.

Revision history for this message
Vic Liu (zongminl) wrote :

These logs were captured on NUC7i5DNHE-dawson-002 with the Intel NVMe H10 HBRPEKNX0202A, per the request in comment #34.

Revision history for this message
Vic Liu (zongminl) wrote :

Here is the result I got from installing the Intel 600P in a Dell laptop as secondary storage and running mkfs.ext4:

u@u-Precision-7740:~$ sudo mkfs.ext4 /dev/nvme1n1 -T default -O -metadata_csum -O uninit_bg -L label
mke2fs 1.45.5 (07-Jan-2020)
Found a gpt partition table in /dev/nvme1n1
Proceed anyway? (y,N) y
Discarding device blocks: done
Creating filesystem with 31258710 4k blocks and 7815168 inodes
Filesystem UUID: 47e41fc1-c676-4259-b9ee-00cf14cd8292
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
ext2fs_update_bb_inode: Cannot iterate data blocks of an inode containing inline data while setting bad block inode

Revision history for this message
Claudio Matsuoka (cmatsuoka) wrote :

Thanks Vic, does it also happen if you remove the -O -metadata_csum -O uninit_bg parameters?

Revision history for this message
Vic Liu (zongminl) wrote :

This is the result of creating the filesystem without the -O -metadata_csum -O uninit_bg parameters:

u@u-Precision-7740:~$ sudo mkfs.ext4 /dev/nvme1n1 -T default -L label
mke2fs 1.45.5 (07-Jan-2020)
Discarding device blocks: done
Creating filesystem with 31258710 4k blocks and 7815168 inodes
Filesystem UUID: 100f835c-a92f-4cae-b4e7-ee3db16a4f3a
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done

Revision history for this message
Ian Johnson (anonymouse67) wrote :

Thanks Vic, can you try wiping the NVME, and then running with the following options and seeing what happens each time?

1. sudo mkfs.ext4 /dev/nvme1n1 -T default -O uninit_bg -O metadata_csum,64bit -L label
2. sudo mkfs.ext4 /dev/nvme1n1 -T default -O metadata_csum,64bit -L label
3. sudo mkfs.ext4 /dev/nvme1n1 -T default -O metadata_csum -L label
4. sudo mkfs.ext4 /dev/nvme1n1 -T default -O uninit_bg -L label

Thanks

Changed in snappy:
status: Incomplete → Confirmed
Changed in snapd:
status: Incomplete → Confirmed
Revision history for this message
Vic Liu (zongminl) wrote :

u@u-Precision-7740:~$ sudo mkfs.ext4 /dev/nvme1n1 -T default -O uninit_bg -O metadata_csum,64bit -L label
mke2fs 1.45.5 (07-Jan-2020)
Found a dos partition table in /dev/nvme1n1
Proceed anyway? (y,N) y
Discarding device blocks: done
Creating filesystem with 31258710 4k blocks and 7815168 inodes
Filesystem UUID: 4d9f13b4-8187-48ba-868d-a6b5963ac5d8
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done

//****************************************************************//

u@u-Precision-7740:~$ sudo mkfs.ext4 /dev/nvme1n1 -T default -O metadata_csum,64bit -L label
mke2fs 1.45.5 (07-Jan-2020)
Found a dos partition table in /dev/nvme1n1
Proceed anyway? (y,N) y
Discarding device blocks: done
Creating filesystem with 31258710 4k blocks and 7815168 inodes
Filesystem UUID: 4a10c039-e1e9-46f4-8355-016909a2f1b3
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done

//****************************************************************//

u@u-Precision-7740:~$ sudo mkfs.ext4 /dev/nvme1n1 -T default -O metadata_csum -L label
mke2fs 1.45.5 (07-Jan-2020)
Found a dos partition table in /dev/nvme1n1
Proceed anyway? (y,N) y
Discarding device blocks: done
Creating filesystem with 31258710 4k blocks and 7815168 inodes
Filesystem UUID: c9c55d3e-f1f6-4af8-b672-8f29c6c07ee7
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done

//****************************************************************//

u@u-Precision-7740:~$ sudo mkfs.ext4 /dev/nvme1n1 -T default -O uninit_bg -L label
mke2fs 1.45.5 (07-Jan-2020)
Found a dos partition table in /dev/nvme1n1
Proceed anyway? (y,N) y
Discarding device blocks: done
Creating filesystem with 31258710 4k blocks and 7815168 inodes
Filesystem UUID: daf6373e-e57f-4475-a92d-d1d483a3723e
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done

Revision history for this message
Vic Liu (zongminl) wrote :

I took another look at the parameters described in comment #39, and something doesn't seem right: in "-T default -O -metadata_csum -O uninit_bg -L label" there is an additional '-' in front of the metadata_csum parameter.

So I tried again without that '-' in front of metadata_csum, and mkfs works fine; please compare the different results below:

u@u-Precision-7740:~$ sudo mkfs.ext4 /dev/nvme1n1 -T default -O -metadata_csum -O uninit_bg -L label
mke2fs 1.45.5 (07-Jan-2020)
Found a dos partition table in /dev/nvme1n1
Proceed anyway? (y,N) y
Discarding device blocks: done
Creating filesystem with 31258710 4k blocks and 7815168 inodes
Filesystem UUID: 0d0c9637-39bf-4119-b615-3ee91072b5e4
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
ext2fs_update_bb_inode: Cannot iterate data blocks of an inode containing inline data while setting bad block inode

//****************************************************************//

u@u-Precision-7740:~$ sudo mkfs.ext4 /dev/nvme1n1 -T default -O metadata_csum -O uninit_bg -L label
mke2fs 1.45.5 (07-Jan-2020)
Found a dos partition table in /dev/nvme1n1
Proceed anyway? (y,N) y
Discarding device blocks: done
Creating filesystem with 31258710 4k blocks and 7815168 inodes
Filesystem UUID: 3367de12-e248-47cc-bb4c-776cefdab8f3
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done

Revision history for this message
Claudio Matsuoka (cmatsuoka) wrote :

Thanks for all the tests. The parameter, as used in the UC20 installer, has the `-` character in front of metadata_csum, meaning that we are disabling this feature. This was needed on UC16, but it seems clear now that it's directly related to the problem we're seeing with the 600P.

        mkfsArgs := []string{
                "mkfs.ext4",
                // default usage type
                "-T", "default",
                // disable metadata checksum, which were unsupported in Ubuntu
                // 16.04 and Ubuntu Core 16 systems and would lead to a boot
                // failure if enabled
                "-O", "-metadata_csum",
                // allow uninitialized block groups
                "-O", "uninit_bg",
        }

Revision history for this message
Łukasz Zemczak (sil2100) wrote :

If the problem is that "-O -metadata_csum" is used, I think we can just remove this line (or remove the '-', if that's what it takes). From my initial analysis, the reason we had to disable it for UC16 and xenial was that xenial had e2fsprogs 1.42, and support for metadata_csum was only added in 1.43 (which is in bionic and later already). I will double-confirm this with Steve, but I think we can safely get rid of it now.

Revision history for this message
Vic Liu (zongminl) wrote :

This issue could still be reproduced with uc20-beta-20200527.5

Revision history for this message
Michael Vogt (mvo) wrote :

We landed a change today that removes the "-O -metadata_csum" option, which should unbreak this device.
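
For reference, with the "-O -metadata_csum" pair dropped from the argument list quoted above, the effective invocation would be roughly (a sketch, assuming the remaining options are unchanged; label and device are placeholders):

mkfs.ext4 -T default -O uninit_bg -L <label> <device>

This matches the plain "-O uninit_bg" variant Vic tested above, which completed successfully on the 600P.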

Changed in snapd:
status: Confirmed → Fix Committed
Changed in snappy:
status: Confirmed → Fix Committed
Revision history for this message
Vic Liu (zongminl) wrote :

Tested with uc20-beta-20200529.3; this bug is no longer observed on NUC7i5DNHE-dawson-002.

$ sudo sfdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 119.25 GiB, 128035676160 bytes, 250069680 sectors
Disk model: INTEL SSDPEKKW128G7
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A293C335-B31A-45B1-B9CF-253A26EA9EDC

Device           Start       End   Sectors   Size Type
/dev/nvme0n1p1    2048      4095      2048     1M BIOS boot
/dev/nvme0n1p2    4096   2461695   2457600   1.2G EFI System
/dev/nvme0n1p3 2461696   3997695   1536000   750M Linux filesystem
/dev/nvme0n1p4 3997696 250069646 246071951 117.3G Linux filesystem

Changed in snapd:
milestone: none → 2.45
Revision history for this message
Zygmunt Krynicki (zyga) wrote :

The snapd snap 2.45 has been released to the stable channel; marking as released.

Changed in snapd:
status: Fix Committed → Fix Released
Changed in snappy:
status: Fix Committed → Fix Released