zfs on root fails with grub syntax error with multidisks pools

Bug #1848856 reported by jpb
This bug affects 14 people
Affects                           Status        Importance  Assigned to              Milestone
grub2 (Ubuntu)                    Fix Released  Medium      Jean-Baptiste Lallement
  Focal                           Fix Released  Medium      Jean-Baptiste Lallement
grubzfs-testsuite (Ubuntu)        Fix Released  Medium      Jean-Baptiste Lallement
  Focal                           Fix Released  Medium      Jean-Baptiste Lallement

Bug Description

[Description]
When a pool is created on several devices (for example with cache or log on separate devices, or as a mirror or raidz), and grub-probe queries the target to report the device attached to the boot directory, it reports all the devices that make up the pool, one per line. The result is an error in 10_linux_zfs that generates an invalid grub configuration file. Only the case of 1 pool = 1 device was considered.

[Test Case]
1. Create a mirrored pool
$ zpool create <pool> mirror /dev/Xda1 /dev/Xda2

2. Run update-grub

Expected result:
The grub configuration is generated successfully.

Actual result:
The generated grub configuration file is incomplete and its syntax is invalid.
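
For example, with /boot living on the mirrored pool, grub-probe reports one device per line, which is what trips the script (device names illustrative):

$ grub-probe --target=device /boot
/dev/Xda1
/dev/Xda2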

[Regression Potential]
Low. The patch takes the first device returned by grub-probe, which is the first data device of the mirror.

[Original Description]
At the end of the upgrade from 19.04 to 19.10, the post-installation run of update-grub reports:

Syntax error at line 185
Syntax errors are detected in generated GRUB config file.
Ensure that there are no errors in /etc/default/grub
and /etc/grub.d/* files or please file a bug report with
/boot/grub/grub.cfg.new file attached.
run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 1
dpkg: error processing package linux-image-5.3.0-18-generic (--configure):
 installed linux-image-5.3.0-18-generic package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
 friendly-recovery
 grub-efi-amd64
 grub-efi
 grub-efi-amd64-signed
 shim-signed
 linux-image-5.3.0-18-generic

The system used https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS to add zfs on root to a 19.04 system.

The syntax error in grub.cfg.new is an extra } on line 185. However, comparing the grub.cfg.new to the previously generated grub.cfg under 19.04, there is a significant quantity of configuration missing.

Manually running update-grub generates the same error. /etc/default/grub is the only file changed from the default installation (to enable zswap), and it was not modified just before the upgrade.

The error is reported during the processing of /etc/grub.d/10_linux_zfs which is dated October 11. I attempted the upgrade on 10/18 and have done multiple updates to get the latest kernel and remove old kernels prior to the upgrade. So, I believe the problem is with one of the upgrade modules.

Revision history for this message
jpb (jbrown10) wrote :

Looking closer at the 19.04 grub.cfg and the post-19.10 grub.cfg.new, I see that the 19.04 version of update-grub was using 10_linux instead of 10_linux_zfs to generate the configuration.

Revision history for this message
jpb (jbrown10) wrote :

One more piece of information. I have 3 zfs pools defined:
rpool for my root
dpool for my data (virtualization and other)
bpool for my external usb backup device

So, my bpool is different from the new 19.10 bpool that supports the boot process. I don't know if this is a problem, but thought I'd mention it since it duplicates a pool name.

Revision history for this message
jpb (jbrown10) wrote :

I modified 10_linux_zfs to comment out the set -e and to define pkgdatadir and the GRUB_* variables from /etc/default/grub, then ran it to see the output.
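
A minimal sketch of that kind of local modification (paths and values illustrative, not from the report):

    # set -e                      # commented out so errors don't abort the run
    pkgdatadir=/usr/share/grub    # normally injected by grub-mkconfig
    . /etc/default/grub           # defines the GRUB_* variables the script expects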

Notice at the end the double brace, and the missing initrd value and linux image, even though they were found at the beginning when the script correctly identified the root dataset.

jpb@explorer:~/test$ sudo ./10_linux_zfs
Found linux image: vmlinuz-5.0.0-29-generic in rpool/ROOT/ubuntu
Found initrd image: initrd.img-5.0.0-29-generic in rpool/ROOT/ubuntu
Found linux image: vmlinuz-5.0.0-31-generic in rpool/ROOT/ubuntu
Found initrd image: initrd.img-5.0.0-31-generic in rpool/ROOT/ubuntu
Found linux image: vmlinuz-5.0.0-32-generic in rpool/ROOT/ubuntu
Found initrd image: initrd.img-5.0.0-32-generic in rpool/ROOT/ubuntu
Found linux image: vmlinuz-5.3.0-18-generic in rpool/ROOT/ubuntu
Found initrd image: initrd.img-5.3.0-18-generic in rpool/ROOT/ubuntu
Found linux image: vmlinuz-5.0.0-23-generic in rpool/ROOT/ubuntu@pyznap_2019-08-18_14:11:36_monthly
Found initrd image: initrd.img-5.0.0-23-generic in rpool/ROOT/ubuntu@pyznap_2019-08-18_14:11:36_monthly

********** removed a ton of snapshots that were listed which found the linux image and initrd image. *****************

Found linux image: vmlinuz-5.3.0-18-generic in rpool/ROOT/ubuntu@apt2019-10-19_08.52.43--1w
Found initrd image: initrd.img-5.3.0-18-generic in rpool/ROOT/ubuntu@apt2019-10-19_08.52.43--1w

function gfxmode {
 set gfxpayload="${1}"
 if [ "${1}" = "keep" ]; then
  set vt_handoff=vt.handoff=1
 else
  set vt_handoff=
 fi
}
if [ "${recordfail}" != 1 ]; then
  if [ -e ${prefix}/gfxblacklist.txt ]; then
    if hwmatch ${prefix}/gfxblacklist.txt 3; then
      if [ ${match} = 0 ]; then
        set linux_gfx_mode=keep
      else
        set linux_gfx_mode=text
      fi
    else
      set linux_gfx_mode=text
    fi
  else
    set linux_gfx_mode=keep
  fi
else
  set linux_gfx_mode=text
fi
export linux_gfx_mode
menuentry 'Ubuntu 19.10' --class ubuntu --class gnu-linux --class gnu --class os ${menuentry_id_option} 'gnulinux-rpool/ROOT/ubuntu-' {
 recordfail
 load_video
 gfxmode ${linux_gfx_mode}
 insmod gzio
 if [ "${grub_platform}" = xen ]; then insmod xzio; insmod lzopio; fi
 insmod part_gpt
 insmod zfs
 set root='hd2,gpt1'
 if [ x$feature_platform_search_hint = xy ]; then
   search --no-floppy --fs-uuid --set=root --hint-bios=hd2,gpt1 --hint-efi=hd2,gpt1 --hint-baremetal=ahci2,gpt1 ba04856b80ac4244
 else
   search --no-floppy --fs-uuid --set=root ba04856b80ac4244
 fi
 linux root=ZFS=rpool/ROOT/ubuntu ro intel_iommu=on iommu=pt quiet splash intel_iommu=on iommu=pt rootdelay=3 zswap.enabled=1 zswap.compressor=lz4 zswap.zpool=z3fold ${vt_handoff}
 initrd
}
}

Revision history for this message
jpb (jbrown10) wrote :

zsys is currently not installed.

Revision history for this message
jpb (jbrown10) wrote :

I'm getting closer to finding the problem. It looks like in the get_dataset_info() function in 10_linux_zfs, the contents of initrd_list, kernel_list, and last_booted_kernel are not being echoed back to the caller. I don't know why yet; I haven't fully grasped the shell script.

Revision history for this message
Norberto Bensa (nbensa) wrote :

Is your rpool mirrored?

In a test environment, if I detach one of the drives, I can run update-grub. Reattaching the drive breaks update-grub.

I'm currently stuck in 19.04 because of this bug.

# zpool status

  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 2h59m with 0 errors on Sun Oct 13 03:23:12 2019
config:

        NAME          STATE   READ WRITE CKSUM
        zroot         ONLINE     0     0     0
          mirror-0    ONLINE     0     0     0
            sda3      ONLINE     0     0     0
            sdb3      ONLINE     0     0     0

Revision history for this message
jpb (jbrown10) wrote :

Yes, my rpool is mirrored.
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
 still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
 the pool may no longer be accessible by software that does not support
 the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 0 days 05:45:25 with 0 errors on Sun Oct 13 06:09:31 2019
config:

 NAME                                                STATE   READ WRITE CKSUM
 rpool                                               ONLINE     0     0     0
   mirror-0                                          ONLINE     0     0     0
     wwn-0x5000cca24ce19a84-part1                    ONLINE     0     0     0
     wwn-0x5000c5004e60c802-part1                    ONLINE     0     0     0
 logs
   nvme-eui.00000000010000004ce00018dd8c9084-part2   ONLINE     0     0     0
 cache
   nvme0n1p4                                         ONLINE     0     0     0

errors: No known data errors

In 10_linux_zfs, there is an if statement that echoes data:

    if [ -n "${initrd_list}" -a -n "${kernel_list}" ]; then
        echo "${dataset}\t${is_zsys}\t${machine_id}\t${pretty_name}\t${last_used}\t${initrd_device}\t${initrd_list}\t${kernel_list}\t${last_booted_kernel}"
    else
        grub_warn "didn't find any valid initrd or kernel."
    fi

The problem is that at execution time, the last 3 fields are dropped even though they have data.

+ [ -n /ROOT/ubuntu@/boot/initrd.img-5.3.0-18-generic|/ROOT/ubuntu@/boot/initrd.img-5.0.0-32-generic|/ROOT/ubuntu@/boot/initrd.img-5.0.0-31-generic|/ROOT/ubuntu@/boot/initrd.img-5.0.0-29-generic -a -n /ROOT/ubuntu@/boot/vmlinuz-5.3.0-18-generic|/ROOT/ubuntu@/boot/vmlinuz-5.0.0-32-generic|/ROOT/ubuntu@/boot/vmlinuz-5.0.0-31-generic|/ROOT/ubuntu@/boot/vmlinuz-5.0.0-29-generic ]
+ echo rpool/ROOT/ubuntu\t-\t6d41e97f07794e0b9d409db9a99529a5\tUbuntu 19.10\t1571542278\t/dev/sdc1

You can see from the expanded debug output of the run (set -xv) that both initrd_list and kernel_list contained the kernel and initrd data, but inside the if it gets dropped. That is causing the generated grub.cfg.new to be missing the kernel, initrd, and the advanced submenu entries that would normally be generated.
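
A tiny shell demo of how an embedded newline in one field can split a tab-separated record like this (illustrative, not from the script):

    dev="$(printf '/dev/sdc1\n/dev/sda1')"    # a value with an embedded newline, like multi-line grub-probe output
    printf '%s\t%s\t%s\n' "${dev}" "kernels" "initrds" | head -n1
    # prints only "/dev/sdc1" -- everything after the newline lands on a later line and is lost to a single-line parser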

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in grub2 (Ubuntu):
status: New → Confirmed
Revision history for this message
jpb (jbrown10) wrote :

My current workaround is to manually maintain the grub.cfg in /boot/grub. Using the version that was generated under 19.04, I have simply updated the kernel and initrd entries to reflect the new modules.

This still means that any package I install that needs to regenerate the grub.cfg will fail. So far that is only related to cleanup of old kernels, and I expect the same will be true when new kernels are added.

Not ideal, but I haven't found the root of the problem with the 10_linux_zfs script.

Revision history for this message
Rex Tsai (chihchun) wrote :

I believe the invalid grub menu is caused by a stray "}" when there is only a main section.

Revision history for this message
jpb (jbrown10) wrote : Re: [Bug 1848856] Re: Upgrade from 19.04 to 19.10 with zfs on root fails with grub syntax error

Hi Rex - you essentially removed the line that set at_least_one_entry=1.
Yes, that removes an extra "}" but if you look at the generated script, the
linux line specifies no kernel image and the initrd line includes no file.
This will result in an unbootable system.

 linux root=ZFS=rpool/ROOT/ubuntu ro intel_iommu=on iommu=pt quiet splash
intel_iommu=on iommu=pt rootdelay=3 zswap.enabled=1 zswap.compressor=lz4
zswap.zpool=z3fold ${vt_handoff}
initrd
}

So, I believe the problem is deeper than a simple syntax error; there is
content missing. Additionally, when you look at the debug output you'll
see there were other kernels installed which should have gone under a
submenu. Content is being truncated or not carried through. This might be
a dash problem -- I can't tell.

But removing the extra "}" does not resolve the issue.
Thanks,
Jeff

On Sun, Oct 20, 2019 at 9:35 AM Rex Tsai <email address hidden> wrote:

> I believe the invalid grub menu is caused by wrong "}", when there is
> only a main section.
>
> ** Patch added: "1848856.debdiff"
> https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1848856/+attachment/5298611/+files/1848856.debdiff

Revision history for this message
Rex Tsai (chihchun) wrote :

I have a similar setup, an SSD as a cache for the root pool. The problem is that grub-probe lists multiple disks for the boot folder, which breaks the /etc/grub.d/10_linux_zfs script.

10_linux_zfs expects one disk, but the following command returns several lines:
/usr/sbin/grub-probe --target=device /boot

Revision history for this message
jpb (jbrown10) wrote : Re: Upgrade from 19.04 to 19.10 with zfs on root fails with grub syntax error

True, grub-probe will produce several lines. On my system it lists the two drives of my mirror first, followed by my log device and then the cache -- the same order as zpool status rpool:

grub-probe --target=device /boot
/dev/sdc1
/dev/sda1
/dev/nvme0n1p2
/dev/nvme0n1p4

I'm not seeing in 10_linux_zfs where having multiple target devices would cause the dropping of the kernel and initrd from what was being generated. It looks to me like data is missing -- lost during the echo action of get_dataset_info().

Revision history for this message
Ubuntu Foundations Team Bug Bot (crichton) wrote :

The attachment "1848856.debdiff" seems to be a debdiff. The ubuntu-sponsors team has been subscribed to the bug report so that they can review and hopefully sponsor the debdiff. If the attachment isn't a patch, please remove the "patch" flag from the attachment, remove the "patch" tag, and if you are member of the ~ubuntu-sponsors, unsubscribe the team.

[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issue please contact him.]

tags: added: patch
tags: added: rls-ff-incoming
Revision history for this message
jpb (jbrown10) wrote :

Problem identified. The variable initrd_device contains a \r at the end. When it is concatenated with the subsequent variables, it effectively truncates the data that follows in the echoed return.

Preceding the if statement, I inserted the following:

    # remove the trailing \r from the variable
    initrd_device=$(echo ${initrd_device} | tr -d '\r')

This removed the trailing \r, allowing the full set of data to be returned.

I tested this fix on my own system by modifying the 10_linux_zfs script in /etc/grub.d. The grub.cfg is now successfully generated and apt/dpkg are able to perform an update-grub.

10_linux_zfs attached.

Revision history for this message
Norberto Bensa (nbensa) wrote :

I tested the fix proposed by jpb in #21 and it works for me.

Revision history for this message
Didier Roche-Tolomelli (didrocks) wrote :

Thanks for digging into this and finding the root cause!

I'm really wondering what is causing this additional \r on that variable compared to a standard installation with multiple kernels and initrds, which doesn't get this \r. Do you have any specific grub configuration in /etc/default/grub* or anything in /boot which could lead to this? It could be mirror-related, where grub-probe --target=device returns this \r. What's the value of $initrd_device?

The reason is that I would like to add this case to the testsuite to ensure we don't regress it in the future, and so I'm trying to find the root cause (we can mock grub-probe in our testsuite to return exactly what you got).

Revision history for this message
jpb (jbrown10) wrote :

The \r is an invisible carriage-return character, commonly produced by an echo. However, here we are executing grub-probe --target=device /boot.

The code in 10_linux_zfs:
 initrd_device=$(${grub_probe} --target=device "${boot_dir}")

The results are the same as above. For the specific entry, it is the first drive in my mirror:
/dev/sdc1 (first drive as defined in my mirror)
/dev/sda1 (second drive as defined in my mirror)
/dev/nvme0n1p2 (zfs logs)
/dev/nvme0n1p4 (zfs cache)

Notice that the output from grub-probe has newlines. Newlines are usually \n but may be \r\n. I don't know what grub-probe is doing and didn't look to see how it emits them, but its output would be where the \r comes from (again, it isn't visible).

I have nothing special or different from the stock Ubuntu /etc/default/grub, other than the zswap entries I added, and those don't affect this.

The underlying cause may be that, because I have a mirrored root using zfs mirroring, multiple devices show up and that causes the \r to be returned. It's just a guess.

So, to test, you'd want grub-probe to return multiple devices, and the easiest way is simply to have a mirror for the rpool.
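
One quick way to inspect the probe output for hidden characters such as \r (illustrative):

    grub-probe --target=device /boot | od -c | head

od -c prints newlines as \n and carriage returns as \r, making any hidden characters visible.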

Mathew Hodson (mhodson)
Changed in grub2 (Ubuntu):
importance: Undecided → Medium
Changed in grub2 (Ubuntu):
status: Confirmed → Triaged
assignee: nobody → Jean-Baptiste Lallement (jibel)
Revision history for this message
Jean-Baptiste Lallement (jibel) wrote :

I'll strip the \r as a safety measure but I cannot reproduce this issue.

I created a mirror on 2 devices, with log and cache devices on separate partitions of a third disk like your setup, and the line separator is always a newline character (0x0a) without a carriage return (0x0d).
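
For reference, a layout like that can be created in one command (device names illustrative):

    zpool create <pool> mirror /dev/Xda1 /dev/Xdb1 log /dev/Xdc1 cache /dev/Xdc2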

Revision history for this message
Jean-Baptiste Lallement (jibel) wrote :

Actually, I think '\r' is a red herring. The problem is that if several devices are used for a pool (raid, log or cache), grub-probe returns several lines. Multiple lines are not handled properly when we generate the line of metadata used to create the menu. The patch is to take the first device returned by grub-probe.
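
A minimal sketch of that approach (not necessarily the exact patch):

    # keep only the first device grub-probe reports for the boot directory
    initrd_device="$(${grub_probe} --target=device "${boot_dir}" | head -n1)"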

Revision history for this message
jpb (jbrown10) wrote :

Agreed. It is a multi-line problem (which by its nature includes a '\r') not being handled. Happy to test the solution on my system.

scnaifeh (scnaifeh)
Changed in grub2 (Ubuntu Focal):
status: Triaged → Fix Released
Changed in grub2 (Ubuntu Focal):
status: Fix Released → Triaged
status: Triaged → In Progress
summary: - Upgrade from 19.04 to 19.10 with zfs on root fails with grub syntax
- error
+ zfs on root fails with grub syntax error with multidisk pools
Changed in grub2 (Ubuntu Eoan):
importance: Undecided → Medium
Changed in grubzfs-testsuite (Ubuntu Eoan):
importance: Undecided → Medium
Changed in grubzfs-testsuite (Ubuntu Focal):
importance: Undecided → Medium
Changed in grub2 (Ubuntu Eoan):
assignee: nobody → Jean-Baptiste Lallement (jibel)
Changed in grubzfs-testsuite (Ubuntu Eoan):
assignee: nobody → Jean-Baptiste Lallement (jibel)
Changed in grubzfs-testsuite (Ubuntu Focal):
assignee: nobody → Jean-Baptiste Lallement (jibel)
Changed in grub2 (Ubuntu Eoan):
status: New → Triaged
Changed in grubzfs-testsuite (Ubuntu Eoan):
status: New → Triaged
Changed in grubzfs-testsuite (Ubuntu Focal):
status: New → Triaged
summary: - zfs on root fails with grub syntax error with multidisk pools
+ zfs on root fails with grub syntax error with multidisks pools
description: updated
Revision history for this message
Darrin Burger (aenigma1372) wrote :

I also tested the fix proposed by jpb in #21 and can confirm that it worked for me as well.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grubzfs-testsuite - 0.4.6

---------------
grubzfs-testsuite (0.4.6) focal; urgency=medium

  [ Jean-Baptiste Lallement ]
  [ Didier Roche ]
  * Test cases for:
    - Handle the case where grub-probe returns several devices for a single
      pool (LP: #1848856).
    - Do not crash on invalid fstab and report the invalid entry.
      (LP: #1849347)
    - When a pool fails to import, catch and display the error message and
      continue with other pools. Import all the pools in readonly mode so we
      can import other pools with unsupported features (LP: #1848399)

 -- Jean-Baptiste Lallement <email address hidden> Mon, 18 Nov 2019 11:38:20 +0100

Changed in grubzfs-testsuite (Ubuntu Focal):
status: Triaged → Fix Released
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub2 - 2.04-1ubuntu14

---------------
grub2 (2.04-1ubuntu14) focal; urgency=medium

  * debian/patches/ubuntu-zfs-enhance-support.patch:
    - Handle the case where grub-probe returns several devices for a single
      pool (LP: #1848856). Thanks jpb for the report and the proposed patch.
    - Add savedefault to non-recovery entries (LP: #1850202). Thanks Deltik
      for the patch.
    - Do not crash on invalid fstab and report the invalid entry.
      (LP: #1849347) Thanks Deltik for the patch.
    - When a pool fails to import, catch and display the error message and
      continue with other pools. Import all the pools in readonly mode so we
      can import other pools with unsupported features (LP: #1848399) Thanks
      satmandu for the investigation and the proposed patch

 -- Jean-Baptiste Lallement <email address hidden> Mon, 18 Nov 2019 11:22:43 +0100

Changed in grub2 (Ubuntu Focal):
status: In Progress → Fix Released
Revision history for this message
Jean-Baptiste Lallement (jibel) wrote :

debdiff to backport the fixes from focal to eoan

Revision history for this message
Norberto Bensa (nbensa) wrote :

What is needed for a fix for 19.10?

Revision history for this message
isden (isden) wrote :

> What is needed for a fix for 19.10?

The patch to 10_linux_zfs from "grub2_2.04-1ubuntu12.1_1ubuntu12.2.debdiff" seems to be working fine on 19.10.

Revision history for this message
Norberto Bensa (nbensa) wrote :

I know it works. jpb proposed the first patch, and I tested it (#22).

But what I'm waiting for is an "official" fix :-)

Revision history for this message
Brian Elliott Finley (finley) wrote :

On Ubuntu 19.10, with ZFS root and zfs-auto-snapshot, /etc/grub.d/10_linux_zfs was scanning every single snapshot whenever update-grub was invoked.

This is a simple patch that just exits the get_dataset_info() function if the dataset is a snapshot. I could be missing something, but I don't expect we want to be finding kernel & initrd images on snapshots under normal circumstances.
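
A minimal sketch of such a guard at the top of get_dataset_info() (assuming the function receives the dataset name in ${dataset}; snapshot names always contain "@"):

    case "${dataset}" in
        *@*) return ;;    # dataset is a snapshot; skip it
    esac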

Revision history for this message
Jean-Baptiste Lallement (jibel) wrote :

@finley, this is a different issue. It'll be easier to track in a separate report. Could you file one please?

Thanks.

Revision history for this message
Jade Sailor (jsailor) wrote :

@jibel, any chance of an SRU for this?

Revision history for this message
Simon Quigley (tsimonq2) wrote :

19.10 is EOL, unsubscribed sponsors and removed the series.

Please resubscribe ~ubuntu-sponsors if a patch is still needed.

no longer affects: grub2 (Ubuntu Eoan)
no longer affects: grubzfs-testsuite (Ubuntu Eoan)
Revision history for this message
piersh (piersh) wrote :

This problem still exists in 20.04.

Revision history for this message
Ryan C. Underwood (nemesis-icequake) wrote :

Problem still exists on 23.10.
