Rook mgr module crashes due to missing mgr.nfs

Bug #2003530 reported by Peter Sabaini
This bug affects 1 person
Affects         Status         Importance   Assigned to   Milestone
ceph (Ubuntu)   Fix Released   High         Unassigned
Jammy           Fix Released   High         Unassigned
Kinetic         Fix Released   High         Unassigned
Lunar           Fix Released   High         Unassigned

Bug Description

[Impact]

The rook mgr module crashes on installing the ceph-mgr-rook package (see the traceback from /var/log/syslog below). This is due to the missing ceph mgr module "nfs", which the rook mgr module depends on.

This makes the rook mgr module, which is required for integrating Ceph with the Rook storage orchestrator, unusable.

The proposed patch fixes this by shipping the nfs mgr module in the ceph-mgr-modules-core .deb, mirroring how upstream packages nfs as part of the ceph mgr system.
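
As context for reviewers, the change is purely a packaging one: the module sources that upstream ships under /usr/share/ceph/mgr/nfs get added to the ceph-mgr-modules-core file list. A minimal sketch of what that entry looks like (hypothetical excerpt; the actual patch may differ in detail):

# debian/ceph-mgr-modules-core.install (hypothetical excerpt)
usr/share/ceph/mgr/nfs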

Jan 17 16:39:18 devcontainer-269785 bash[247610]: debug 2023-01-17T16:39:18.008+0000 7f930419fdc0 -1 mgr[py] Module not found: 'rook'
Jan 17 16:39:18 devcontainer-269785 bash[247610]: debug 2023-01-17T16:39:18.008+0000 7f930419fdc0 -1 mgr[py] Traceback (most recent call last):
Jan 17 16:39:18 devcontainer-269785 bash[247610]: File "/usr/share/ceph/mgr/rook/__init__.py", line 5, in <module>
Jan 17 16:39:18 devcontainer-269785 bash[247610]: from .module import RookOrchestrator
Jan 17 16:39:18 devcontainer-269785 bash[247610]: File "/usr/share/ceph/mgr/rook/module.py", line 41, in <module>
Jan 17 16:39:18 devcontainer-269785 bash[247610]: from .rook_cluster import RookCluster
Jan 17 16:39:18 devcontainer-269785 bash[247610]: File "/usr/share/ceph/mgr/rook/rook_cluster.py", line 29, in <module>
Jan 17 16:39:18 devcontainer-269785 bash[247610]: from nfs.cluster import create_ganesha_pool
Jan 17 16:39:18 devcontainer-269785 bash[247610]: ModuleNotFoundError: No module named 'nfs'
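
On an affected system the missing module can be confirmed directly (illustrative check; the path matches the traceback above):

$ ls /usr/share/ceph/mgr/nfs
ls: cannot access '/usr/share/ceph/mgr/nfs': No such file or directory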

[Test plan]

The test requires a Ceph cluster. SSH to a system with a running ceph-mon service.

$ sudo ceph mgr module ls # verify: no rook mgr module
$ sudo apt-get -q install ceph-mgr-rook
$ sudo ceph -s # verify: no crashed modules
$ sudo ceph mgr module ls # verify: rook mgr module present and enabled
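
As an optional extra check, not part of the original plan, the fixed ceph-mgr-modules-core should now ship the nfs module files:

$ dpkg -L ceph-mgr-modules-core | grep /usr/share/ceph/mgr/nfs # verify: nfs module files listed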

[Where problems could occur]

The proposed patch only adds an additional Python module to the package, so regression potential should be low.

Issues could occur due to packaging bugs, such as missing dependencies for the nfs mgr module. Since the nfs module is currently missing entirely, shipping it should not cause any additional impact.


James Page (james-page)
Changed in ceph (Ubuntu Jammy):
status: New → Triaged
Changed in ceph (Ubuntu Kinetic):
status: New → Triaged
Changed in ceph (Ubuntu Lunar):
status: New → Triaged
Changed in ceph (Ubuntu Jammy):
importance: Undecided → High
Changed in ceph (Ubuntu Kinetic):
importance: Undecided → High
Changed in ceph (Ubuntu Lunar):
importance: Undecided → High
description: updated
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package ceph - 17.2.5-0ubuntu1

---------------
ceph (17.2.5-0ubuntu1) lunar; urgency=medium

  * New upstream point release (LP: #1998958).
  * d/p/py310-py-ssize-t-compat.patch,
    d/p/dc69033763cc116c6ccdf1f97149a74248691042.patch,
    d/p/a124f6c47b119a8741f347ea5a809f3fb48d6679.patch:
    Dropped, included upstream.
  * d/p/patch-out-exporter.patch: Don't build the node exporter as it
    fails on Jammy.

 -- Luciano Lo Giudice <email address hidden> Tue, 21 Feb 2023 16:36:30 +0000

Changed in ceph (Ubuntu Lunar):
status: Triaged → Fix Released
Revision history for this message
Steve Langasek (vorlon) wrote :

I have confirmed /usr/share/ceph/mgr/nfs is not shipped in any other packages in jammy or kinetic, so no Breaks/Replaces or other migration is needed.
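
One way to reproduce a check like this, assuming apt-file is installed with indexes for the target series (illustrative):

$ apt-file search /usr/share/ceph/mgr/nfs # expected: no hits prior to this fix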

Changed in ceph (Ubuntu Kinetic):
status: Triaged → Fix Committed
tags: added: verification-needed verification-needed-kinetic
Revision history for this message
Steve Langasek (vorlon) wrote : Please test proposed package

Hello Peter, or anyone else affected,

Accepted ceph into kinetic-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/ceph/17.2.6-0ubuntu0.22.10.1 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, what testing has been performed on the package and change the tag from verification-needed-kinetic to verification-done-kinetic. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-kinetic. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Changed in ceph (Ubuntu Jammy):
status: Triaged → Fix Committed
tags: added: verification-needed-jammy
Revision history for this message
Steve Langasek (vorlon) wrote :

Hello Peter, or anyone else affected,

Accepted ceph into jammy-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/ceph/17.2.6-0ubuntu0.22.04.1 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, what testing has been performed on the package and change the tag from verification-needed-jammy to verification-done-jammy. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-jammy. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Revision history for this message
Luciano Lo Giudice (lmlogiudice) wrote :

Verification for jammy-yoga:

2023-06-11 13:14:49.945748 | focal-medium | ======
2023-06-11 13:14:49.945821 | focal-medium | Totals
2023-06-11 13:14:49.945861 | focal-medium | ======
2023-06-11 13:14:49.945882 | focal-medium | Ran: 179 tests in 599.9393 sec.
2023-06-11 13:14:49.945894 | focal-medium | - Passed: 167
2023-06-11 13:14:49.945909 | focal-medium | - Skipped: 12
2023-06-11 13:14:49.945919 | focal-medium | - Expected Fail: 0
2023-06-11 13:14:49.945929 | focal-medium | - Unexpected Success: 0
2023-06-11 13:14:49.945939 | focal-medium | - Failed: 0
2023-06-11 13:14:49.945949 | focal-medium | Sum of execute time for each test: 1275.4787 sec.
2023-06-11 13:14:49.945973 | focal-medium |

logs can be seen at: http://10.245.164.110/t/openstack/build/7c9433263be74a018965e3980721007a/log/job-output.txt

Revision history for this message
Luciano Lo Giudice (lmlogiudice) wrote :

Verification for kinetic-zed:

2023-06-18 15:28:06.163590 | focal-medium |
2023-06-18 15:28:06.163688 | focal-medium | ======
2023-06-18 15:28:06.163706 | focal-medium | Totals
2023-06-18 15:28:06.163718 | focal-medium | ======
2023-06-18 15:28:06.163730 | focal-medium | Ran: 179 tests in 331.8641 sec.
2023-06-18 15:28:06.163756 | focal-medium | - Passed: 167
2023-06-18 15:28:06.163770 | focal-medium | - Skipped: 12
2023-06-18 15:28:06.163782 | focal-medium | - Expected Fail: 0
2023-06-18 15:28:06.163794 | focal-medium | - Unexpected Success: 0
2023-06-18 15:28:06.163806 | focal-medium | - Failed: 0
2023-06-18 15:28:06.163817 | focal-medium | Sum of execute time for each test: 716.6778 sec.

logs can be seen at: http://10.245.164.110/t/openstack/build/59b4161bf3834ea8987e6ec0e7ef7722/log/job-output.txt

tags: added: verification-done verification-done-jammy verification-done-kinetic
removed: verification-needed verification-needed-jammy verification-needed-kinetic
Revision history for this message
Brian Murray (brian-murray) wrote :

Was the test case in the bug description followed to verify that the fix worked? I didn't see anything about rook in one of the log files provided.

Revision history for this message
Andreas Hasenack (ahasenack) wrote :

As Brian said, this is missing execution of the test plan, and the package is thus blocked from releasing into updates.

Revision history for this message
Luciano Lo Giudice (lmlogiudice) wrote :

Hello everyone,

I've run the following test plan to confirm that the fix addresses this issue:

First, I've created a small ceph cluster with 3 mons and 3 OSDs using Ubuntu Jammy images and the following PPA that includes the fix: ppa:lmlogiudice/ceph-jammy-17.2.6

With that in place, we can ssh into one of the monitors and run the following:

sudo apt install ceph-mgr-rook

Afterwards, fetching the status via "sudo ceph status" shows that no modules have crashed, and "sudo ceph mgr module ls" shows that the rook mgr module is present and enabled.

In addition, there will be a directory under "/usr/share/ceph/mgr/nfs/" with all the needed bits. The absence of those files was causing the issue reported in this bug.
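
A quick way to confirm that on an installed system (illustrative; cluster.py is the file the original traceback failed to import):

$ ls /usr/share/ceph/mgr/nfs/cluster.py
/usr/share/ceph/mgr/nfs/cluster.py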

The same procedure can be run for Ubuntu Kinetic using the ppa: "ppa:lmlogiudice/ceph-kinetic-17.2.6", and I've likewise found no errors.

Revision history for this message
Luciano Lo Giudice (lmlogiudice) wrote :

Here's the output of the above commands:

sudo ceph mgr module ls:

MODULE
balancer on (always on)
crash on (always on)
devicehealth on (always on)
orchestrator on (always on)
pg_autoscaler on (always on)
progress on (always on)
rbd_support on (always on)
status on (always on)
telemetry on (always on)
volumes on (always on)
iostat on
nfs on
restful on
rook on
alerts -
influx -
insights -
localpool -
mirroring -
osd_perf_query -
osd_support -
prometheus -
selftest -
snap_schedule -
stats -
telegraf -
test_orchestrator -
zabbix -

sudo ceph status:

  cluster:
    id: b37a0c92-19c5-11ee-b1f5-8b04eccdb717
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum juju-01d100-1,juju-01d100-0,juju-01d100-2 (age 23m)
    mgr: juju-01d100-1(active, since 4m), standbys: juju-01d100-0, juju-01d100-2
    osd: 3 osds: 3 up (since 15m), 3 in (since 15m)

  data:
    pools: 1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage: 62 MiB used, 5.9 GiB / 6.0 GiB avail
    pgs: 1 active+clean

Output is identical on Kinetic

Revision history for this message
Chris Halse Rogers (raof) wrote :

@Luciano: Since (as I understand it) the bug is triggered when the package is installed, it seems the initial step of the test plan is important:
 $ sudo ceph mgr module ls # verify: no rook mgr module
Did you do this?

Also, testing on a PPA is insufficient for SRUs. We need the packages that we're going to be releasing to be tested - that means the packages in -proposed.

tags: added: verification-needed verification-needed-jammy verification-needed-kinetic
removed: verification-done verification-done-jammy verification-done-kinetic
Revision history for this message
Alan Baghumian (alanbach) wrote :

Hello There!

I have a cluster running the proposed packages since they have been released:

$ dpkg -l | grep 17.2.6 | awk '{print $2"\t\t"$3"\t\t"$4}'
ceph 17.2.6-0ubuntu0.22.04.1 amd64
ceph-base 17.2.6-0ubuntu0.22.04.1 amd64
ceph-common 17.2.6-0ubuntu0.22.04.1 amd64
ceph-mds 17.2.6-0ubuntu0.22.04.1 amd64
ceph-mgr 17.2.6-0ubuntu0.22.04.1 amd64
ceph-mgr-modules-core 17.2.6-0ubuntu0.22.04.1 all
ceph-mon 17.2.6-0ubuntu0.22.04.1 amd64
ceph-osd 17.2.6-0ubuntu0.22.04.1 amd64
ceph-volume 17.2.6-0ubuntu0.22.04.1 all
libcephfs2 17.2.6-0ubuntu0.22.04.1 amd64
librados2 17.2.6-0ubuntu0.22.04.1 amd64
libradosstriper1 17.2.6-0ubuntu0.22.04.1 amd64
librbd1 17.2.6-0ubuntu0.22.04.1 amd64
librgw2 17.2.6-0ubuntu0.22.04.1 amd64
libsqlite3-mod-ceph 17.2.6-0ubuntu0.22.04.1 amd64
python3-ceph-argparse 17.2.6-0ubuntu0.22.04.1 amd64
python3-ceph-common 17.2.6-0ubuntu0.22.04.1 all
python3-cephfs 17.2.6-0ubuntu0.22.04.1 amd64
python3-rados 17.2.6-0ubuntu0.22.04.1 amd64
python3-rbd 17.2.6-0ubuntu0.22.04.1 amd64
radosgw 17.2.6-0ubuntu0.22.04.1 amd64

$ sudo ceph mgr module ls
MODULE
balancer on (always on)
crash on (always on)
devicehealth on (always on)
orchestrator on (always on)
pg_autoscaler on (always on)
progress on (always on)
rbd_support on (always on)
status on (always on)
telemetry on (always on)
volumes on (always on)
iostat on
nfs on
restful on
alerts -
influx -
insights -
localpool -
mirroring -
osd_perf_query -
osd_support -
prometheus -
selftest -
snap_schedule -
stats -
telegraf -
test_orchestrator -
zabbix -

$ sudo ceph -s
  cluster:
    id: 6c2efd86-7423-11ed-97ec-2f3ef93079f7
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum juju-b096f0-88-lxd-0,juju-b096f0-90-lxd-0,juju-b096f0-92-lxd-0 (age 10h)
    mgr: juju-b096f0-88-lxd-0(active, since 4d), standbys: juju-b096f0-92-lxd-0, juju-b096f0-90-lxd-0
    osd: 8 osds: 8 up (since 2d), 8 in (since 2w)

  data:
    pools: 3 pools, 289 pgs
    objects: 169.40k objects, 492 GiB
    usage: 1.5 TiB used, 892 GiB / 2.3 TiB avail
    pgs: 289 active+clean

Installed ceph-mgr-rook on all Mon units:

$ juju run -a ceph-mon-ssd 'sudo apt-get -y install ceph-mgr-rook'

Check cluster Status:

$ sudo ceph -s
  cluster:
    id: 6c2efd86-7423-11ed-97ec-2f3ef93079f7
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum juju-b096f0-88-lxd-0,juju-b096f0-90-lxd-0,juju-b096f0-92-lxd-0 (age 10h)
    mgr: juju-b096f0-88-lxd-0(active, since 2m), standbys: juju-b096f0-90-lxd-0, juju-b096f0-92-lxd-0
    osd: 8 osds: 8 up (since 2d), 8 in (since 2w)

  data:
    pools: 3 pools, 289 pgs
    objects: 169.40k objects, 492 GiB
    usage: 1.5 TiB used, 892 GiB / 2.3 TiB avail
 ...


Revision history for this message
Andreas Hasenack (ahasenack) wrote :

Hi @alanbach,

thanks for the information you provided. It's certainly helpful to know you have a jammy installation running with the new packages without problems, but we still need the [test plan] execution from this bug's description to be run, in jammy and kinetic, as was agreed when the package was accepted into proposed.

Revision history for this message
Luciano Lo Giudice (lmlogiudice) wrote :

Hello everyone, and sorry about the noise before. Here's a test plan that I think should satisfy all the requirements.

First, we create a small ceph cluster. I used juju to add some machines with the following commands:

'juju add-machine --series=jammy'

Afterwards, we ssh into the target machines and added the -proposed archives:

$ cat /etc/apt/sources.list.d/ubuntu-jammy-proposed.list
deb http://archive.ubuntu.com/ubuntu jammy-proposed main multiverse restricted universe

We can verify that the 17.2.6 is going to be installed by running:

$ apt-cache policy ceph
ceph:
  Installed: 17.2.6-0ubuntu0.22.04.1
  Candidate: 17.2.6-0ubuntu0.22.04.1
  Version table:
 *** 17.2.6-0ubuntu0.22.04.1 500
        500 http://archive.ubuntu.com/ubuntu jammy-proposed/main amd64 Packages
        100 /var/lib/dpkg/status
     17.2.5-0ubuntu0.22.04.3 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages
        500 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages
     17.1.0-0ubuntu3 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu jammy/main amd64 Packages

With that in place, we deploy a small ceph cluster. I used 3 mons and 3 OSDs, but anything should work.

Once the cluster has been deployed we once again ssh into one of the target machines (in this case, one of the mons).

As a precaution, we can test that ceph is running the proposed package:

$ ceph-mon -v
ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)

Now, we ensure that the rook mgr module isn't installed or running:

$ sudo ceph mgr module ls

MODULE
balancer on (always on)
crash on (always on)
devicehealth on (always on)
orchestrator on (always on)
pg_autoscaler on (always on)
progress on (always on)
rbd_support on (always on)
status on (always on)
telemetry on (always on)
volumes on (always on)
iostat on
nfs on
restful on
alerts -
influx -
insights -
localpool -
mirroring -
osd_perf_query -
osd_support -
prometheus -
selftest -
snap_schedule -
stats -
telegraf -
test_orchestrator -
zabbix -

Then, we install the ceph-mgr-rook package by hand:

$ sudo apt install ceph-mgr-rook

Next, we enable the module:

$ sudo ceph mgr module enable rook

We then check that the ceph cluster is in a healthy state and no modules have crashed by running:

ubuntu@juju-9f12b1-ceph-0:~$ sudo ceph -s
  cluster:
    id: 026e3f56-1f5e-11ee-bf28-95ba6942eafd
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum juju-9f12b1-ceph-1,juju-9f12b1-ceph-2,juju-9f12b1-ceph-0 (age 9m)
    mgr: juju-9f12b1-ceph-2(active, since 6m), standbys: juju-9f12b1-ceph-1, juju-9f12b1-ceph-0
    osd: 3 osds: 3 up, 3 in

  data:
    pools: 0 pools, 0 pgs
    objects: ...


Revision history for this message
Luciano Lo Giudice (lmlogiudice) wrote :

Here's the test plan for Kinetic:

First, deploy a small ceph cluster. If using Juju, we can use something like:

'juju add-machine --series="kinetic"'

Add the -proposed archives: on every target machine, create the following file with these contents:

$ cat /etc/apt/sources.list.d/ubuntu-kinetic-proposed.list
deb http://archive.ubuntu.com/ubuntu kinetic-proposed main multiverse restricted universe

Verify that the version to test is the one that is going to be installed:

$ apt-cache policy ceph
ceph:
  Installed: (none)
  Candidate: 17.2.6-0ubuntu0.22.10.1
  Version table:
     17.2.6-0ubuntu0.22.10.1 500
        500 http://archive.ubuntu.com/ubuntu kinetic-proposed/main amd64 Packages
     17.2.5-0ubuntu0.22.10.3 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu kinetic-updates/main amd64 Packages
        500 http://security.ubuntu.com/ubuntu kinetic-security/main amd64 Packages
     17.2.0-0ubuntu4 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu kinetic/main amd64 Packages

Once the ceph cluster has been deployed successfully, we can ssh into one of the mons and test the rook module.

First, we verify that the rook module is not yet running:

$ sudo ceph mgr module ls

MODULE
balancer on (always on)
crash on (always on)
devicehealth on (always on)
orchestrator on (always on)
pg_autoscaler on (always on)
progress on (always on)
rbd_support on (always on)
status on (always on)
telemetry on (always on)
volumes on (always on)
iostat on
nfs on
restful on
alerts -
influx -
insights -
localpool -
mirroring -
osd_perf_query -
osd_support -
prometheus -
selftest -
snap_schedule -
stats -
telegraf -
test_orchestrator -
zabbix -

Then, we install and enable the module:

$ sudo apt install ceph-mgr-rook
$ sudo ceph mgr module enable rook

Verify that the cluster is healthy:

  cluster:
    id: c3ab9238-1f66-11ee-9277-31985965425a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum juju-233a7d-ceph-kinetic-0,juju-233a7d-ceph-kinetic-2,juju-233a7d-ceph-kinetic-1 (age 4m)
    mgr: juju-233a7d-ceph-kinetic-2(active, since 13s), standbys: juju-233a7d-ceph-kinetic-1, juju-233a7d-ceph-kinetic-0
    osd: 3 osds: 3 up, 3 in

  data:
    pools: 0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage: 0 B used, 0 B / 0 B avail
    pgs:
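
As an extra sanity check beyond the plan above, a failed mgr module would also surface in the health output (hedged example; a clean cluster simply reports OK):

$ sudo ceph health detail
HEALTH_OK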

Lastly, check that the rook module is up and running:

$ sudo ceph mgr module ls

MODULE
balancer on (always on)
crash on (always on)
devicehealth on (always on)
orchestrator on (always on)
pg_autoscaler on (always on)
progress on (always on)
rbd_support on (always on)
status on (always on)
telemetry ...


tags: added: verification-done-jammy verification-done-kinetic
removed: verification-needed verification-needed-jammy verification-needed-kinetic
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package ceph - 17.2.6-0ubuntu0.22.10.1

---------------
ceph (17.2.6-0ubuntu0.22.10.1) kinetic; urgency=medium

  [ Luciano Lo Giudice ]
  * New upstream point release (LP: #2018929).
  * d/p/*: Refresh.

  [ Peter Sabaini ]
  * Fix: add the mgr.nfs package to the core modules (LP: #2003530).

  [ James Page ]
  * d/p/CVE-2022-*: Drop security related patches, included in
    17.2.6 point release.
  * d/p/32bit-fixes.patch: rework size_t usage to avoid FTBFS on 32 bit
    architectures.

 -- Luciano Lo Giudice <email address hidden> Fri, 26 May 2023 15:41:23 +0100

Changed in ceph (Ubuntu Kinetic):
status: Fix Committed → Fix Released
Revision history for this message
Łukasz Zemczak (sil2100) wrote : Update Released

The verification of the Stable Release Update for ceph has completed successfully and the package is now being released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package ceph - 17.2.6-0ubuntu0.22.04.1

---------------
ceph (17.2.6-0ubuntu0.22.04.1) jammy; urgency=medium

  [ Luciano Lo Giudice ]
  * New upstream point release (LP: #2018929).
  * d/p/*: Refresh.

  [ Peter Sabaini ]
  * Fix: add the mgr.nfs package to the core modules (LP: #2003530).

  [ James Page ]
  * d/p/32bit-fixes.patch: rework size_t usage to avoid FTBFS on 32 bit
    architectures.
  * d/p/CVE-2022-*: Drop security related patches, included in release.

 -- Luciano Lo Giudice <email address hidden> Fri, 26 May 2023 15:42:09 +0100

Changed in ceph (Ubuntu Jammy):
status: Fix Committed → Fix Released