[SRU] ceph 12.2.7
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Ubuntu Cloud Archive | Invalid | Undecided | Unassigned | |
| Pike | Won't Fix | Medium | Unassigned | |
| Queens | Fix Released | Medium | Unassigned | |
| ceph (Ubuntu) | Invalid | Undecided | Unassigned | |
| Bionic | Fix Released | Medium | Unassigned | |
Bug Description
[Impact]
This release consists mostly of bug fixes, and we would like to make sure all of our supported customers have access to these improvements.
The update contains the following package updates:
* ceph 12.2.7
[Test Case]
The following SRU process was followed:
https:/
In order to avoid regressions for existing consumers, the OpenStack team will run their continuous integration tests against the packages in -proposed. A successful run of all available tests will be required before the proposed packages can be let into -updates.
The OpenStack team will be in charge of attaching the output summary of the executed tests. The OpenStack team members will not mark 'verification-done' until this has happened.
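For reference, verifying the candidate packages on a Bionic host typically means enabling the -proposed pocket and installing from it explicitly. A minimal sketch (the pin priority and mirror URL are illustrative; adapt to your environment):

```shell
# Enable the bionic-proposed pocket (assumes an Ubuntu 18.04 host)
echo "deb http://archive.ubuntu.com/ubuntu bionic-proposed main universe" | \
    sudo tee /etc/apt/sources.list.d/bionic-proposed.list

# Keep -proposed disabled by default; pull packages from it only on request
cat <<'EOF' | sudo tee /etc/apt/preferences.d/proposed-updates
Package: *
Pin: release a=bionic-proposed
Pin-Priority: 400
EOF

sudo apt update
# Install the ceph packages under test from -proposed
sudo apt install -t bionic-proposed ceph
```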
[Regression Potential]
In order to mitigate the regression potential, the results of the
aforementioned tests are attached to this bug.
[Original Bug Report]
Hi
I noticed the new ceph 12.2.7 release and no bug for it, so here is a draft for it.
Notes and changelog.
https:/
[Impact]
New upstream point release 12.2.7
This is the fifth bugfix release of the Luminous v12.2.x long-term stable release series. This release contains a range of bug fixes across all components of Ceph. We recommend that all users of the 12.2.x series update.
[Test Case]
See the changelog for the individual bugs, with numbers and links to them where available.
[Regression Potential]
Note again from the previously released version 12.2.3:
If someone updates and restarts any MDS (from jewel, or from luminous before 12.2.3, to 12.2.3 or later), then all MDSs not yet running the new version and not aware of feature bit 9 will suicide.
Operators may ignore the error messages and continue upgrading/
Reduce the number of ranks to 1 (ceph fs set <fs_name> max_mds 1), wait for all other MDSs to deactivate, leaving the one active MDS, upgrade the single active MDS, then upgrade/start the standbys. Finally, restore the previous max_mds.
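The single-active-MDS upgrade sequence described above can be sketched as a command sequence. This is an illustrative outline, not the exact SRU procedure: the filesystem name `cephfs`, the daemon id, and the final max_mds value of 2 are all assumptions to be adapted to the deployment.

```shell
# Shrink the filesystem to a single active MDS rank
ceph fs set cephfs max_mds 1

# Watch until only one MDS remains active (all other ranks deactivated)
ceph fs status cephfs

# Upgrade and restart the single active MDS first...
sudo apt install ceph-mds                  # on the active MDS host
sudo systemctl restart ceph-mds@mds-a      # daemon id is illustrative

# ...then upgrade and start the standby MDS daemons

# Once every MDS runs the new version, restore the previous rank count
ceph fs set cephfs max_mds 2               # use your original max_mds value
```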
Good testing is needed to make sure it is correct and stable compared with the previous version before moving forward.
Running earlier versions carries a higher risk of encountering issues that have already been corrected.
[Discussion]
Please correct this description when needed. Also thanks to all that are working on this.
SRU process: https:/
CVE References
description: updated
summary: changed from "[SRU] 12.2.7" to "[SRU] ceph 12.2.7"
Status changed to 'Confirmed' because the bug affects multiple users.