Deployed a small Ceph cluster with 3 OSDs and did the following to verify.

1) Ran the following script to get the initial package versions:

#!/bin/bash
for unit in ceph-mon/0 ceph-osd/0 ceph-osd/1 ceph-osd/2; do
    echo "Versions on ${unit}:"
    juju ssh ${unit} sudo 'dpkg -l | grep ceph'
    echo "lsb_release on ${unit}:"
    juju ssh ${unit} lsb_release -a
done

Its output is:

Versions on ceph-mon/0:
ii  ceph                   15.2.3-0ubuntu0.20.04.1  amd64  distributed storage and file system
ii  ceph-base              15.2.3-0ubuntu0.20.04.1  amd64  common ceph daemon libraries and management tools
ii  ceph-common            15.2.3-0ubuntu0.20.04.1  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-mds               15.2.3-0ubuntu0.20.04.1  amd64  metadata server for the ceph distributed file system
ii  ceph-mgr               15.2.3-0ubuntu0.20.04.1  amd64  manager for the ceph distributed file system
ii  ceph-mgr-modules-core  15.2.3-0ubuntu0.20.04.1  all    ceph manager modules which are always enabled
ii  ceph-mon               15.2.3-0ubuntu0.20.04.1  amd64  monitor server for the ceph storage system
ii  ceph-osd               15.2.3-0ubuntu0.20.04.1  amd64  OSD server for the ceph storage system
ii  libcephfs2             15.2.3-0ubuntu0.20.04.1  amd64  Ceph distributed file system client library
ii  python3-ceph-argparse  15.2.3-0ubuntu0.20.04.1  amd64  Python 3 utility libraries for Ceph CLI
ii  python3-ceph-common    15.2.3-0ubuntu0.20.04.1  all    Python 3 utility libraries for Ceph
ii  python3-cephfs         15.2.3-0ubuntu0.20.04.1  amd64  Python 3 libraries for the Ceph libcephfs library
lsb_release on ceph-mon/0:
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.1 LTS
Release:        20.04
Codename:       focal
Versions on ceph-osd/0:
ii  ceph                   15.2.3-0ubuntu0.20.04.1  amd64  distributed storage and file system
ii  ceph-base              15.2.3-0ubuntu0.20.04.1  amd64  common ceph daemon libraries and management tools
ii  ceph-common            15.2.3-0ubuntu0.20.04.1  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-mds               15.2.3-0ubuntu0.20.04.1  amd64  metadata server for the ceph distributed file system
ii  ceph-mgr               15.2.3-0ubuntu0.20.04.1  amd64  manager for the ceph distributed file system
ii  ceph-mgr-modules-core  15.2.3-0ubuntu0.20.04.1  all    ceph manager modules which are always enabled
ii  ceph-mon               15.2.3-0ubuntu0.20.04.1  amd64  monitor server for the ceph storage system
ii  ceph-osd               15.2.3-0ubuntu0.20.04.1  amd64  OSD server for the ceph storage system
ii  libcephfs2             15.2.3-0ubuntu0.20.04.1  amd64  Ceph distributed file system client library
ii  python3-ceph-argparse  15.2.3-0ubuntu0.20.04.1  amd64  Python 3 utility libraries for Ceph CLI
ii  python3-ceph-common    15.2.3-0ubuntu0.20.04.1  all    Python 3 utility libraries for Ceph
ii  python3-cephfs         15.2.3-0ubuntu0.20.04.1  amd64  Python 3 libraries for the Ceph libcephfs library
lsb_release on ceph-osd/0:
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.1 LTS
Release:        20.04
Codename:       focal
Versions on ceph-osd/1:
ii  ceph                   15.2.3-0ubuntu0.20.04.1  amd64  distributed storage and file system
ii  ceph-base              15.2.3-0ubuntu0.20.04.1  amd64  common ceph daemon libraries and management tools
ii  ceph-common            15.2.3-0ubuntu0.20.04.1  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-mds               15.2.3-0ubuntu0.20.04.1  amd64  metadata server for the ceph distributed file system
ii  ceph-mgr               15.2.3-0ubuntu0.20.04.1  amd64  manager for the ceph distributed file system
ii  ceph-mgr-modules-core  15.2.3-0ubuntu0.20.04.1  all    ceph manager modules which are always enabled
ii  ceph-mon               15.2.3-0ubuntu0.20.04.1  amd64  monitor server for the ceph storage system
ii  ceph-osd               15.2.3-0ubuntu0.20.04.1  amd64  OSD server for the ceph storage system
ii  libcephfs2             15.2.3-0ubuntu0.20.04.1  amd64  Ceph distributed file system client library
ii  python3-ceph-argparse  15.2.3-0ubuntu0.20.04.1  amd64  Python 3 utility libraries for Ceph CLI
ii  python3-ceph-common    15.2.3-0ubuntu0.20.04.1  all    Python 3 utility libraries for Ceph
ii  python3-cephfs         15.2.3-0ubuntu0.20.04.1  amd64  Python 3 libraries for the Ceph libcephfs library
lsb_release on ceph-osd/1:
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.1 LTS
Release:        20.04
Codename:       focal
Versions on ceph-osd/2:
ii  ceph                   15.2.3-0ubuntu0.20.04.1  amd64  distributed storage and file system
ii  ceph-base              15.2.3-0ubuntu0.20.04.1  amd64  common ceph daemon libraries and management tools
ii  ceph-common            15.2.3-0ubuntu0.20.04.1  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-mds               15.2.3-0ubuntu0.20.04.1  amd64  metadata server for the ceph distributed file system
ii  ceph-mgr               15.2.3-0ubuntu0.20.04.1  amd64  manager for the ceph distributed file system
ii  ceph-mgr-modules-core  15.2.3-0ubuntu0.20.04.1  all    ceph manager modules which are always enabled
ii  ceph-mon               15.2.3-0ubuntu0.20.04.1  amd64  monitor server for the ceph storage system
ii  ceph-osd               15.2.3-0ubuntu0.20.04.1  amd64  OSD server for the ceph storage system
ii  libcephfs2             15.2.3-0ubuntu0.20.04.1  amd64  Ceph distributed file system client library
ii  python3-ceph-argparse  15.2.3-0ubuntu0.20.04.1  amd64  Python 3 utility libraries for Ceph CLI
ii  python3-ceph-common    15.2.3-0ubuntu0.20.04.1  all    Python 3 utility libraries for Ceph
ii  python3-cephfs         15.2.3-0ubuntu0.20.04.1  amd64  Python 3 libraries for the Ceph libcephfs library
lsb_release on ceph-osd/2:
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.1 LTS
Release:        20.04
Codename:       focal
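As an aside, the same spot check can be done without a script via 'juju run', which executes a command on every unit of an application (as root, so no sudo is needed); a sketch, assuming Juju 2.x syntax (Juju 3.x renamed this to 'juju exec'):

# hypothetical alternative to the script above: one command per application
juju run --application ceph-mon 'dpkg -l | grep ceph'
juju run --application ceph-osd 'dpkg -l | grep ceph'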
2) Checked that the cluster is healthy and in quorum:

$ juju ssh ceph-mon/0 sudo -s
root@juju-6d1caa-ceph15-9-20-0:/home/ubuntu# ceph -s
  cluster:
    id:     8355544e-f76e-11ea-9385-3be64722e92b
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum juju-6d1caa-ceph15-9-20-0 (age 12m)
    mgr: juju-6d1caa-ceph15-9-20-0(active, since 11m)
    osd: 3 osds: 3 up (since 9m), 3 in (since 9m)

  data:
    pools:   2 pools, 129 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     129 active+clean

3) Then enabled the -proposed pocket and upgraded the ceph packages, on all 4 units:

sudo sh -c "echo 'deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -cs)-proposed restricted main multiverse universe' >> /etc/apt/sources.list.d/proposed-repositories.list"
sudo apt update
sudo apt -t $(lsb_release -cs)-proposed install ceph -y

Checked the package versions on the ceph-mon/0 unit after the upgrade:

$ dpkg -l | grep ceph
ii  ceph                   15.2.3-0ubuntu0.20.04.2  amd64  distributed storage and file system
ii  ceph-base              15.2.3-0ubuntu0.20.04.2  amd64  common ceph daemon libraries and management tools
ii  ceph-common            15.2.3-0ubuntu0.20.04.2  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-mds               15.2.3-0ubuntu0.20.04.2  amd64  metadata server for the ceph distributed file system
ii  ceph-mgr               15.2.3-0ubuntu0.20.04.2  amd64  manager for the ceph distributed file system
ii  ceph-mgr-modules-core  15.2.3-0ubuntu0.20.04.2  all    ceph manager modules which are always enabled
ii  ceph-mon               15.2.3-0ubuntu0.20.04.2  amd64  monitor server for the ceph storage system
ii  ceph-osd               15.2.3-0ubuntu0.20.04.2  amd64  OSD server for the ceph storage system
ii  libcephfs2             15.2.3-0ubuntu0.20.04.2  amd64  Ceph distributed file system client library
ii  python3-ceph-argparse  15.2.3-0ubuntu0.20.04.2  amd64  Python 3 utility libraries for Ceph CLI
ii  python3-ceph-common    15.2.3-0ubuntu0.20.04.2  all    Python 3 utility libraries for Ceph
ii  python3-cephfs         15.2.3-0ubuntu0.20.04.2  amd64  Python 3 libraries for the Ceph libcephfs library
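A note on the -proposed pocket: to keep a later plain 'apt upgrade' from pulling in unrelated packages from -proposed while it is enabled, an apt preferences pin can be added; a minimal sketch of such a pin (the file name is arbitrary):

sudo tee /etc/apt/preferences.d/proposed-pin <<'EOF'
Package: *
Pin: release a=focal-proposed
Pin-Priority: 400
EOF

With a pin priority below 500, packages from -proposed are not installed automatically but can still be requested explicitly, e.g. with 'apt -t focal-proposed install ceph' as above.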
4) Restarted ceph-mon and ceph-mgr first, on ceph-mon/0, with:

$ sudo systemctl restart ceph-mon.target && sudo systemctl restart ceph-mgr.target

5) Checked that ceph-mon and ceph-mgr are back up and running:

$ systemctl status ceph-mon.target
● ceph-mon.target - ceph target allowing to start/stop all ceph-mon@.service instances at once
     Loaded: loaded (/lib/systemd/system/ceph-mon.target; enabled; vendor preset: enabled)
     Active: active since Tue 2020-09-15 17:24:21 UTC; 1min 18s ago
Sep 15 17:24:21 juju-6d1caa-ceph15-9-20-0 systemd[1]: Reached target ceph target allowing to start/stop all ceph-mon@.service instances at once.

$ systemctl status ceph-mgr.target
● ceph-mgr.target - ceph target allowing to start/stop all ceph-mgr@.service instances at once
     Loaded: loaded (/lib/systemd/system/ceph-mgr.target; enabled; vendor preset: enabled)
     Active: active since Tue 2020-09-15 17:24:21 UTC; 1min 23s ago
Sep 15 17:24:21 juju-6d1caa-ceph15-9-20-0 systemd[1]: Reached target ceph target allowing to start/stop all ceph-mgr@.service instances at once.

6) Checked that the cluster state is OK:

$ ceph -s
  cluster:
    id:     8355544e-f76e-11ea-9385-3be64722e92b
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum juju-6d1caa-ceph15-9-20-0 (age 3m)
    mgr: juju-6d1caa-ceph15-9-20-0(active, since 3m)
    osd: 3 osds: 3 up (since 7m), 3 in (since 71m)

  data:
    pools:   2 pools, 129 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     129 active+clean

7) Set the 'noout' flag to avoid OSDs dropping out of the cluster during the upgrade:

$ juju run-action --wait ceph-mon/leader set-noout
unit-ceph-mon-0:
  UnitId: ceph-mon/0
  id: "1"
  results:
    Stderr: |
      noout is set
    message: Ceph osd noout has been set
  status: completed
  timing:
    completed: 2020-09-15 17:34:35 +0000 UTC
    enqueued: 2020-09-15 17:34:33 +0000 UTC
    started: 2020-09-15 17:34:33 +0000 UTC

8) Then repeated the upgrade process on all 3 OSD units by repeating step (3) on each of them (a scripted version is sketched below).
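For reference, step (8) can be scripted so the commands from step (3) are replayed on each OSD unit; an untested sketch (the quoting is the fiddly part; also note the set-noout/unset-noout charm actions are effectively wrappers around the standard 'ceph osd set noout' / 'ceph osd unset noout' commands):

# replay the -proposed enablement and upgrade from step (3) on every OSD unit
for osd in 0 1 2; do
    juju ssh ceph-osd/$osd 'sudo sh -c "echo deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -cs)-proposed restricted main multiverse universe >> /etc/apt/sources.list.d/proposed-repositories.list" && sudo apt update && sudo apt -t $(lsb_release -cs)-proposed install ceph -y'
done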
9) Checked the versions of the packages on all OSDs:

$ for osd in {0..2}; do juju ssh ceph-osd/$osd 'sudo dpkg -l | grep ceph; lsb_release -a'; done
ii  ceph                   15.2.3-0ubuntu0.20.04.2  amd64  distributed storage and file system
ii  ceph-base              15.2.3-0ubuntu0.20.04.2  amd64  common ceph daemon libraries and management tools
ii  ceph-common            15.2.3-0ubuntu0.20.04.2  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-mds               15.2.3-0ubuntu0.20.04.2  amd64  metadata server for the ceph distributed file system
ii  ceph-mgr               15.2.3-0ubuntu0.20.04.2  amd64  manager for the ceph distributed file system
ii  ceph-mgr-modules-core  15.2.3-0ubuntu0.20.04.2  all    ceph manager modules which are always enabled
ii  ceph-mon               15.2.3-0ubuntu0.20.04.2  amd64  monitor server for the ceph storage system
ii  ceph-osd               15.2.3-0ubuntu0.20.04.2  amd64  OSD server for the ceph storage system
ii  libcephfs2             15.2.3-0ubuntu0.20.04.2  amd64  Ceph distributed file system client library
ii  python3-ceph-argparse  15.2.3-0ubuntu0.20.04.2  amd64  Python 3 utility libraries for Ceph CLI
ii  python3-ceph-common    15.2.3-0ubuntu0.20.04.2  all    Python 3 utility libraries for Ceph
ii  python3-cephfs         15.2.3-0ubuntu0.20.04.2  amd64  Python 3 libraries for the Ceph libcephfs library
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.1 LTS
Release:        20.04
Codename:       focal
Connection to 10.5.3.60 closed.
ii  ceph                   15.2.3-0ubuntu0.20.04.2  amd64  distributed storage and file system
ii  ceph-base              15.2.3-0ubuntu0.20.04.2  amd64  common ceph daemon libraries and management tools
ii  ceph-common            15.2.3-0ubuntu0.20.04.2  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-mds               15.2.3-0ubuntu0.20.04.2  amd64  metadata server for the ceph distributed file system
ii  ceph-mgr               15.2.3-0ubuntu0.20.04.2  amd64  manager for the ceph distributed file system
ii  ceph-mgr-modules-core  15.2.3-0ubuntu0.20.04.2  all    ceph manager modules which are always enabled
ii  ceph-mon               15.2.3-0ubuntu0.20.04.2  amd64  monitor server for the ceph storage system
ii  ceph-osd               15.2.3-0ubuntu0.20.04.2  amd64  OSD server for the ceph storage system
ii  libcephfs2             15.2.3-0ubuntu0.20.04.2  amd64  Ceph distributed file system client library
ii  python3-ceph-argparse  15.2.3-0ubuntu0.20.04.2  amd64  Python 3 utility libraries for Ceph CLI
ii  python3-ceph-common    15.2.3-0ubuntu0.20.04.2  all    Python 3 utility libraries for Ceph
ii  python3-cephfs         15.2.3-0ubuntu0.20.04.2  amd64  Python 3 libraries for the Ceph libcephfs library
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.1 LTS
Release:        20.04
Codename:       focal
Connection to 10.5.0.152 closed.
ii  ceph                   15.2.3-0ubuntu0.20.04.2  amd64  distributed storage and file system
ii  ceph-base              15.2.3-0ubuntu0.20.04.2  amd64  common ceph daemon libraries and management tools
ii  ceph-common            15.2.3-0ubuntu0.20.04.2  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-mds               15.2.3-0ubuntu0.20.04.2  amd64  metadata server for the ceph distributed file system
ii  ceph-mgr               15.2.3-0ubuntu0.20.04.2  amd64  manager for the ceph distributed file system
ii  ceph-mgr-modules-core  15.2.3-0ubuntu0.20.04.2  all    ceph manager modules which are always enabled
ii  ceph-mon               15.2.3-0ubuntu0.20.04.2  amd64  monitor server for the ceph storage system
ii  ceph-osd               15.2.3-0ubuntu0.20.04.2  amd64  OSD server for the ceph storage system
ii  libcephfs2             15.2.3-0ubuntu0.20.04.2  amd64  Ceph distributed file system client library
ii  python3-ceph-argparse  15.2.3-0ubuntu0.20.04.2  amd64  Python 3 utility libraries for Ceph CLI
ii  python3-ceph-common    15.2.3-0ubuntu0.20.04.2  all    Python 3 utility libraries for Ceph
ii  python3-cephfs         15.2.3-0ubuntu0.20.04.2  amd64  Python 3 libraries for the Ceph libcephfs library
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.1 LTS
Release:        20.04
Codename:       focal
Connection to 10.5.1.36 closed.

10) Restarted all OSDs:

$ for osd in {0..2}; do juju ssh ceph-osd/$osd 'sudo systemctl restart ceph-osd.target; echo $?'; done
0
Connection to 10.5.3.60 closed.
0
Connection to 10.5.0.152 closed.
0
Connection to 10.5.1.36 closed.

11) Unset the 'noout' flag:

$ juju run-action --wait ceph-mon/leader unset-noout
unit-ceph-mon-0:
  UnitId: ceph-mon/0
  id: "2"
  results:
    Stderr: |
      noout is unset
    message: Ceph osd noout has been unset
  status: completed
  timing:
    completed: 2020-09-15 17:41:35 +0000 UTC
    enqueued: 2020-09-15 17:41:29 +0000 UTC
    started: 2020-09-15 17:41:34 +0000 UTC

12) Checked the cluster state again:

$ juju ssh ceph-mon/0 sudo ceph -s
  cluster:
    id:     8355544e-f76e-11ea-9385-3be64722e92b
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum juju-6d1caa-ceph15-9-20-0 (age 17m)
    mgr: juju-6d1caa-ceph15-9-20-0(active, since 17m)
    osd: 3 osds: 3 up (since 94s), 3 in (since 85m)

  data:
    pools:   2 pools, 129 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     129 active+clean
Connection to 10.5.3.147 closed.

13) Checked the initial value of 'osd_inject_bad_map_crc_probability' on all OSDs from ceph-mon/0:

$ ceph tell osd.* config show | grep osd_inject_bad_map_crc_probability
    "osd_inject_bad_map_crc_probability": "0.000000",
    "osd_inject_bad_map_crc_probability": "0.000000",
    "osd_inject_bad_map_crc_probability": "0.000000",
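Worth keeping in mind for the next step: 'ceph tell ... config set' changes the value only in the running daemon's memory; it is not persisted to ceph.conf or the monitors' config database, so restarting that OSD reverts the option to its default. The in-memory value can be re-checked at any time with the same command as in step (13), narrowed to one OSD:

# re-check the in-memory value on osd.0 only
sudo ceph tell osd.0 config show | grep osd_inject_bad_map_crc_probability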
Testing part:
=============

14) Set 'osd_inject_bad_map_crc_probability' to 1 on OSD 0:

$ sudo ceph tell osd.0 config set osd_inject_bad_map_crc_probability 1
{
    "success": "osd_inject_bad_map_crc_probability = '1.000000' (not observed, change may require restart) "
}

$ sudo ceph tell osd.* config show | grep osd_inject_bad_map_crc_probability
    "osd_inject_bad_map_crc_probability": "1.000000",
    "osd_inject_bad_map_crc_probability": "0.000000",
    "osd_inject_bad_map_crc_probability": "0.000000",

15) Logged on to OSD 1 and restarted the OSD service:

$ systemctl restart ceph-osd.target; echo $?
0
$ systemctl status ceph-osd.target
● ceph-osd.target - ceph target allowing to start/stop all ceph-osd@.service instances at once
     Loaded: loaded (/lib/systemd/system/ceph-osd.target; enabled; vendor preset: enabled)
     Active: active since Tue 2020-09-15 17:50:45 UTC; 8s ago
Sep 15 17:50:45 juju-6d1caa-ceph15-9-20-2 systemd[1]: Reached target ceph target allowing to start/stop all ceph-osd@.service instances at once.

16) Some further inspection:

a) All OSDs are running:

$ for osd in {0..2}; do juju ssh ceph-osd/$osd 'sudo systemctl status ceph-osd.target'; done
● ceph-osd.target - ceph target allowing to start/stop all ceph-osd@.service instances at once
     Loaded: loaded (/lib/systemd/system/ceph-osd.target; enabled; vendor preset: enabled)
     Active: active since Tue 2020-09-15 17:40:07 UTC; 13min ago
Sep 15 17:40:07 juju-6d1caa-ceph15-9-20-1 systemd[1]: Reached target ceph target allowing to start/stop all ceph-osd@.service instances at once.
Connection to 10.5.3.60 closed.
● ceph-osd.target - ceph target allowing to start/stop all ceph-osd@.service instances at once
     Loaded: loaded (/lib/systemd/system/ceph-osd.target; enabled; vendor preset: enabled)
     Active: active since Tue 2020-09-15 17:50:45 UTC; 2min 26s ago
Sep 15 17:50:45 juju-6d1caa-ceph15-9-20-2 systemd[1]: Reached target ceph target allowing to start/stop all ceph-osd@.service instances at once.
Connection to 10.5.0.152 closed.
● ceph-osd.target - ceph target allowing to start/stop all ceph-osd@.service instances at once
     Loaded: loaded (/lib/systemd/system/ceph-osd.target; enabled; vendor preset: enabled)
     Active: active since Tue 2020-09-15 17:40:10 UTC; 13min ago
Sep 15 17:40:10 juju-6d1caa-ceph15-9-20-3 systemd[1]: Reached target ceph target allowing to start/stop all ceph-osd@.service instances at once.
Connection to 10.5.1.36 closed.

b) No crashes reported:

$ juju ssh ceph-mon/0 sudo ceph crash ls
Connection to 10.5.3.147 closed.

c) Cluster state is OK:

$ juju ssh ceph-mon/0 sudo ceph -s
  cluster:
    id:     8355544e-f76e-11ea-9385-3be64722e92b
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum juju-6d1caa-ceph15-9-20-0 (age 30m)
    mgr: juju-6d1caa-ceph15-9-20-0(active, since 30m)
    osd: 3 osds: 3 up (since 4m), 3 in (since 98m)

  data:
    pools:   2 pools, 129 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     129 active+clean
Connection to 10.5.3.147 closed.

$ juju ssh ceph-mon/0 sudo ceph versions
{
    "mon": {
        "ceph version 15.2.3 (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)": 1
    },
    "mgr": {
        "ceph version 15.2.3 (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)": 1
    },
    "osd": {
        "ceph version 15.2.3 (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)": 3
    },
    "mds": {},
    "overall": {
        "ceph version 15.2.3 (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)": 5
    }
}
Connection to 10.5.3.147 closed.
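The 'ceph versions' output above also lends itself to a scripted assertion; a small sketch (assumes jq is installed on the client) that exits non-zero if the daemons do not all agree on a single version:

# "overall" maps each distinct version string to a daemon count; exactly one key means all agree
juju ssh ceph-mon/0 'sudo ceph versions' | jq -e '.overall | length == 1'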
$ juju ssh ceph-mon/0 sudo ceph report | grep version
    "version": "15.2.3",
    "crush_version": 7,
    "source_version": "0'0",
    "target_version": "0'0"
    "source_version": "0'0",
    "target_version": "0'0"
    "ceph_version": "ceph version 15.2.3 (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)",
    "ceph_version_short": "15.2.3",
    "distro_version": "20.04",
    "kernel_version": "5.4.0-42-generic",
    "ceph_version": "ceph version 15.2.3 (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)",
    "ceph_version_short": "15.2.3",
    "distro_version": "20.04",
    "kernel_version": "5.4.0-42-generic",
    "ceph_version": "ceph version 15.2.3 (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)",
    "ceph_version_short": "15.2.3",
    "distro_version": "20.04",
    "kernel_version": "5.4.0-42-generic",
    "straw_calc_version": 1,
    "minimum_required_version": "jewel",
    "feature_5": "mds uses versioned encoding",
Connection to 10.5.3.147 closed.
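Finally, since 'ceph report' emits JSON, the grep above can be replaced with a structured query; a sketch using jq that avoids assumptions about where the keys nest by searching the whole tree and printing the distinct ceph_version strings found:

# collect every "ceph_version" value anywhere in the report and de-duplicate
juju ssh ceph-mon/0 'sudo ceph report' | jq -r '[.. | objects | .ceph_version? // empty] | unique[]'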