Upgrade from Nautilus to Octopus does not restart services, leaving them running Nautilus versions
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Ceph Monitor Charm | Triaged | Medium | Unassigned | |
| OpenStack Ceph-FS Charm | Triaged | Medium | Unassigned | |
### Bug Description
$ juju export-bundle >> ~/juju_
$ juju status >> ~/juju_
$ juju run -u ceph-mon/leader -- sudo ceph -s && juju run -u ceph-mon/leader -- sudo ceph versions
cluster:
id: f2b72582-
health: HEALTH_OK
services:
mon: 1 daemons, quorum juju-0d931d-ck-3 (age 66m)
mgr: juju-0d931d-
mds: ceph-fs:1 {0=juju-
osd: 3 osds: 3 up (since 65m), 3 in (since 65m)
task status:
scrub status:
data:
pools: 2 pools, 16 pgs
objects: 22 objects, 2.2 KiB
usage: 3.0 GiB used, 27 GiB / 30 GiB avail
pgs: 16 active+clean
{
    "mon": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 1
    },
    "mgr": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 1
    },
    "osd": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 3
    },
    "mds": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 3
    },
    "overall": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 8
    }
}
$ juju upgrade-charm ceph-mon --revision 58
Added charm-store charm "ceph-mon", revision 58 in channel stable, to the model
Leaving endpoints in "alpha": admin, bootstrap-source, client, cluster, mds, mon, nrpe-external-
$ juju run -u ceph-mon/leader -- sudo ceph -s && juju run -u ceph-mon/leader -- sudo ceph versions
cluster:
id: f2b72582-
health: HEALTH_OK
services:
mon: 1 daemons, quorum juju-0d931d-ck-3 (age 77m)
mgr: juju-0d931d-
mds: ceph-fs:1 {0=juju-
osd: 3 osds: 3 up (since 76m), 3 in (since 76m)
task status:
scrub status:
data:
pools: 2 pools, 16 pgs
objects: 22 objects, 2.2 KiB
usage: 3.0 GiB used, 27 GiB / 30 GiB avail
pgs: 16 active+clean
{
    "mon": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 1
    },
    "mgr": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 1
    },
    "osd": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 3
    },
    "mds": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 3
    },
    "overall": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 8
    }
}
$ juju config ceph-mon source=
$ juju run -u ceph-mon/leader -- sudo ceph -s && juju run -u ceph-mon/leader -- sudo ceph versions
cluster:
id: f2b72582-
health: HEALTH_OK
services:
mon: 1 daemons, quorum juju-0d931d-ck-3 (age 85m)
mgr: juju-0d931d-
mds: ceph-fs:1 {0=juju-
osd: 3 osds: 3 up (since 84m), 3 in (since 84m)
task status:
scrub status:
data:
pools: 2 pools, 16 pgs
objects: 22 objects, 2.2 KiB
usage: 3.0 GiB used, 27 GiB / 30 GiB avail
pgs: 16 active+clean
{
    "mon": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 1
    },
    "mgr": {
        "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 1
    },
    "osd": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 3
    },
    "mds": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 3
    },
    "overall": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 7,
        "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 1
    }
}
$ juju status >> ~/juju_
$ juju export-bundle >> ~/juju_
### Note:
The 'mon' version still reports 14.2.18, whereas the mgr version reports 15.2.13.
[1] https:/
[2] https:/
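The version skew shown above can be detected programmatically. A minimal Python sketch, with sample data taken from the `ceph versions` output in this report (the `stale_daemons` helper name is illustrative, not part of any charm):

```python
import json

# Sample `ceph versions` output from this bug report (hashes reassembled
# from the fragments in the final comment): mgr has moved to Octopus,
# the other daemons still report Nautilus.
ceph_versions = json.loads("""
{
    "mon": {"ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 1},
    "mgr": {"ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 1},
    "osd": {"ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 3},
    "mds": {"ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 3}
}
""")

def stale_daemons(versions, target_release="octopus"):
    """Return daemon types with at least one daemon not on the target release."""
    return sorted(
        daemon for daemon, counts in versions.items()
        if daemon != "overall"  # skip the aggregate section if present
        and any(target_release not in version_string for version_string in counts)
    )

print(stale_daemons(ceph_versions))  # mon, osd and mds are still on nautilus
```

Run against live output (`ceph versions` prints JSON), a non-empty result after the upgrade indicates daemons that were never restarted onto the new binaries.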
summary:
- Upgrade from Nautilus to Octopus does not restart services, leaving the
- versions running Nautilus versions
+ Upgrade from Nautilus to Octopus does not restart services, leaving them
+ running Nautilus versions
tags: added: openstack-upgrade
Changed in charm-ceph-mon:
  importance: Undecided → Medium
Changed in charm-ceph-fs:
  importance: Undecided → Medium
Changed in charm-ceph-mon:
  status: New → Triaged
Changed in charm-ceph-fs:
  status: New → Triaged
Manually restarting ceph-mon causes the new version to be running:
$ juju run -u ceph-mon/0 -- sudo systemctl restart ceph-mon.target
$ juju run -u ceph-mon/leader -- sudo ceph -s && juju run -u ceph-mon/leader -- sudo ceph versions
cluster:
id: f2b72582-1703-11ec-82f3-fa163e15a8b3
health: HEALTH_WARN
client is using insecure global_id reclaim
mon is allowing insecure global_id reclaim
2 pools have too few placement groups
services:
mon: 1 daemons, quorum juju-0d931d-ck-3 (age 114s)
mgr: juju-0d931d-ck-3 (active, since 108s)
mds: ceph-fs:1 {0=juju-0d931d-ck-1=up:active} 2 up:standby
osd: 3 osds: 3 up (since 99m), 3 in (since 99m)
task status:
mds.juju-0d931d-ck-1: idle
scrub status:
data:
pools: 3 pools, 17 pgs
objects: 22 objects, 2.7 KiB
usage: 3.0 GiB used, 27 GiB / 30 GiB avail
pgs: 17 active+clean
io:
client: 170 B/s wr, 0 op/s rd, 0 op/s wr
{
    "mon": {
        "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 1
    },
    "mgr": {
        "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 1
    },
    "osd": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 3
    },
    "mds": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 3
    },
    "overall": {
        "ceph version 14.2.18 (befbc92f3c11eedd8626487211d200c0b44786d9) nautilus (stable)": 6,
        "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 2
    }
}
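The manual workaround above was applied to a single unit; the same restart would be needed on each unit. A minimal sketch that builds the equivalent per-unit `juju run` command lines (application name and unit count are illustrative assumptions, not taken from this deployment):

```python
def rolling_restart_cmds(app="ceph-mon", units=3):
    """Build the per-unit restart commands mirroring the manual workaround
    shown above (systemd target name assumed to match the application name)."""
    return [
        f"juju run -u {app}/{n} -- sudo systemctl restart {app}.target"
        for n in range(units)
    ]

for cmd in rolling_restart_cmds():
    print(cmd)
```

In practice the restarts should be staggered, waiting for quorum (`ceph -s`) to recover between units, rather than fired in one batch.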