remove-unit doesn't take OSDs down
Bug #1629679 reported by James Troup
This bug affects 9 people
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Ceph OSD Charm | Triaged | Wishlist | Unassigned | |
| OpenStack Ceph Charm (Retired) | Won't Fix | Medium | Unassigned | |
| ceph (Juju Charms Collection) | Invalid | Medium | Unassigned | |
| ceph-osd (Juju Charms Collection) | Invalid | Medium | Unassigned | |
Bug Description
I ran 'juju remove-unit ceph-osd/0' and expected this to take the OSDs on
ceph-osd/0 down and out. It didn't; even when ceph-osd/0 was
completely gone, the OSDs were still up and running.
Given Ceph is currently run almost exclusively on bare metal (and the
unit is therefore quite possibly not the last one on the machine), I think
it would make sense for the charms not to assume the machine hosting
the unit is going away, and instead explicitly take down the OSDs. And
perhaps stop them from coming back up again, to avoid epic confusion on
reboot?
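For illustration, this is roughly the manual clean-up the charm's departure hook could perform (a sketch only; the OSD id `2` and the systemd unit name `ceph-osd@2` are placeholders, and this uses the classic pre-Luminous removal sequence):

```shell
# Mark the OSD "out" so Ceph starts rebalancing its data elsewhere
ceph osd out 2

# Stop the daemon and prevent it from restarting on reboot
systemctl stop ceph-osd@2
systemctl disable ceph-osd@2

# Remove the OSD from the CRUSH map, delete its auth key,
# and finally remove it from the cluster
ceph osd crush remove osd.2
ceph auth del osd.2
ceph osd rm 2
```

Without the `systemctl disable` step in particular, a rebooted machine would bring the supposedly-removed OSD straight back up, which is the "epic confusion" scenario above.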
tags: added: canonical-is
tags: added: canonical-bootstack
description: updated
Changed in charm-ceph:
importance: Undecided → Medium
status: New → Triaged
Changed in ceph (Juju Charms Collection):
status: Triaged → Invalid
Changed in charm-ceph-osd:
importance: Undecided → Medium
status: New → Triaged
Changed in ceph-osd (Juju Charms Collection):
status: Triaged → Invalid
Changed in charm-ceph:
status: Triaged → Won't Fix
tags: added: scaleback
Changed in charm-ceph-osd:
status: New → Triaged
importance: Medium → Wishlist
FAOD, this is not unit-specific; I just ran 'juju destroy-service ceph-osd' and still have a functional Ceph cluster.