Cassandra node is not removed from the cluster after remove-unit action
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cassandra Juju Charm | Won't Fix | Undecided | Unassigned |
Bug Description
I had two Cassandra units deployed:
ubuntu@
Model Controller Cloud/Region Version SLA Timestamp
cassandra orangebox-
App Version Status Scale Charm Store Rev OS Notes
cassandra active 2 cassandra jujucharms 54 ubuntu
Unit Workload Agent Machine Public address Ports Message
cassandra/0* active idle 0 172.27.86.130 9042/tcp,9160/tcp Live seed
cassandra/2 active idle 2 172.27.86.116 9042/tcp,9160/tcp Live seed
Machine State DNS Inst id Series AZ Message
0 started 172.27.86.130 24010209-
2 started 172.27.86.116 7350bc4d-
$ nodetool status
Datacenter: juju
================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.0.0.28 174.64 KiB 256 100.0% 046c1c82-
UN 10.0.0.111 247.91 KiB 256 100.0% b3f44618-
But after I did a "juju remove-unit cassandra/2", my "nodetool status" on the remaining node started to look like this:
ubuntu@
Datacenter: juju
================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
DN 10.0.0.28 174.64 KiB 256 100.0% 046c1c82-
UN 10.0.0.111 237.77 KiB 256 100.0% b3f44618-
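As a workaround in this state, the dead node can be dropped from the ring manually from the surviving unit. This is a sketch, not something the charm does; `<host-id>` stands for the full Host ID of the down node (shown truncated above):

```shell
# On the remaining unit: confirm the departed node is reported Down/Normal (DN)
nodetool status

# Remove the dead node from the ring by its full Host ID.
# (removenode is for nodes that are already down; a node that is
#  still up should run "nodetool decommission" instead)
nodetool removenode <host-id>

# Verify the node no longer appears in the ring
nodetool status
```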
According to https:/
Perhaps the relation-departed hook could be improved to include automatic node unregistration (or, at least, to make the operator aware of an unavailable node so they could take action).
Per https://jaas.ai/cassandra, 'nodes must be manually decommissioned before dropping a unit'. Per https://bugs.launchpad.net/juju-core/+bug/1417874, Juju cannot cleanly remove the node when the unit is destroyed. Decommissioning a node cleanly will need to be done via an action (along with most cluster operations, turning uncontrollable magic into explicit operations under user control).
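Until such an action exists, the documented manual sequence would look roughly like this. A sketch, assuming `juju run` can reach the unit before it is removed; it is not automation provided by the charm:

```shell
# 1. Stream the unit's data to the rest of the ring and leave cleanly
juju run --unit cassandra/2 "nodetool decommission"

# 2. Only after decommission completes, drop the Juju unit
juju remove-unit cassandra/2

# 3. Confirm on a surviving unit that the ring no longer lists the node
juju run --unit cassandra/0 "nodetool status"
```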