$ juju status mysql-innodb-cluster
Model      Controller       Cloud/Region      Version  SLA          Timestamp
openstack  maas-controller  maas-one/default  2.9.18   unsupported  15:36:35Z

App                   Version  Status  Scale  Charm                 Store       Channel  Rev  OS      Message
mysql-innodb-cluster  8.0.27   active      4  mysql-innodb-cluster  charmstore  stable    15  ubuntu  Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.

Unit                     Workload  Agent  Machine  Public address  Ports  Message
mysql-innodb-cluster/0   active    idle   0/lxd/2  10.0.0.181             Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/1*  active    idle   1/lxd/1  10.0.0.179             Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2   active    idle   2/lxd/1  10.0.0.180             Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/3   active    idle   3/lxd/0  10.0.0.183             Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.0.172  node2                focal   default  Deployed
0/lxd/2  started  10.0.0.181  juju-64dabb-0-lxd-2  focal   default  Container started
1        started  10.0.0.173  node4                focal   default  Deployed
1/lxd/1  started  10.0.0.179  juju-64dabb-1-lxd-1  focal   default  Container started
2        started  10.0.0.174  node3                focal   default  Deployed
2/lxd/1  started  10.0.0.180  juju-64dabb-2-lxd-1  focal   default  Container started
3        started  10.0.0.182  node1                focal   default  Deployed
3/lxd/0  started  10.0.0.183  juju-64dabb-3-lxd-0  focal   default  Container started

$ juju run-action --wait mysql-innodb-cluster/leader cluster-status
unit-mysql-innodb-cluster-1:
  UnitId: mysql-innodb-cluster/1
  id: "66"
  results:
    cluster-status: '{"clusterName": "jujuCluster", "defaultReplicaSet": {"name": "default",
      "primary": "10.0.0.179:3306", "ssl": "REQUIRED", "status": "OK", "statusText":
      "Cluster is ONLINE and can tolerate up to ONE failure.", "topology": {"10.0.0.179:3306":
      {"address": "10.0.0.179:3306", "mode": "R/W", "readReplicas": {}, "replicationLag":
      null, "role": "HA", "status": "ONLINE", "version": "8.0.27"}, "10.0.0.180:3306":
      {"address": "10.0.0.180:3306", "mode": "R/O", "readReplicas": {}, "replicationLag":
      null, "role": "HA", "status": "ONLINE", "version": "8.0.27"}, "10.0.0.181:3306":
      {"address": "10.0.0.181:3306", "mode": "R/O", "readReplicas": {}, "replicationLag":
      null, "role": "HA", "status": "ONLINE", "version": "8.0.27"}, "10.0.0.183:3306":
      {"address": "10.0.0.183:3306", "mode": "R/O", "readReplicas": {}, "replicationLag":
      "00:00:00.817094", "role": "HA", "status": "ONLINE", "version": "8.0.27"}},
      "topologyMode": "Single-Primary"}, "groupInformationSourceMember": "10.0.0.179:3306"}'
  status: completed
  timing:
    completed: 2021-12-08 21:20:03 +0000 UTC
    enqueued: 2021-12-08 21:19:53 +0000 UTC
    started: 2021-12-08 21:19:57 +0000 UTC
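The cluster-status action embeds the mysqlsh cluster.status() report as one long JSON string, which is awkward to read in wrapped YAML. A minimal sketch for pretty-printing just the topology, assuming jq is available on the client and that the JSON-formatted action output keeps the same unit-keyed structure as the YAML above:

$ juju run-action --wait mysql-innodb-cluster/leader cluster-status --format json \
      | jq -r '."unit-mysql-innodb-cluster-1".results."cluster-status"' \
      | jq '.defaultReplicaSet.topology'

The first jq call extracts the embedded JSON string raw (-r); the second parses it and narrows the view to the per-member map carrying the mode, status, and replicationLag fields shown above.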
$ juju run-action --wait mysql-innodb-cluster/leader remove-instance address=10.0.0.183
unit-mysql-innodb-cluster-1:
  UnitId: mysql-innodb-cluster/1
  id: "68"
  message: Remove instance failed
  results:
    output: "Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory\nThe
      instance will be removed from the InnoDB cluster. Depending on the instance\nbeing
      the Seed or not, the Metadata session might become invalid. If so, please\nstart
      a new session to the Metadata Storage R/W instance.\n\nInstance '10.0.0.183:3306'
      is attempting to leave the cluster...\n\e[31mERROR: \e[0mInstance '10.0.0.183:3306'
      failed to leave the cluster: Variable 'group_replication_force_members' is
      a non persistent variable\nTraceback (most recent call last):\n  File \"<string>\",
      line 3, in <module>\nmysqlsh.DBError: MySQL Error (1238): Cluster.remove_instance:
      Variable 'group_replication_force_members' is a non persistent variable\n"
    traceback: |
      Traceback (most recent call last):
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm/actions/remove-instance", line 299, in remove_instance
          output = instance.remove_instance(address, force=force)
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 986, in remove_instance
          raise e
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 962, in remove_instance
          output = self.run_mysqlsh_script(_script).decode("UTF-8")
        File "/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 1659, in run_mysqlsh_script
          return subprocess.check_output(cmd, stderr=subprocess.PIPE)
        File "/usr/lib/python3.8/subprocess.py", line 415, in check_output
          return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
        File "/usr/lib/python3.8/subprocess.py", line 516, in run
          raise CalledProcessError(retcode, process.args,
      subprocess.CalledProcessError: Command '['/snap/bin/mysqlsh', '--no-wizard', '--python', '-f', '/root/snap/mysql-shell/common/tmpxd58h7o9.py']' returned non-zero exit status 1.
  status: failed
  timing:
    completed: 2021-12-09 15:37:44 +0000 UTC
    enqueued: 2021-12-09 15:37:23 +0000 UTC
    started: 2021-12-09 15:37:23 +0000 UTC
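Note that the first line of the action output ("Cannot set LC_ALL ...") is a locale warning, not the cause of the failure; the action actually failed on MySQL error 1238, because group_replication_force_members cannot be set persistently. If the warning is a nuisance, one way to silence it, assuming the leader's container is simply missing the en_US.UTF-8 locale definition, is to generate that locale on the unit where mysqlsh runs:

$ juju ssh mysql-innodb-cluster/1 -- sudo locale-gen en_US.UTF-8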
$ juju status mysql-innodb-cluster
Model      Controller       Cloud/Region      Version  SLA          Timestamp
openstack  maas-controller  maas-one/default  2.9.18   unsupported  15:39:47Z

App                   Version  Status   Scale  Charm                 Store       Channel  Rev  OS      Message
mysql-innodb-cluster  8.0.27   blocked      4  mysql-innodb-cluster  charmstore  stable    15  ubuntu  Cluster is inaccessible from this instance. Please check logs for details.

Unit                     Workload  Agent  Machine  Public address  Ports  Message
mysql-innodb-cluster/0   active    idle   0/lxd/2  10.0.0.181             Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure. 1 member is not active.
mysql-innodb-cluster/1*  active    idle   1/lxd/1  10.0.0.179             Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure. 1 member is not active.
mysql-innodb-cluster/2   active    idle   2/lxd/1  10.0.0.180             Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure. 1 member is not active.
mysql-innodb-cluster/3   blocked   idle   3/lxd/0  10.0.0.183             Cluster is inaccessible from this instance. Please check logs for details.

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.0.172  node2                focal   default  Deployed
0/lxd/2  started  10.0.0.181  juju-64dabb-0-lxd-2  focal   default  Container started
1        started  10.0.0.173  node4                focal   default  Deployed
1/lxd/1  started  10.0.0.179  juju-64dabb-1-lxd-1  focal   default  Container started
2        started  10.0.0.174  node3                focal   default  Deployed
2/lxd/1  started  10.0.0.180  juju-64dabb-2-lxd-1  focal   default  Container started
3        started  10.0.0.182  node1                focal   default  Deployed
3/lxd/0  started  10.0.0.183  juju-64dabb-3-lxd-0  focal   default  Container started

$ juju run-action --wait mysql-innodb-cluster/leader cluster-status
unit-mysql-innodb-cluster-1:
  UnitId: mysql-innodb-cluster/1
  id: "70"
  results:
    cluster-status: '{"clusterName": "jujuCluster", "defaultReplicaSet": {"name": "default",
      "primary": "10.0.0.179:3306", "ssl": "REQUIRED", "status": "OK_PARTIAL", "statusText":
      "Cluster is ONLINE and can tolerate up to ONE failure. 1 member is not active.",
      "topology": {"10.0.0.179:3306": {"address": "10.0.0.179:3306", "mode": "R/W",
      "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE",
      "version": "8.0.27"}, "10.0.0.180:3306": {"address": "10.0.0.180:3306", "mode":
      "R/O", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE",
      "version": "8.0.27"}, "10.0.0.181:3306": {"address": "10.0.0.181:3306", "mode":
      "R/O", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE",
      "version": "8.0.27"}, "10.0.0.183:3306": {"address": "10.0.0.183:3306", "instanceErrors":
      ["NOTE: group_replication is stopped."], "memberState": "OFFLINE", "mode": "R/O",
      "readReplicas": {}, "role": "HA", "status": "(MISSING)", "version": "8.0.27"}},
      "topologyMode": "Single-Primary"}, "groupInformationSourceMember": "10.0.0.179:3306"}'
  status: completed
  timing:
    completed: 2021-12-09 15:47:12 +0000 UTC
    enqueued: 2021-12-09 15:47:02 +0000 UTC
    started: 2021-12-09 15:47:05 +0000 UTC
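At this point the cluster reports OK_PARTIAL: 10.0.0.183 has stopped group replication and shows as (MISSING), but its cluster metadata was never removed, so the other members still count it as an inactive peer. The charm's remove-instance handler does accept a force flag (visible in the traceback above, where it is forwarded to mysqlsh's Cluster.remove_instance()); a plausible next step, assuming the action exposes that flag as a force parameter, is to retry the removal forcibly so the stale metadata is cleaned up:

$ juju run-action --wait mysql-innodb-cluster/leader remove-instance \
      address=10.0.0.183 force=true

If the goal were instead to bring the member back into the group, a rejoin-instance action taking the same address parameter would be the natural counterpart, assuming the deployed charm revision provides it.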