performance_schema.session_status doesn't exist on mysql add-unit

Bug #1942938 reported by Adam Dyess
Affects: OpenStack Percona Cluster Charm
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

Using cs:percona-cluster-290

During a juju add-unit on an already clustered mysql application, the new unit came up and reached the idle state, but all three units looked like this:

mysql/0* waiting idle 0 10.1.208.152 3306/tcp Unit waiting for cluster bootstrap
  mysql-hacluster/2 active idle 10.1.208.152 Unit is ready and clustered
mysql/1 waiting idle 1 10.1.208.106 3306/tcp Unit waiting for cluster bootstrap
  mysql-hacluster/1* active idle 10.1.208.106 Unit is ready and clustered
mysql/3 waiting idle 3 10.1.208.54 3306/tcp Unit waiting for cluster bootstrap
  mysql-hacluster/5 active idle 10.1.208.54 Unit is ready and clustered
Machine State DNS Inst id Series AZ Message
0 started 10.1.208.152 juju-1c7884-0 xenial Running
1 started 10.1.208.106 juju-1c7884-1 xenial Running
3 started 10.1.208.54 juju-1c7884-3 bionic Running

Notably, the new unit is a bionic unit rather than a xenial one.

mysql/3 was missing the bootstrap-uuid on the cluster relation. This was caused by the MySQL database on that unit not having the performance_schema.session_status table; see LP#1942936.
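A quick way to confirm whether the table is present on the affected unit is a sketch like the following (my own hypothetical verification step, not from the bug report; assumes root access to the local MySQL instance):

```shell
# Empty output means performance_schema.session_status is missing,
# which matches the symptom described in LP#1942936.
mysql -u root -p -e "SHOW TABLES FROM performance_schema LIKE 'session_status';"
```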

I was able to work around the missing table by following
https://tableplus.com/blog/2019/10/table-performance-schema-session-status-doesnt-exist.html

I'm not sure whether this is a valid workaround.
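For reference, the linked article's fix amounts to rebuilding the system and performance_schema tables and restarting the server. A minimal sketch, assuming a systemd-managed mysql service and root credentials (unverified for a percona-cluster deployment):

```shell
# Rebuild system tables, including performance_schema, even if the
# version check says no upgrade is needed.
mysql_upgrade -u root -p --force

# Restart so the server picks up the rebuilt tables.
sudo systemctl restart mysql
```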

Revision history for this message
Adam Dyess (addyess) wrote :

The goal of this procedure was to drop one xenial unit at a time and replace it with a bionic unit, because the working cloud lacked a safe way to do-release-upgrade (DRU) containers. This prevented the stable path of pausing non-leaders, upgrading the leader, etc.

As you can see above, I have two xenial-queens mysql units and one bionic-queens mysql unit.

Revision history for this message
Alex Kavanagh (ajkavanagh) wrote :

This is an interesting use case. I'm not sure whether to mark this as invalid or wishlist.

The charms certainly weren't designed to support mixed versions of percona-cluster (mysql) in a single cluster. Xenial ships percona-xtradb-cluster-server-5.5 and bionic ships percona-xtradb-cluster-server-5.7 (unless you have upgraded it to 5.7 using a PPA?).

More importantly, I don't know what the charms themselves will do with some of them on xenial and some of them on bionic. There are definitely different code-paths for xenial and bionic handling in the charm.

So if you can't do a release-upgrade from xenial to bionic, then realistically launching a new cluster on bionic and doing a dump/restore would seem (currently) to be the preferred approach.
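The dump/restore path mentioned above could be sketched roughly as follows (a hypothetical outline on my part, not a procedure from this report; real percona-cluster migrations need more care around users, grants, and Galera state):

```shell
# On the old xenial cluster: dump everything, including stored routines
# and triggers.
mysqldump -u root -p --all-databases --triggers --routines > backup.sql

# On the new bionic cluster: load the dump into the bootstrapped node.
mysql -u root -p < backup.sql
```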

Do you envisage many systems where this would be the only upgrade path (i.e. gradual replace rather than release-upgrade on unit)?
