Cluster members on separate subnets not supported.

Bug #1926460 reported by Nobuto Murata
This bug affects 3 people
Affects: MySQL InnoDB Cluster Charm
Status: Fix Released
Importance: Critical
Assigned to: Unassigned

Bug Description

In an environment without a common L2 network across all of the mysql units (e.g. no shared L2 between racks with MAAS, or an AWS VPC), mysql units will reject connections from other units and cluster bootstrap never completes.

$ juju bootstrap aws/ap-northeast-1

$ juju spaces
Name Space ID Subnets
alpha 0 172.31.0.0/20
                 172.31.16.0/20
                 172.31.32.0/20
                 252.0.0.0/12
                 252.16.0.0/12
                 252.32.0.0/12

$ juju deploy --series focal -n3 cs:~openstack-charmers-next/mysql-innodb-cluster
Located charm "cs:~openstack-charmers-next/mysql-innodb-cluster-74".
Deploying charm "cs:~openstack-charmers-next/mysql-innodb-cluster-74".

"group_replication_ip_allowlist" is set as AUTOMATIC (default). However, it assumes common L2 networks so I think we need to manage the allowlist explicitly for pure L3 environment.

mysql> SHOW GLOBAL VARIABLES LIKE 'group_replication%list';
+--------------------------------+-----------+
| Variable_name                  | Value     |
+--------------------------------+-----------+
| group_replication_ip_allowlist | AUTOMATIC |
| group_replication_ip_whitelist | AUTOMATIC |
+--------------------------------+-----------+
2 rows in set (0.01 sec)

https://dev.mysql.com/doc/refman/8.0/en/group-replication-options.html#sysvar_group_replication_ip_allowlist
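
For reference, the allowlist can also be set at runtime on each member instead of only via mysqld.cnf (a sketch using this deployment's subnets; Group Replication must be stopped on the member while the value is changed):

mysql> STOP GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_ip_allowlist = '172.31.0.0/20,172.31.16.0/20,172.31.32.0/20';
mysql> START GROUP_REPLICATION;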

[mysql.err on leader]
2021-04-28T14:48:56.073823Z 0 [System] [MY-011507] [Repl] Plugin group_replication reported: 'A new primary with address 172.31.36.189:3306 was elected. The new primary will execute all previous group transactions before allowing writes.'
2021-04-28T14:48:56.074753Z 29 [System] [MY-011566] [Repl] Plugin group_replication reported: 'Setting super_read_only=OFF.'
2021-04-28T14:48:56.074931Z 29 [System] [MY-011510] [Repl] Plugin group_replication reported: 'This server is working as primary member.'
2021-04-28T14:48:56.728594Z 12 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2021-04-28T14:50:33.105783Z 0 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Connection attempt from IP address ::ffff:172.31.3.41 refused. Address is not in the IP allowlist.'
2021-04-28T14:50:33.205767Z 0 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Connection attempt from IP address ::ffff:172.31.3.41 refused. Address is not in the IP allowlist.'

[mysql.err on another unit]

2021-04-28T14:50:32.856401Z 13 [System] [MY-013587] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' is starting.'
2021-04-28T14:50:32.885916Z 16 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2021-04-28T14:50:33.103868Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error on opening a connection to 172.31.36.189:33061 on local port: 33061.'
2021-04-28T14:50:33.203606Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error on opening a connection to 172.31.36.189:33061 on local port: 33061.'

Revision history for this message
Nobuto Murata (nobuto) wrote :

::ffff: was a red herring. I just need to check the ACL again.

Changed in charm-mysql-innodb-cluster:
status: New → Incomplete
Revision history for this message
Nobuto Murata (nobuto) wrote :

The replication ACL looks as expected.

mysql> SHOW GLOBAL VARIABLES LIKE 'group_replication%list';
+--------------------------------+-----------+
| Variable_name                  | Value     |
+--------------------------------+-----------+
| group_replication_ip_allowlist | AUTOMATIC |
| group_replication_ip_whitelist | AUTOMATIC |
+--------------------------------+-----------+
2 rows in set (0.01 sec)

mysql> SELECT user, host FROM mysql.user;
+---------------------------+---------------+
| user                      | host          |
+---------------------------+---------------+
| mysql_innodb_cluster_1000 | % |
| clusteruser | 172.31.20.6 |
| clusteruser | 172.31.3.41 |
| clusteruser | 172.31.36.189 |
| clusteruser | localhost |
| debian-sys-maint | localhost |
| mysql.infoschema | localhost |
| mysql.session | localhost |
| mysql.sys | localhost |
| root | localhost |
+---------------------------+---------------+
10 rows in set (0.00 sec)

Revision history for this message
Nobuto Murata (nobuto) wrote : Re: Fails to bootstrap on AWS - Connection attempt from IP address ::ffff:<IPv4> refused

Not sure why this is happening, but it always happens in AWS environments for me. The unit has multiple subnets attached, including the FAN network (enabled by default with the Juju AWS provider), FWIW.

summary: - Fails to bootstrap with IPv6 enabled env such as AWS - Connection
- attempt from IP address ::ffff:<IPv4> refused
+ Fails to bootstrap on AWS - Connection attempt from IP address
+ ::ffff:<IPv4> refused
Changed in charm-mysql-innodb-cluster:
status: Incomplete → New
Revision history for this message
Nobuto Murata (nobuto) wrote : Re: Fails to bootstrap with pure L3 - Connection attempt from IP address ::ffff:<IPv4> refused

Confirmed that group_replication_ip_allowlist is the root cause by manually editing the template for testing.

$ git diff
diff --git a/src/templates/mysqld.cnf b/src/templates/mysqld.cnf
index 8940bb4..e829b29 100644
--- a/src/templates/mysqld.cnf
+++ b/src/templates/mysqld.cnf
@@ -102,6 +102,9 @@ enforce_gtid_consistency = ON
 gtid_mode = ON
 server_id = {{ options.server_id }}

+plugin_load_add = group_replication.so
+group_replication_ip_allowlist = 172.31.0.0/20,172.31.16.0/20,172.31.32.0/20
+
 skip_name_resolve = ON

 #

summary: - Fails to bootstrap on AWS - Connection attempt from IP address
+ Fails to bootstrap with pure L3 - Connection attempt from IP address
::ffff:<IPv4> refused
description: updated
Revision history for this message
Dan Ardelean (danardelean) wrote :

Hi,

I have a similar environment with the Juju manual provider plus fan networking and hit the same bug. The group_replication_ip_allowlist fix on each unit worked. My fan subnets are:

group_replication_ip_allowlist = 252.152.11.0/20,252.152.12.0/20,252.152.13.0/20

Revision history for this message
Aymen Frikha (aym-frikha) wrote :

subscribed field-critical

Revision history for this message
Pedro Guimarães (pguimaraes) wrote :

MySQL units should share their own network CIDR, or perhaps a /32 for each IP, for the target space on the peer relation data.
Each unit could then construct the group_replication_ip_allowlist parameter to include all of its peers.
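
A minimal sketch of that idea in Python (hypothetical helper, not the charm's actual API): each unit publishes its cluster address on the peer relation, and every unit builds the allowlist from the collected addresses as /32 host entries.

def build_allowlist(peer_addresses, local_address):
    """Build a group_replication_ip_allowlist value from unit IPs.

    /32 host entries avoid having to resolve each peer's network CIDR,
    which is not possible for subnets the local unit is not attached to.
    """
    entries = sorted({"{}/32".format(ip) for ip in peer_addresses + [local_address]})
    return ",".join(entries)

# Addresses from this deployment:
print(build_allowlist(["172.31.29.80", "172.31.0.158"], "172.31.40.28"))
# -> 172.31.0.158/32,172.31.29.80/32,172.31.40.28/32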

Revision history for this message
James Page (james-page) wrote :

The subnets the fan uses are not automatically included in the IP allowlist that MySQL 8 defaults to - which is basically all RFC 1918 ranges:

https://dev.mysql.com/doc/refman/8.0/en/group-replication-ip-address-permissions.html

So use of fan networking would definitely not work with this charm right now.

Revision history for this message
Nobuto Murata (nobuto) wrote :

There is another bug talking about non-RFC 1918 ranges: https://bugs.launchpad.net/charm-mysql-innodb-cluster/+bug/1886934

Revision history for this message
James Page (james-page) wrote :

The AUTOMATIC behaviour only permits addresses in subnets attached to the local unit - rather than the full RFC 1918 range, as I had thought was documented.

This is very much a behaviour bug - the charm should work in an L3 routed topology, as it does not require the use of VIPs or suchlike.

Changed in charm-mysql-innodb-cluster:
status: New → Triaged
importance: Undecided → Critical
assignee: nobody → James Page (james-page)
status: Triaged → In Progress
Revision history for this message
James Page (james-page) wrote :

I think the *best* fix would be to use the cluster addresses of all units in the application to populate group_replication_ip_allowlist, as Pedro detailed in #8 - this will require a feature in the interface codebase and a minor template change.
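
A sketch of what the template side of that might look like, mirroring the existing mysqld.cnf template style (options.group_replication_ip_allowlist is a hypothetical context variable the interface code would need to populate):

group_replication_ip_allowlist = {{ options.group_replication_ip_allowlist }}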

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-mysql-innodb-cluster (master)
Revision history for this message
James Page (james-page) wrote : Re: Fails to bootstrap with pure L3 - Connection attempt from IP address ::ffff:<IPv4> refused
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on charm-mysql-innodb-cluster (master)

Change abandoned by "James Page <email address hidden>" on branch: master
Review: https://review.opendev.org/c/openstack/charm-mysql-innodb-cluster/+/795535

Revision history for this message
James Page (james-page) wrote : Re: Fails to bootstrap with pure L3 - Connection attempt from IP address ::ffff:<IPv4> refused
Changed in charm-mysql-innodb-cluster:
status: In Progress → Fix Committed
Revision history for this message
Nobuto Murata (nobuto) wrote :
Revision history for this message
Nobuto Murata (nobuto) wrote :

I'm testing -next (revision 78) right now with the original AWS environment, but I see some errors. I will attach the full debug-log output.

$ juju spaces
Name Space ID Subnets
alpha 0 172.31.0.0/20
                 172.31.16.0/20
                 172.31.32.0/20
                 252.0.0.0/12
                 252.16.0.0/12
                 252.32.0.0/12

$ juju deploy --series focal -n3 cs:~openstack-charmers-next/mysql-innodb-cluster
...
Located charm "mysql-innodb-cluster" in charm-store, revision 78

machine-2: 22:18:22 INFO unit.mysql-innodb-cluster/2.juju-log cluster:1: Invoking reactive handler: reactive/mysql_innodb_cluster_handlers.py:132:check_quorum
machine-0: 22:18:22 ERROR unit.mysql-innodb-cluster/0.juju-log cluster:1: Hook error:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/.venv/lib/python3.8/site-packages/netaddr/ip/__init__.py", line 803, in parse_ip_network
    prefixlen = int(val2)
ValueError: invalid literal for int() with base 10: 'None'

unit-mysql-innodb-cluster-0: 22:18:22 WARNING unit.mysql-innodb-cluster/0.cluster-relation-changed File "lib/charm/openstack/mysql_innodb_cluster.py", line 581, in get_cluster_subnets
unit-mysql-innodb-cluster-0: 22:18:22 WARNING unit.mysql-innodb-cluster/0.cluster-relation-changed return list(set([ch_net_ip.resolve_network_cidr(ip) for ip in ips]))
unit-mysql-innodb-cluster-0: 22:18:22 WARNING unit.mysql-innodb-cluster/0.cluster-relation-changed File "lib/charm/openstack/mysql_innodb_cluster.py", line 581, in <listcomp>
unit-mysql-innodb-cluster-0: 22:18:22 WARNING unit.mysql-innodb-cluster/0.cluster-relation-changed return list(set([ch_net_ip.resolve_network_cidr(ip) for ip in ips]))
unit-mysql-innodb-cluster-0: 22:18:22 WARNING unit.mysql-innodb-cluster/0.cluster-relation-changed File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/.venv/lib/python3.8/site-packages/charmhelpers/contrib/network/ip.py", line 233, in resolve_network_cidr
unit-mysql-innodb-cluster-0: 22:18:22 WARNING unit.mysql-innodb-cluster/0.cluster-relation-changed return str(netaddr.IPNetwork("%s/%s" % (ip_address, netmask)).cidr)
unit-mysql-innodb-cluster-0: 22:18:22 WARNING unit.mysql-innodb-cluster/0.cluster-relation-changed File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/.venv/lib/python3.8/site-packages/netaddr/ip/__init__.py", line 938, in __init__
unit-mysql-innodb-cluster-0: 22:18:22 WARNING unit.mysql-innodb-cluster/0.cluster-relation-changed raise AddrFormatError('invalid IPNetwork %s' % addr)
unit-mysql-innodb-cluster-0: 22:18:22 WARNING unit.mysql-innodb-cluster/0.cluster-relation-changed netaddr.core.AddrFormatError: invalid IPNetwork 172.31.29.80/None

Revision history for this message
Nobuto Murata (nobuto) wrote :

$ for i in {0..2}; do juju ssh mysql-innodb-cluster/$i 'hostname; ip -br a' 2>/dev/null; done
ip-172-31-40-28
lo UNKNOWN 127.0.0.1/8 ::1/128
ens5 UP 172.31.40.28/20 fe80::482:9fff:fe9f:a5bd/64
fan-252 UP 252.40.28.1/8 fe80::1046:2fff:fe69:ca32/64
ftun0 UNKNOWN fe80::1046:2fff:fe69:ca32/64
ip-172-31-29-80
lo UNKNOWN 127.0.0.1/8 ::1/128
ens5 UP 172.31.29.80/20 fe80::c5f:30ff:febd:1973/64
fan-252 UP 252.29.80.1/8 fe80::64d1:98ff:fe75:3382/64
ftun0 UNKNOWN fe80::64d1:98ff:fe75:3382/64
ip-172-31-0-158
lo UNKNOWN 127.0.0.1/8 ::1/128
ens5 UP 172.31.0.158/20 fe80::852:cdff:fe94:f097/64
fan-252 UP 252.0.158.1/8 fe80::b8fb:acff:fe73:d5a9/64
ftun0 UNKNOWN fe80::b8fb:acff:fe73:d5a9/64

Revision history for this message
Nobuto Murata (nobuto) wrote :

The new patch tries to extract the CIDR in get_cluster_subnets; however (at least on AWS), cluster_peer_addresses and self.cluster_address contain no netmask, just bare IP addresses, as follows.

self.cluster_peer_addresses + self.cluster_address = ['172.31.29.80', '172.31.0.158', '172.31.40.28']

So prefixlen ends up as None, which breaks the subsequent steps.
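
The failure is easy to reproduce outside the charm (a sketch; requires the netaddr package):

import netaddr

# The peer IP matches no local interface, so the netmask resolved by
# charm-helpers is None and the string handed to netaddr is malformed:
try:
    netaddr.IPNetwork("172.31.29.80/None")
except netaddr.AddrFormatError as exc:
    print(exc)  # invalid IPNetwork 172.31.29.80/None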

Revision history for this message
David Ames (thedac) wrote (last edit ):

TRIAGE:

The bug is a lack of support for cluster nodes on separate subnets.

Each of the nodes in the example is on a separate subnet:

$ ipcalc 172.31.40.28/20
Address: 172.31.40.28 10101100.00011111.0010 1000.00011100
Netmask: 255.255.240.0 = 20 11111111.11111111.1111 0000.00000000
Wildcard: 0.0.15.255 00000000.00000000.0000 1111.11111111
=>
Network: 172.31.32.0/20 10101100.00011111.0010 0000.00000000
HostMin: 172.31.32.1 10101100.00011111.0010 0000.00000001
HostMax: 172.31.47.254 10101100.00011111.0010 1111.11111110
Broadcast: 172.31.47.255 10101100.00011111.0010 1111.11111111
Hosts/Net: 4094 Class B, Private Internet

$ ipcalc 172.31.29.80/20
Address: 172.31.29.80 10101100.00011111.0001 1101.01010000
Netmask: 255.255.240.0 = 20 11111111.11111111.1111 0000.00000000
Wildcard: 0.0.15.255 00000000.00000000.0000 1111.11111111
=>
Network: 172.31.16.0/20 10101100.00011111.0001 0000.00000000
HostMin: 172.31.16.1 10101100.00011111.0001 0000.00000001
HostMax: 172.31.31.254 10101100.00011111.0001 1111.11111110
Broadcast: 172.31.31.255 10101100.00011111.0001 1111.11111111
Hosts/Net: 4094 Class B, Private Internet

$ ipcalc 172.31.0.158/20
Address: 172.31.0.158 10101100.00011111.0000 0000.10011110
Netmask: 255.255.240.0 = 20 11111111.11111111.1111 0000.00000000
Wildcard: 0.0.15.255 00000000.00000000.0000 1111.11111111
=>
Network: 172.31.0.0/20 10101100.00011111.0000 0000.00000000
HostMin: 172.31.0.1 10101100.00011111.0000 0000.00000001
HostMax: 172.31.15.254 10101100.00011111.0000 1111.11111110
Broadcast: 172.31.15.255 10101100.00011111.0000 1111.11111111
Hosts/Net: 4094 Class B, Private Internet

The get_cluster_subnets [0] method uses resolve_network_cidr [1], which uses _get_for_address [2], which looks for a *local* interface address such that the remote IP is a member of the local IP's subnet. For peers on other subnets it therefore returns None.

We will need another mechanism to discover the CIDR network for each cluster member. We likely need to set ingress-address or egress-subnets or some other relation key with this information.

[0] https://github.com/openstack/charm-mysql-innodb-cluster/blob/master/src/lib/charm/openstack/mysql_innodb_cluster.py#L581
[1] https://github.com/juju/charm-helpers/blob/master/charmhelpers/contrib/network/ip.py#L227
[2] https://github.com/juju/charm-helpers/blob/master/charmhelpers/contrib/network/ip.py#L180-L202

Changed in charm-mysql-innodb-cluster:
status: Fix Committed → Triaged
assignee: James Page (james-page) → nobody
summary: - Fails to bootstrap with pure L3 - Connection attempt from IP address
- ::ffff:<IPv4> refused
+ Cluster members on separate subnets not supported.
Revision history for this message
James Page (james-page) wrote :

Rather than complicating the relation semantics further with network CIDR discovery, can't we just use a list of /32s in the allowlist configuration to limit the cluster members to the actual IPs of the units in the deployment? This avoids the need to resolve the network CIDRs and would seem to resolve this problem by making the allowlist super specific.

I appreciate that if a new unit is then added, this will always require use of the action.

Revision history for this message
Nobuto Murata (nobuto) wrote :

Either using a /32 for every unit, or falling back to a /32 when no common subnet is found (i.e. if None, then /32), would work for me.
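
A sketch of that fallback in Python (illustrative only, using netaddr; not the charm's code):

import netaddr

def allowlist_entry(address, local_cidrs):
    """Return the local subnet's CIDR if the address is on one of the
    unit's own networks, otherwise fall back to a /32 host entry."""
    for cidr in local_cidrs:
        if netaddr.IPAddress(address) in netaddr.IPNetwork(cidr):
            return str(netaddr.IPNetwork(cidr).cidr)
    return "{}/32".format(address)

# Local unit on 172.31.32.0/20; one address is local, two are remote:
for ip in ["172.31.40.28", "172.31.29.80", "172.31.0.158"]:
    print(allowlist_entry(ip, ["172.31.32.0/20"]))
# 172.31.32.0/20
# 172.31.29.80/32
# 172.31.0.158/32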

Revision history for this message
Nobuto Murata (nobuto) wrote :

When falling back to a /32, we don't even have to write it as a /32 in group_replication_ip_allowlist explicitly, since the option accepts anything: a bare IP address, a CIDR, etc.
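
For example, a value mixing the local network's CIDR with bare peer addresses would be accepted as-is (a sketch using this deployment's addresses):

group_replication_ip_allowlist = 172.31.32.0/20,172.31.29.80,172.31.0.158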

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-mysql-innodb-cluster (master)
Changed in charm-mysql-innodb-cluster:
status: Triaged → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-mysql-innodb-cluster (master)

Reviewed: https://review.opendev.org/c/openstack/charm-mysql-innodb-cluster/+/795989
Committed: https://opendev.org/openstack/charm-mysql-innodb-cluster/commit/737179482c2bfb209a7d322793b4ac925181b639
Submitter: "Zuul (22348)"
Branch: master

commit 737179482c2bfb209a7d322793b4ac925181b639
Author: James Page <email address hidden>
Date: Fri Jun 11 12:14:59 2021 +0100

    clustering: tweak allowlist generation

    Instead of trying to resolve the network CIDR from the local unit
    for all units in the cluster just use the actual IP addresses of
    the cluster unit when generating the IP allowlist for cluster
    connectivity.

    Also add the network CIDR for the local unit's cluster address,
    which is the only one that is guaranteed to be resolvable.

    For deployments where all units are on the same Layer 2 network,
    addition of units will complete automatically - in Layer 3
    routed network topologies new units will be blocked until the
    update-unit-acls action is executed, which is a service
    disruption operation.

    Closes-Bug: 1926460
    Change-Id: I16e43c37e1af02fb0e23a9c460d70bf5e1dd0fb1

Changed in charm-mysql-innodb-cluster:
status: In Progress → Fix Committed
Revision history for this message
Nobuto Murata (nobuto) wrote :

This is a follow-up bug for a scaleback scenario.
https://bugs.launchpad.net/charm-mysql-innodb-cluster/+bug/1931884

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-mysql-innodb-cluster (stable/21.04)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to charm-mysql-innodb-cluster (master)
Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote :

Writing some release notes.

Changed in charm-guide:
status: New → In Progress
assignee: nobody → Aurelien Lourot (aurelien-lourot)
Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote :

Pausing work on the release notes; see my comment in https://review.opendev.org/c/openstack/charm-mysql-innodb-cluster/+/799954/

Changed in charm-guide:
status: In Progress → Triaged
assignee: Aurelien Lourot (aurelien-lourot) → nobody
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to charm-mysql-innodb-cluster (master)

Reviewed: https://review.opendev.org/c/openstack/charm-mysql-innodb-cluster/+/807710
Committed: https://opendev.org/openstack/charm-mysql-innodb-cluster/commit/311ae13fd55c23be644189ef0fdfb333f201ff3d
Submitter: "Zuul (22348)"
Branch: master

commit 311ae13fd55c23be644189ef0fdfb333f201ff3d
Author: Aurelien Lourot <email address hidden>
Date: Tue Sep 7 14:18:35 2021 +0200

    Documentation improvements

    Change-Id: Iee0e307203c4a12e78097321cd5e446e77ab0a76
    Related-Bug: #1926460

Changed in charm-mysql-innodb-cluster:
milestone: none → 21.10
Changed in charm-mysql-innodb-cluster:
status: Fix Committed → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on charm-mysql-innodb-cluster (stable/21.04)

Change abandoned by "James Page <email address hidden>" on branch: stable/21.04
Review: https://review.opendev.org/c/openstack/charm-mysql-innodb-cluster/+/799955
Reason: Included in 21.10 release

no longer affects: charm-guide