Inserts to MyISAM Tables from Async Master Do Not Propagate to All Nodes

Bug #1171640 reported by Jervin R
Affects: Percona XtraDB Cluster (moved to https://jira.percona.com/projects/PXC)
Status tracked in 5.6
  5.5: Status Confirmed, Importance Wishlist, Unassigned
  5.6: Status Confirmed, Importance Wishlist, Unassigned

Bug Description

While MyISAM support is experimental/not fully supported, I thought I'd still report some bugs - maybe they are useful for documenting the limitations of this support.

I have an async master running 5.5.30-30.1 and a 2-node cluster running 5.5.30-23.7.4 (it does not seem to work with 5.5.29 either) with the following configuration:

# Async master:
log-bin=mysql-bin
log_slave_updates
server-id=99
binlog_format = ROW
default-storage-engine=INNODB

#PXC node 1, async slave:
binlog_format = ROW
sync_binlog = 0
binlog_cache_size = 1M
server_id = 100
log_slave_updates = 1
innodb_file_format = barracuda
innodb_strict_mode = 1
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql
innodb_buffer_pool_size = 512M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 512M
innodb_log_buffer_size = 2M
innodb_flush_log_at_trx_commit = 2
innodb_lock_wait_timeout = 50
innodb_fast_shutdown = 1
innodb_flush_method = O_DIRECT
innodb_autoinc_lock_mode = 2
innodb_locks_unsafe_for_binlog = 1
innodb_doublewrite = 1
wsrep_node_name = uxdbc01
wsrep_cluster_name = uxdbc
wsrep_debug = 1
wsrep_node_address = 192.168.56.53
wsrep_node_incoming_address = 192.168.56.53
wsrep_provider = /usr/lib/libgalera_smm.so
wsrep_provider_options = "ist.recv_addr=192.168.56.53:4568;gmcast.listen_addr=tcp://192.168.56.53:4567;"
wsrep_slave_threads = 2
wsrep_sst_method = xtrabackup
wsrep_sst_receive_address = 192.168.56.53
wsrep_cluster_address = gcomm://
wsrep_replicate_myisam = 1
wsrep_auto_increment_control = 0
wsrep_certify_nonPK = 1

# PXC node 2
binlog_format = ROW
sync_binlog = 0
binlog_cache_size = 1M
server_id = 100
log_slave_updates = 1

innodb_file_format = barracuda
innodb_strict_mode = 1
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql
innodb_buffer_pool_size = 512M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 256M
innodb_log_buffer_size = 2M
innodb_flush_log_at_trx_commit = 2
innodb_lock_wait_timeout = 50
innodb_fast_shutdown = 1
innodb_flush_method = O_DIRECT
innodb_autoinc_lock_mode = 2
innodb_locks_unsafe_for_binlog = 1
innodb_doublewrite = 1

wsrep_node_name = uxdbc02
wsrep_cluster_name = uxdbc
wsrep_debug = 1
wsrep_node_address = 192.168.56.54
wsrep_node_incoming_address = 192.168.56.54
wsrep_provider = /usr/lib/libgalera_smm.so
wsrep_provider_options = "ist.recv_addr=192.168.56.54:4568;gmcast.listen_addr=tcp://192.168.56.54:4567;"
wsrep_slave_threads = 2
wsrep_sst_method = xtrabackup
wsrep_sst_receive_address = 192.168.56.54
wsrep_cluster_address = gcomm://192.168.56.53
wsrep_replicate_myisam = 1
wsrep_auto_increment_control = 0
wsrep_certify_nonPK = 1

To repeat, consider these 2 MyISAM tables and one row of data (from OpenX). Create an async master and build the PXC nodes from this data.

CREATE TABLE `data_bkt_m` (
  `interval_start` datetime NOT NULL,
  `creative_id` mediumint(9) NOT NULL,
  `zone_id` mediumint(9) NOT NULL,
  `count` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`interval_start`,`creative_id`,`zone_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE `audit` (
  `auditid` mediumint(9) NOT NULL AUTO_INCREMENT,
  `actionid` mediumint(9) NOT NULL,
  `context` varchar(255) NOT NULL DEFAULT '',
  `contextid` mediumint(9) DEFAULT NULL,
  `parentid` mediumint(9) DEFAULT NULL,
  `details` text NOT NULL,
  `userid` mediumint(9) NOT NULL DEFAULT '0',
  `username` varchar(64) DEFAULT NULL,
  `usertype` tinyint(4) NOT NULL DEFAULT '0',
  `updated` datetime DEFAULT NULL,
  `account_id` mediumint(9) NOT NULL,
  `advertiser_account_id` mediumint(9) DEFAULT NULL,
  `website_account_id` mediumint(9) DEFAULT NULL,
  PRIMARY KEY (`auditid`),
  KEY `audit_parentid_contextid` (`parentid`,`contextid`),
  KEY `audit_updated` (`updated`),
  KEY `audit_usertype` (`usertype`),
  KEY `audit_username` (`username`),
  KEY `audit_context_actionid` (`context`,`actionid`),
  KEY `audit_account_id` (`account_id`),
  KEY `audit_advertiser_account_id` (`advertiser_account_id`),
  KEY `audit_website_account_id` (`website_account_id`)
) ENGINE=MyISAM;

INSERT INTO data_bkt_m(interval_start, creative_id, zone_id, count) VALUES('2013-04-22 00:00:00', 22, 1, 30);

Then execute these 2 DML statements on the async master:

INSERT INTO audit (auditid, actionid,context, contextid, parentid, details, userid, username, usertype, updated, account_id, advertiser_account_id, website_account_id)
VALUES (2053, 2, 'clients', 1, NULL, 'a:2:{s:14:"reportlastdate";a:2:{s:3:"was";s:10:"2013-04-21";s:2:"is";s:10:"2013-04-22";}s:8:"key_desc";N;}', 0, NULL, 0, '2013-04-22 04:05:11', 2, 3, NULL) ;

UPDATE data_bkt_m SET count=31 WHERE interval_start='2013-04-22 00:00:00' AND creative_id=22 AND zone_id=1 AND count=30;

Neither of these 2 statements is replicated properly; it looks like the async slave PXC node cannot handle these events correctly when they arrive via the async replication stream.

Jervin R (revin)
description: updated
Revision history for this message
Mrten (bugzilla-ii) wrote :

This could neatly explain why my edits to the mysql database on the master weren't replicated.

I have, for migration to PXC, a cluster slaved via standard MySQL replication. Since I'm working with a DB on the webserver, all grants were for either localhost or 127.0.0.1, so I had to add grants for the new cluster IP address.

I dumped the mysql privilege tables, edited a column (localhost > local IP), and inserted the rows again, but this wasn't replicated on the slaved cluster. Only then did I discover that I had to turn on wsrep_replicate_myisam ;), but things still weren't replicated to all cluster members, so I gave up and went for a full SST by removing grastate.dat.

Revision history for this message
Przemek (pmalkowski) wrote :

I can reproduce the same: for MyISAM tables, ROW-format binlog DML events do not make it past the first (async slave) node, so all remaining nodes become inconsistent.

Async master, running 5.5.37:

mysql> show variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | ROW   |
+---------------+-------+
1 row in set (0.00 sec)

PXC slave node:

server55n1 mysql> select @@version,@@version_comment;
+----------------+-----------------------------------------------------------------------------------------------------+
| @@version      | @@version_comment                                                                                   |
+----------------+-----------------------------------------------------------------------------------------------------+
| 5.5.37-35.0-55 | Percona XtraDB Cluster (GPL), Release rel35.0, Revision 756, WSREP version 25.10, wsrep_25.10.r3985 |
+----------------+-----------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

server55n1 mysql> select @@wsrep_replicate_myisam,@@log_slave_updates;
+--------------------------+---------------------+
| @@wsrep_replicate_myisam | @@log_slave_updates |
+--------------------------+---------------------+
|                        1 |                   1 |
+--------------------------+---------------------+
1 row in set (0.00 sec)

Master:
mysql> create table t1 (id int);
Query OK, 0 rows affected (0.00 sec)

mysql> create table t2 (id int) engine=myisam;
Query OK, 0 rows affected (0.00 sec)

mysql> insert into t1 values (1);
Query OK, 1 row affected (0.01 sec)

mysql> insert into t2 values (1);
Query OK, 1 row affected (0.00 sec)

server55n1 mysql> select * from t1;
+------+
| id   |
+------+
|    1 |
+------+
1 row in set (0.00 sec)

server55n1 mysql> select * from t2;
+------+
| id   |
+------+
|    1 |
+------+
1 row in set (0.00 sec)

Second node:

server55n2 mysql> select * from t1;
+------+
| id   |
+------+
|    1 |
+------+
1 row in set (0.00 sec)

server55n2 mysql> select * from t2;
Empty set (0.00 sec)

So clearly the other nodes in the cluster are missing the DMLs to MyISAM tables, despite the fact that both wsrep_replicate_myisam and log_slave_updates are enabled on all nodes.
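
Missing-row drift like the above can be detected mechanically. This is a minimal sketch of the comparison logic only (toy row tuples; against a real cluster you would use a checksum tool such as pt-table-checksum instead of fetching full result sets):

```python
# Sketch: find rows present on the source node but absent on a replica.
# Only illustrates the comparison; not a substitute for a real checksum run.

def missing_rows(source_rows, replica_rows):
    """Return rows that exist on the source but not on the replica."""
    return sorted(set(source_rows) - set(replica_rows))

# Toy data mirroring the t2 example: node 1 has the row, node 2 is empty.
node1_t2 = [(1,)]
node2_t2 = []

print(missing_rows(node1_t2, node2_t2))  # -> [(1,)]
```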

Replication does work, though, when the async master uses STATEMENT binlog format:
mysql> set binlog_format=STATEMENT;
Query OK, 0 rows affected (0.01 sec)

mysql> insert into t2 values (2);
Query OK, 1 row affected (0.00 sec)

server55n2 mysql> select * from t2;
+------+
| id   |
+------+
|    2 |
+------+
1 row in set (0.00 sec)

Revision history for this message
Przemek (pmalkowski) wrote :

The same problem exists when replicating from an async master to a PXC cluster running the latest 5.6:
percona2 mysql> select @@version,@@version_comment;
+--------------------+---------------------------------------------------------------------------------------------------+
| @@version          | @@version_comment                                                                                 |
+--------------------+---------------------------------------------------------------------------------------------------+
| 5.6.19-67.0-56-log | Percona XtraDB Cluster (GPL), Release rel67.0, Revision 824, WSREP version 25.6, wsrep_25.6.r4111 |
+--------------------+---------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

Revision history for this message
Alex Yurchenko (ayurchen) wrote :

And I think there is an explanation in comment #2 already: all non-InnoDB replication events are captured and replicated in TOI at the parsing stage. Only the InnoDB engine has wsrep integration, hence the wsrep plugin does not know anything about ROW events applied by the MyISAM engine. STATEMENT events go through the parser - and voilà, you have them replicated!

However, this does not mean that ready MyISAM ROW events can't be replicated without full MyISAM-wsrep integration - but some work needs to be done.
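
The two code paths can be pictured schematically. The sketch below is not PXC source code - just a toy model of the explanation above, where STATEMENT events pass through the parser (whose wsrep hook captures writes to any engine for TOI replication) while ROW events go straight to the storage-engine handler, which only InnoDB wires into wsrep:

```python
# Toy model (not PXC internals) of why STATEMENT-format MyISAM writes
# replicate through Galera while ROW-format ones are silently dropped.

def galera_replicates(binlog_format, engine):
    """True if an incoming async-replication event reaches the cluster."""
    if binlog_format == "STATEMENT":
        # Statement events are re-executed through the parser, where the
        # wsrep hook captures the write (any engine) and replicates it in TOI.
        return True
    if binlog_format == "ROW":
        # Row events are applied directly by the storage-engine handler;
        # only InnoDB has wsrep integration, so MyISAM row events never
        # reach the Galera provider.
        return engine == "InnoDB"
    return False

for fmt in ("ROW", "STATEMENT"):
    for eng in ("InnoDB", "MyISAM"):
        status = "replicated" if galera_replicates(fmt, eng) else "dropped"
        print(f"{fmt:9} + {eng:6} -> {status}")
```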

Revision history for this message
Krunal Bauskar (krunal-bauskar) wrote :

The current behavior is in line with the design, but more is needed or expected here. So marking as WISHLIST.

Revision history for this message
Shahriyar Rzayev (rzayev-sehriyar) wrote :

Percona now uses JIRA for bug reports so this bug report is migrated to: https://jira.percona.com/browse/PXC-1162
