Inserts to MyISAM Tables from Async Master Do Not Propagate to All Nodes
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Percona XtraDB Cluster (moved to https://jira.percona.com/projects/PXC) | Status tracked in 5.6 | | |
5.5 | Confirmed | Wishlist | Unassigned |
5.6 | Confirmed | Wishlist | Unassigned |
Bug Description
While MyISAM support is experimental/not fully supported, I thought I'd still report some bugs - maybe they can be used to document the limitations of this support?
I have an async master running 5.5.30-30.1 and a 2-node cluster running 5.5.30-23.7.4 (it does not seem to work with 5.5.29 either) with this configuration:
# Async master:
log-bin=mysql-bin
log_slave_updates
server-id=99
binlog_format = ROW
default-
#PXC node 1, async slave:
binlog_format = ROW
sync_binlog = 0
binlog_cache_size = 1M
server_id = 100
log_slave_updates = 1
innodb_file_format = barracuda
innodb_strict_mode = 1
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_flush_method = O_DIRECT
innodb_
innodb_
innodb_doublewrite = 1
wsrep_node_name = uxdbc01
wsrep_cluster_name = uxdbc
wsrep_debug = 1
wsrep_node_address = 192.168.56.53
wsrep_node_
wsrep_provider = /usr/lib/
wsrep_provider_
wsrep_slave_threads = 2
wsrep_sst_method = xtrabackup
wsrep_sst_
wsrep_cluster_
wsrep_replicate
wsrep_auto_
wsrep_certify_nonPK = 1
# PXC node 2
binlog_format = ROW
sync_binlog = 0
binlog_cache_size = 1M
server_id = 100
log_slave_updates = 1
innodb_file_format = barracuda
innodb_strict_mode = 1
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_flush_method = O_DIRECT
innodb_
innodb_
innodb_doublewrite = 1
wsrep_node_name = uxdbc02
wsrep_cluster_name = uxdbc
wsrep_debug = 1
wsrep_node_address = 192.168.56.54
wsrep_node_
wsrep_provider = /usr/lib/
wsrep_provider_
wsrep_slave_threads = 2
wsrep_sst_method = xtrabackup
wsrep_sst_
wsrep_cluster_
wsrep_replicate
wsrep_auto_
wsrep_certify_nonPK = 1
To reproduce, consider these 2 MyISAM tables and one row of data (from OpenX). Create an async master and build the PXC nodes from these data.
CREATE TABLE `data_bkt_m` (
`interval_start` datetime NOT NULL,
`creative_id` mediumint(9) NOT NULL,
`zone_id` mediumint(9) NOT NULL,
`count` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`interval_
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
CREATE TABLE `audit` (
`auditid` mediumint(9) NOT NULL AUTO_INCREMENT,
`actionid` mediumint(9) NOT NULL,
`context` varchar(255) NOT NULL DEFAULT '',
`contextid` mediumint(9) DEFAULT NULL,
`parentid` mediumint(9) DEFAULT NULL,
`details` text NOT NULL,
`userid` mediumint(9) NOT NULL DEFAULT '0',
`username` varchar(64) DEFAULT NULL,
`usertype` tinyint(4) NOT NULL DEFAULT '0',
`updated` datetime DEFAULT NULL,
`account_id` mediumint(9) NOT NULL,
`advertiser_
`website_
PRIMARY KEY (`auditid`),
KEY `audit_
KEY `audit_updated` (`updated`),
KEY `audit_usertype` (`usertype`),
KEY `audit_username` (`username`),
KEY `audit_
KEY `audit_account_id` (`account_id`),
KEY `audit_
KEY `audit_
) ENGINE=MyISAM;
INSERT INTO data_bkt_
Then execute these 2 DML statements on the async master:
INSERT INTO audit (auditid, actionid,context, contextid, parentid, details, userid, username, usertype, updated, account_id, advertiser_
VALUES (2053, 2, 'clients', 1, NULL, 'a:2:{s:
UPDATE data_bkt_m SET count=31 WHERE interval_
Neither of these 2 statements is replicated properly; it looks like the async slave PXC node cannot handle these events correctly when they arrive via the async replication stream.
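One quick way to confirm the divergence is to compare table checksums on the master and on each PXC node (a sketch; `CHECKSUM TABLE` is standard MySQL syntax, and the table names are taken from the report above):

```sql
-- Run on the async master and on each PXC node; on an affected
-- node the checksums will differ from the master's values.
CHECKSUM TABLE audit, data_bkt_m;
```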
This could neatly explain why my edits to the mysql database on the master weren't replicated.
For migration to PXC, I have a cluster slaved via standard MySQL replication. Since I'm now working with a db on the webserver, all grants were either for localhost or 127.0.0.1, so I had to add grants for the new cluster IP address.
I dumped the mysql privilege tables, edited a column (localhost > local IP) and inserted the rows again, but this wasn't replicated to the slaved cluster. Only then did I discover that I had to turn on wsrep_replicate_myisam ;), but things still weren't replicated to all cluster members, so I gave up and went for a full SST by removing grastate.dat.
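For reference, MyISAM replication in PXC is gated by the `wsrep_replicate_myisam` variable, which defaults to OFF. It can be set in my.cnf or, since it is a dynamic variable, enabled at runtime (a sketch; run on every node):

```sql
-- Enable experimental MyISAM replication at runtime;
-- add wsrep_replicate_myisam = 1 to my.cnf to make it persistent.
SET GLOBAL wsrep_replicate_myisam = ON;
```

Note that even with this enabled, MyISAM writes are replicated via TOI-style emulation rather than certified like InnoDB writesets, which is part of why the feature is documented as experimental.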