Issues with replication from 5.5 node to 5.6 node

Bug #1267494 reported by Seppo Jaakola
This bug affects 1 person
Affects                       Status                  Importance   Assigned to      Milestone
MySQL patches by Codership    Fix Released            High         Seppo Jaakola
Percona XtraDB Cluster        Status tracked in 5.6
  (moved to https://jira.percona.com/projects/PXC)
  5.5                         Invalid                 Undecided    Unassigned
  5.6                         Fix Released            Undecided    Unassigned

Bug Description

This bug tracks issues with replication from a 5.5 node to a 5.6 node. There is a separate bug for replication in the opposite direction: https://bugs.launchpad.net/codership-mysql/+bug/1251137

The 5.5 -> 5.6 replication direction is critical, as it will be needed for online migration to a 5.6 cluster.

Changed in codership-mysql:
status: New → In Progress
importance: Undecided → High
assignee: nobody → Seppo Jaakola (seppo-jaakola)
milestone: none → 5.6.15-25.2
description: updated
Revision history for this message
Seppo Jaakola (seppo-jaakola) wrote :

For migration to a 5.6 cluster, we can assume that all 5.5 nodes will be upgraded first to use Galera replication 3.*, so that all nodes in the new cluster will run the same Galera plugin version.

When testing such a 5.5 -> 5.6 scenario, it turned out that parallel applying could hit an unresolvable conflict. However, after merging the fix for https://bugs.launchpad.net/codership-mysql/+bug/1262887, 5.5 -> 5.6 replication can run with several slave appliers in the 5.6 node.
At the moment I see no issues with replication in this direction; I will still run more complex loads to verify.
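
As a rough illustration of the scenario Seppo describes (the thread count below is an arbitrary example, not a value from this report), parallel applying on the 5.6 node is controlled by wsrep_slave_threads in that node's my.cnf:

    # Illustrative my.cnf fragment for the 5.6 node (example values only)
    [mysqld]
    # More than one applier thread enables parallel applying of writesets
    # replicated from the 5.5 nodes; setting this to 1 disables it.
    wsrep_slave_threads = 4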

Revision history for this message
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

@Seppo,

"we can assume that all 5.5 nodes will be upgraded first to use
Galera replication 3.*,"

Is this (Galera 3 with 5.5 beforehand) a requirement for 5.5 to 5.6 replication with PA enabled? (or even with PA disabled)

This is because, if it is indeed a requirement, then before the cluster upgrade
(to 5.6), all the 5.5 nodes will need to be upgraded to Galera 3 first, and
to do this in a rolling fashion, Galera 3 needs to be started with
socket.checksum=1 (otherwise the Galera incompatibility breaks replication).
Then, after the upgrade is complete, the nodes will probably need a restart with socket.checksum=1 removed (if they want to take advantage of Galera 3's checksumming, since socket.checksum is not dynamic).
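
A minimal configuration sketch of what this describes, with the provider path and values being illustrative assumptions rather than taken from this report: a 5.5 node already moved to Galera 3, while Galera 2 nodes remain in the cluster, would carry the compatibility checksum option:

    # Illustrative my.cnf fragment for a node running Galera 3 in a cluster
    # that still contains Galera 2 nodes (provider path is an assumption)
    [mysqld]
    wsrep_provider         = /usr/lib/libgalera_smm.so
    # socket.checksum=1 keeps the wire checksum compatible with Galera 2 peers;
    # the option is not dynamic, so dropping it later needs another rolling restart.
    wsrep_provider_options = "socket.checksum=1"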

Revision history for this message
Alex Yurchenko (ayurchen) wrote :

I don't think the prior upgrade to 3.x is necessary, unless some 3.x-specific features are to be used (like preordered events). The new writeset format should not be essential for 5.6 operation. However, one more round will still be needed to get rid of socket.checksum=1 (note that the last node to upgrade to 3.x does not need to set that). So, since we're still going for 2 rounds, it may be worthwhile to exercise some caution and upgrade one thing at a time: first Galera, and then the MySQL version.
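
As a hedged sketch of the extra round mentioned above (the provider path is again an assumption), once every node runs Galera 3 the override can simply be dropped, one node at a time, so that the Galera 3 default checksum takes effect:

    # Illustrative my.cnf fragment after all nodes are on Galera 3 / MySQL 5.6
    [mysqld]
    wsrep_provider = /usr/lib/libgalera_smm.so
    # No socket.checksum override any more: Galera 3 falls back to its default
    # checksum, which is the point of this final rolling restart.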

Revision history for this message
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

@Alex,

> unless some 3.x-specific features are to be used (like preordered events).

We are talking about the period during the upgrade, so they may not be in use then (are preordered events on by default?).

> New writeset format should not be essential for 5.6 operation

This is for 5.5 to 5.6 replication (Galera 2 to Galera 3), right?

Yes, 2 rounds are needed if 3.x's checksumming is intended to be
used. With the constraint of a prior upgrade, 3 rounds will be
required.

So, to confirm, during the upgrade with PA enabled, Galera 2 on PXC 5.5 should do, right?

Revision history for this message
Seppo Jaakola (seppo-jaakola) wrote :

Testing has not shown any further issues with replication in 5.5 -> 5.6 direction.

The problem with parallel applying (wsrep_slave_threads > 1) was fixed along with bug lp:1262887.

And the actual fixes pushed were:
http://bazaar.launchpad.net/~codership/codership-mysql/wsrep-5.5/revision/3934
http://bazaar.launchpad.net/~codership/codership-mysql/wsrep-5.5/revision/3939

Changed in codership-mysql:
status: In Progress → Fix Committed
Changed in codership-mysql:
status: Fix Committed → Fix Released
Revision history for this message
Shahriyar Rzayev (rzayev-sehriyar) wrote :

Percona now uses JIRA for bug reports so this bug report is migrated to: https://jira.percona.com/browse/PXC-1571
