I ran into the same problem and it cost me some really hard days.
So to summarize:
1) The tip with --no-backup-locks did not work for @Medali
2) Next tip: force FLUSH TABLES WITH READ LOCK ....
That is quite a contrast, isn't it?
One does no locking at all - the other tip explicitly takes a read lock.
-> 1) No locking should keep the machine more responsive, but can cause problems with MyISAM (more on that later)
-> 2) Can get you into serious trouble with deadlocks
Both approaches are a problem: locking tables (on one node, if all goes well) is bad, and read locks are not much better.
It's a pity to read this (see the last block) only after searching for hints for hours: https://www.percona.com/doc/percona-xtradb-cluster/5.6/limitation.html
That doesn't seem really safe either. Hmm. (I wish that last block were in red and bold!)
As far as I can tell (still testing), the only solution would be this: https://www.percona.com/blog/2012/03/23/how-flush-tables-with-read-lock-works-with-innodb-tables/
So don't do any locking at all (perhaps even forbid it) and run innobackupex with:
--no-lock, in combination with --safe-slave-backup when backing up from a slave.
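For concreteness, a sketch of the invocation I mean (the target directory is a placeholder, and as I understand it --no-lock is only safe when no DDL statements and no writes to non-InnoDB tables happen during the backup):

```shell
# Sketch, not a verified recipe: back up without FLUSH TABLES WITH READ LOCK.
# --no-lock skips the global read lock entirely (InnoDB-only workloads only);
# --safe-slave-backup temporarily stops the slave SQL thread so the backup
# position is consistent. /backup/ is a placeholder target directory.
innobackupex --no-lock --safe-slave-backup /backup/
```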
Any ideas?