> The crash is in slave SQL thread, are you using both mysql replication and Galera replication?

Yes, I do. However, the crash happens even when there is just one Galera node present (the one which is a slave to the master running stock MySQL 5.5).

> Please show your configuration and overview of your cluster topology.

It's quite simple. I was trying to migrate from master-slave to Galera-based replication, so currently I have 3 nodes:

1. node a is the original master;
2. node b is the original slave, where I replaced MySQL with Percona XtraDB Cluster and enabled replication to synchronize with the master;
3. node c is an exact copy of node b with a slightly updated my.cnf to make it work with Galera.

The relevant parts of my.cnf from nodes b and c follow:

=== node b ===
[mysqld]
innodb_buffer_pool_size=4G
innodb_file_per_table
innodb_flush_log_at_trx_commit=0
innodb_log_file_size=128M
innodb_flush_method=O_DIRECT
table_open_cache=2048
query_cache_size=32M
read_rnd_buffer_size=1M
binlog-format=ROW
report-host=xtradb-1
relay-log=slave-bin
log-bin=master-bin
log-slave-updates
server-id=10011
expire_logs_days=4
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_provider_options="gmcast.listen_addr=tcp://0.0.0.0:5001;ist.recv_addr=192.168.5.1:6001;"
wsrep_cluster_name=mgl_cluster
wsrep_node_name=xtradb-1
wsrep_node_address=192.168.5.1
wsrep_slave_threads=8
wsrep_sst_method=xtrabackup
wsrep_sst_receive_address=192.168.5.1:7001
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1

[mysqld_safe]
wsrep_urls=gcomm://192.168.5.1:5001,gcomm://192.168.5.2:5001,gcomm://
===

Node c's my.cnf is the same except for the following bits:

=== node c ===
#log-slave-updates
wsrep_provider_options="gmcast.listen_addr=tcp://0.0.0.0:5001;ist.recv_addr=192.168.5.2:6001;"
wsrep_node_name=xtradb-2
wsrep_node_address=192.168.5.2
wsrep_sst_receive_address=192.168.5.2:7001
===

Node a is running MySQL 5.5.18 and has the exact same config file (except for
the Galera-specific bits).

Now, with only node a running, I start node b and wait until it has successfully initialized. Then I start MySQL replication to synchronize node b with the master (node a). At first I was getting a lot of the following messages:

===
WSREP: skipping FK key append
===

and then, at some point, I got:

===
14:11:22 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, something is
definitely wrong and this may fail.
Please help us make Percona Server better by reporting any bugs at
http://bugs.percona.com/

key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=11
max_threads=151
thread_count=9
connection_count=9
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338675 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x2ad1e4000990
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 2ad1d8ac3e78 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x35)[0x7c5fb5]
/usr/sbin/mysqld(handle_fatal_signal+0x4a4)[0x6a00f4]
/lib64/libpthread.so.0(+0xf500)[0x2ad06dee1500]
/usr/sbin/mysqld(wsrep_append_foreign_key+0xa2)[0x816cc2]
/usr/sbin/mysqld[0x84dc80]
/usr/sbin/mysqld[0x85100e]
/usr/sbin/mysqld[0x85218a]
/usr/sbin/mysqld[0x83ba01]
/usr/sbin/mysqld[0x81bc2f]
/usr/sbin/mysqld(_ZN7handler13ha_delete_rowEPKh+0x5e)[0x6a4aee]
/usr/sbin/mysqld(_ZN21Delete_rows_log_event11do_exec_rowEPK14Relay_log_info+0x148)[0x7428f8]
/usr/sbin/mysqld(_ZN14Rows_log_event14do_apply_eventEPK14Relay_log_info+0x22d)[0x7480fd]
/usr/sbin/mysqld(_Z26apply_event_and_update_posP9Log_eventP3THDP14Relay_log_info+0x125)[0x5317b5]
/usr/sbin/mysqld[0x535af7]
/usr/sbin/mysqld(handle_slave_sql+0xa45)[0x537025]
/lib64/libpthread.so.0(+0x7851)[0x2ad06ded9851]
/lib64/libc.so.6(clone+0x6d)[0x2ad06eca111d]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0):
Connection ID (thread ID): 15
Status: NOT_KILLED
===

From that point on, I got a segmentation fault every time I tried to start the slave SQL thread on node b. I had to revert to the previous version of Percona XtraDB Cluster to be able to synchronize the databases again.

Hope this helps.
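For completeness, the async replication on node b described above was set up with the standard slave commands. The host address, credentials, and binlog coordinates below are placeholders (node a's real address and the actual positions are not shown in this report), so treat this only as a sketch of the steps:

```sql
-- On node b (Percona XtraDB Cluster), point the async slave at node a.
-- All values here are placeholders, not the real ones from my setup.
CHANGE MASTER TO
    MASTER_HOST='node-a.example',          -- placeholder for node a's address
    MASTER_USER='repl',                    -- placeholder replication account
    MASTER_PASSWORD='***',
    MASTER_LOG_FILE='master-bin.000001',   -- placeholder binlog coordinates
    MASTER_LOG_POS=4;

START SLAVE;

-- The crash above occurs shortly after this, in the slave SQL thread.
SHOW SLAVE STATUS\G
```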