In a 5-node (4 servers + 1 witness) cluster across a WAN, we are hitting the same bug with Percona XtraDB Cluster (percona-xtradb-cluster-5.6, 5.6.22-25.8-978.wheezy) and mysql Ver 14.14 Distrib 5.6.22-72.0, installed from the Percona repository on Debian 7.8.
The witness runs garbd 3.8.rf6147dd.
We also hit the bug when load balancing is enabled for writes on all nodes.
2015-04-22 09:16:06 11522 [Note] WSREP: Closing send monitor...
2015-04-22 09:16:06 11522 [Note] WSREP: Closed send monitor.
2015-04-22 09:16:06 11522 [Note] WSREP: gcomm: terminating thread
2015-04-22 09:16:06 11522 [Note] WSREP: gcomm: joining thread
2015-04-22 09:16:06 11522 [Note] WSREP: gcomm: closing backend
... skipping notes ...
2015-04-22 09:16:09 11522 [Note] InnoDB: FTS optimize thread exiting.
2015-04-22 09:16:09 11522 [Note] InnoDB: Starting shutdown...
09:16:09 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://bugs.launchpad.net/percona-xtradb-cluster
key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=6
max_threads=802
thread_count=13
connection_count=5
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 328325 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x7f782413f720
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7f789030de50 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x8ccd9e]
/usr/sbin/mysqld(handle_fatal_signal+0x36c)[0x6828dc]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf0a0)[0x7f788fef40a0]
/usr/sbin/mysqld(_Z14ip_to_hostnameP16sockaddr_storagePKcPPcPj+0x188)[0x5e9658]
/usr/sbin/mysqld[0x6cf1fc]
/usr/sbin/mysqld(_Z16login_connectionP3THD+0x46)[0x6d0666]
/usr/sbin/mysqld(_Z22thd_prepare_connectionP3THD+0x24)[0x6d0d34]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x137)[0x6d1067]
/usr/sbin/mysqld(handle_one_connection+0x42)[0x6d1262]
/usr/sbin/mysqld(pfs_spawn_thread+0x140)[0xb195d0]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50)[0x7f788feebb50]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f788e1d795d]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0): is an invalid pointer
Connection ID (thread ID): 5904
Status: NOT_KILLED
You may download the Percona XtraDB Cluster operations manual by visiting http://www.percona.com/software/percona-xtradb-cluster/. You may find information
in the manual which will help you identify the cause of the crash.
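The `ip_to_hostname` frame in the backtrace above indicates the segfault happens while mysqld resolves the hostname of an incoming client connection (`login_connection` → `ip_to_hostname`). As a purely diagnostic sketch (our inference from the stack trace, not a confirmed fix), the reverse DNS lookup on connect can be disabled:

```ini
# Diagnostic sketch only: skip-name-resolve avoids the reverse DNS lookup
# (ip_to_hostname) that the backtrace shows crashing.
# Caveat: with this set, GRANTs that match clients by hostname must be
# rewritten to match by IP address instead.
[mysqld]
skip-name-resolve
```

If the crashes stop with this set, that would at least narrow the bug to the host-cache/DNS resolution path.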
Error Logs:
2015-04-22 09:16:04 11522 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1615, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 5 seqno: 1057001)
2015-04-22 09:16:04 11522 [Warning] WSREP: RBR event 3 Update_rows apply warning: 1615, 1057001
2015-04-22 09:16:04 11522 [Warning] WSREP: Failed to apply app buffer: seqno: 1057001, status: 1
	 at galera/src/trx_handle.cpp:apply():351
Retrying 2th time
2015-04-22 09:16:04 11522 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1615, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 5 seqno: 1057001)
2015-04-22 09:16:04 11522 [Warning] WSREP: RBR event 3 Update_rows apply warning: 1615, 1057001
2015-04-22 09:16:04 11522 [Warning] WSREP: Failed to apply app buffer: seqno: 1057001, status: 1
	 at galera/src/trx_handle.cpp:apply():351
Retrying 3th time
2015-04-22 09:16:04 11522 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1615, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 5 seqno: 1057001)
2015-04-22 09:16:04 11522 [Warning] WSREP: RBR event 3 Update_rows apply warning: 1615, 1057001
2015-04-22 09:16:04 11522 [Warning] WSREP: Failed to apply app buffer: seqno: 1057001, status: 1
	 at galera/src/trx_handle.cpp:apply():351
Retrying 4th time
2015-04-22 09:16:04 11522 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1615, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 5 seqno: 1057001)
2015-04-22 09:16:04 11522 [Warning] WSREP: RBR event 3 Update_rows apply warning: 1615, 1057001
2015-04-22 09:16:04 11522 [Warning] WSREP: failed to replay trx: source: 46176511-e8ae-11e4-8f62-8a8a219c1de4 version: 3 local: 1 state: REPLAYING flags: 1 conn_id: 5899 trx_id: 51194495 seqnos (l: 8584, g: 1057001, s: 1056993, d: 1057000, ts: 2324575036984990)
2015-04-22 09:16:04 11522 [Warning] WSREP: Failed to apply trx 1057001 4 times
2015-04-22 09:16:04 11522 [ERROR] WSREP: trx_replay failed for: 6, query: void
2015-04-22 09:16:04 11522 [ERROR] Aborting
The nodes all have the same basic configuration:
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld]
binlog_format=ROW
default_storage_engine=InnoDB
datadir = /var/lib/mysql
thread_cache_size=128
max_connections = 800
max_allowed_packet = 64M
ignore-db-dir=lost+found
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
innodb_buffer_pool_size=1G
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_sst_method=xtrabackup-v2
wsrep_replicate_myisam=1
wsrep_slave_threads=8
wsrep_cluster_name=webmail_percona
wsrep_node_address=<%= @node_address %>
wsrep_sst_auth=<%= @user_and_password %>
wsrep_node_name=<%= @node_name %>
wsrep_cluster_address=gcomm://<%= @all_node_addresses %>
# Comment the above cluster_address and uncomment the below cluster_address to bootstrap cluster
#wsrep_cluster_address=gcomm://
wsrep_provider_options = "evs.send_window=512; evs.user_send_window=512; gmcast.segment=<%= @segment %>; evs.keepalive_period = PT3S; evs.inactive_check_period = PT10S; evs.suspect_timeout = PT30S; evs.inactive_timeout = PT1M; evs.install_timeout = PT1M; gcache.size = 1024M; socket.ssl_cert=/etc/mysql/replication.crt; socket.ssl_key=/etc/mysql/replication.key; socket.ssl = yes"
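The bootstrap comment in the config above describes the standard Galera procedure: the first node is started with an empty `gcomm://` address so it forms a new primary component, while the remaining nodes keep the full address list and join it. The bootstrap-time variant of the same template would look like:

```ini
# First node only, during bootstrap: an empty gcomm:// list forms a new
# cluster. All other nodes keep the full address list and join it.
#wsrep_cluster_address=gcomm://<%= @all_node_addresses %>
wsrep_cluster_address=gcomm://
```

(On PXC 5.6 the same effect is usually achieved with `service mysql bootstrap-pxc` instead of editing the file.)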
Fleu.