Comment 3 for bug 1414635

Adrien Fleury (fleu42) wrote:

We are hitting the same bug in a five-node cluster (4 database servers + 1 witness) spanning a WAN, running Percona XtraDB Cluster (percona-xtradb-cluster-5.6, 5.6.22-25.8-978.wheezy) with mysql Ver 14.14 Distrib 5.6.22-72.0, installed from the Percona repository on Debian 7.8.

The witness runs garbd 3.8.rf6147dd.

The bug is triggered whenever load balancing is enabled for writes across all nodes.

Error Logs:

2015-04-22 09:16:04 11522 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1615, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 5 seqno: 1057001)
2015-04-22 09:16:04 11522 [Warning] WSREP: RBR event 3 Update_rows apply warning: 1615, 1057001
2015-04-22 09:16:04 11522 [Warning] WSREP: Failed to apply app buffer: seqno: 1057001, status: 1
         at galera/src/trx_handle.cpp:apply():351
Retrying 2th time
2015-04-22 09:16:04 11522 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1615, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 5 seqno: 1057001)
2015-04-22 09:16:04 11522 [Warning] WSREP: RBR event 3 Update_rows apply warning: 1615, 1057001
2015-04-22 09:16:04 11522 [Warning] WSREP: Failed to apply app buffer: seqno: 1057001, status: 1
         at galera/src/trx_handle.cpp:apply():351
Retrying 3th time
2015-04-22 09:16:04 11522 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1615, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 5 seqno: 1057001)
2015-04-22 09:16:04 11522 [Warning] WSREP: RBR event 3 Update_rows apply warning: 1615, 1057001
2015-04-22 09:16:04 11522 [Warning] WSREP: Failed to apply app buffer: seqno: 1057001, status: 1
         at galera/src/trx_handle.cpp:apply():351
Retrying 4th time
2015-04-22 09:16:04 11522 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1615, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 5 seqno: 1057001)
2015-04-22 09:16:04 11522 [Warning] WSREP: RBR event 3 Update_rows apply warning: 1615, 1057001
2015-04-22 09:16:04 11522 [Warning] WSREP: failed to replay trx: source: 46176511-e8ae-11e4-8f62-8a8a219c1de4 version: 3 local: 1 state: REPLAYING flags: 1 conn_id: 5899 trx_id: 51194495 seq
nos (l: 8584, g: 1057001, s: 1056993, d: 1057000, ts: 2324575036984990)
2015-04-22 09:16:04 11522 [Warning] WSREP: Failed to apply trx 1057001 4 times
2015-04-22 09:16:04 11522 [ERROR] WSREP: trx_replay failed for: 6, query: void
2015-04-22 09:16:04 11522 [ERROR] Aborting

2015-04-22 09:16:06 11522 [Note] WSREP: Closing send monitor...
2015-04-22 09:16:06 11522 [Note] WSREP: Closed send monitor.
2015-04-22 09:16:06 11522 [Note] WSREP: gcomm: terminating thread
2015-04-22 09:16:06 11522 [Note] WSREP: gcomm: joining thread
2015-04-22 09:16:06 11522 [Note] WSREP: gcomm: closing backend
... skipping notes ...
2015-04-22 09:16:09 11522 [Note] InnoDB: FTS optimize thread exiting.
2015-04-22 09:16:09 11522 [Note] InnoDB: Starting shutdown...
09:16:09 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://bugs.launchpad.net/percona-xtradb-cluster

key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=6
max_threads=802
thread_count=13
connection_count=5
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 328325 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
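The memory figure above follows mysqld's worst-case formula from the variables it printed. As a sketch (the crash report does not print sort_buffer_size, so the MySQL 5.6 default of 256 KiB is assumed here, which is why the result differs from the 328325 K in the log):

```python
# Worst-case memory estimate, mirroring the formula in mysqld's crash report:
#   key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads
key_buffer_size = 8388608      # from the crash report
read_buffer_size = 131072      # from the crash report
sort_buffer_size = 256 * 1024  # assumption: not printed in the report (5.6 default)
max_threads = 802              # from the crash report

total_bytes = key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads
print(total_bytes // 1024, "K bytes")
```

Since the estimate scales linearly with max_threads, lowering max_connections is the quickest way to shrink it.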

Thread pointer: 0x7f782413f720
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7f789030de50 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x8ccd9e]
/usr/sbin/mysqld(handle_fatal_signal+0x36c)[0x6828dc]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf0a0)[0x7f788fef40a0]
/usr/sbin/mysqld(_Z14ip_to_hostnameP16sockaddr_storagePKcPPcPj+0x188)[0x5e9658]
/usr/sbin/mysqld[0x6cf1fc]
/usr/sbin/mysqld(_Z16login_connectionP3THD+0x46)[0x6d0666]
/usr/sbin/mysqld(_Z22thd_prepare_connectionP3THD+0x24)[0x6d0d34]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x137)[0x6d1067]
/usr/sbin/mysqld(handle_one_connection+0x42)[0x6d1262]
/usr/sbin/mysqld(pfs_spawn_thread+0x140)[0xb195d0]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50)[0x7f788feebb50]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f788e1d795d]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0): is an invalid pointer
Connection ID (thread ID): 5904
Status: NOT_KILLED

You may download the Percona XtraDB Cluster operations manual by visiting
http://www.percona.com/software/percona-xtradb-cluster/. You may find information
in the manual which will help you identify the cause of the crash.

All nodes share the same basic configuration:


[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld]
binlog_format=ROW
default_storage_engine=InnoDB
datadir = /var/lib/mysql
thread_cache_size=128

max_connections = 800
max_allowed_packet = 64M
ignore-db-dir=lost+found

innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
innodb_buffer_pool_size=1G
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2

wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_sst_method=xtrabackup-v2
wsrep_replicate_myisam=1
wsrep_slave_threads=8
wsrep_cluster_name=webmail_percona
wsrep_node_address=<%= @node_address %>
wsrep_sst_auth=<%= @user_and_password %>
wsrep_node_name=<%= @node_name %>

wsrep_cluster_address=gcomm://<%= @all_node_addresses %>
# Comment the above cluster_address and uncomment the below cluster_address to bootstrap cluster
#wsrep_cluster_address=gcomm://

wsrep_provider_options = "evs.send_window=512; evs.user_send_window=512; gmcast.segment=<%= @segment %>; evs.keepalive_period = PT3S; evs.inactive_check_period = PT10S; evs.suspect_timeout = PT30S; evs.inactive_timeout = PT1M; evs.install_timeout = PT1M; gcache.size = 1024M; socket.ssl_cert=/etc/mysql/replication.crt; socket.ssl_key=/etc/mysql/replication.key; socket.ssl = yes"
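For inspection, the semicolon-separated wsrep_provider_options string splits cleanly into key/value pairs. A minimal sketch (the option string below is abbreviated, and gmcast.segment=0 stands in for the templated <%= @segment %> value):

```python
# Minimal sketch: parse a wsrep_provider_options string into a dict.
# The string is abbreviated; gmcast.segment=0 is a placeholder for the
# templated segment value.
opts = ("evs.send_window=512; evs.user_send_window=512; "
        "gmcast.segment=0; evs.suspect_timeout = PT30S; "
        "gcache.size = 1024M; socket.ssl = yes")

parsed = {key.strip(): value.strip()
          for key, value in (item.split("=", 1)
                             for item in opts.split(";") if item.strip())}

print(parsed["evs.send_window"])  # prints 512
print(parsed["gcache.size"])      # prints 1024M
```

With every node accepting writes over a WAN, the evs.* timeouts in this string control how quickly a slow segment is suspected and evicted, so they are worth checking first.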

Fleu.