Hi,
I'd like to jump in on this, as we're currently hitting the same issue on Percona XtraDB 5.6.22-72.0-56-log.
One of our tables, which is re-created daily, has grown in size. Initially no problems occurred, but now that we're at ~150k rows we are experiencing issues on our production server.
At first it was a single large INSERT statement that failed and crashed the node; after splitting it into smaller batches the data inserted OK, but creating the fulltext index now fails:
[HY000]: General error: 2013 Lost connection to MySQL server during query. Query : [ CREATE FULLTEXT INDEX idx on catalog_production.product_catalog (product_code, brand_name, title, description) ]
13:18:29 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://bugs.launchpad.net/percona-xtradb-cluster
key_buffer_size=33554432
read_buffer_size=131072
max_used_connections=26
max_threads=502
thread_count=6
connection_count=4
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 233164 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x35)[0x8fa965]
/usr/sbin/mysqld(handle_fatal_signal+0x4b4)[0x665644]
/lib64/libpthread.so.0(+0xf710)[0x7f0c43341710]
/usr/sbin/mysqld[0xaa5d7e]
/lib64/libpthread.so.0(+0x79d1)[0x7f0c433399d1]
/lib64/libc.so.6(clone+0x6d)[0x7f0c4183d8fd]
You may download the Percona XtraDB Cluster operations manual by visiting http://www.percona.com/software/percona-xtradb-cluster/. You may find information
in the manual which will help you identify the cause of the crash.
161019 14:18:30 mysqld_safe Number of processes running now: 0
161019 14:18:30 mysqld_safe WSREP: not restarting wsrep node automatically
161019 14:18:30 mysqld_safe mysqld from pid file /var/lib/mysql/mysql.pid ended
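For reference, the INSERT workaround mentioned above was simply chunking the rows into fixed-size batches and building one statement per batch. A minimal sketch, with illustrative (not our real) table and column names, and no quoting/escaping shown:

```python
def chunked(rows, batch_size=1000):
    """Yield successive fixed-size batches from a list of rows."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

def build_insert(batch):
    """Build one multi-row INSERT for a batch (placeholder schema, no escaping)."""
    values = ", ".join(
        "('{}', '{}')".format(code, title) for code, title in batch
    )
    return ("INSERT INTO product_catalog (product_code, title) "
            "VALUES {};".format(values))

rows = [("P{}".format(n), "Product {}".format(n)) for n in range(2500)]
statements = [build_insert(batch) for batch in chunked(rows, 1000)]
# 2500 rows with batch_size=1000 -> 3 statements (1000 + 1000 + 500 rows)
```

With this in place the data loads fine; it is only the CREATE FULLTEXT INDEX step that still brings the node down.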
The my.cnf file was generated with the Percona configuration tools for 8 GB of memory:
[mysql]
# CLIENT #
port = 3306
socket = /var/lib/mysql/mysql.sock
[mysqld]
# GENERAL #
user = mysql
default-storage-engine = InnoDB
socket = /var/lib/mysql/mysql.sock
pid-file = /var/lib/mysql/mysql.pid
# MyISAM #
key-buffer-size = 32M
myisam-recover = FORCE,BACKUP

# SAFETY #
max-allowed-packet = 256M
max-connect-errors = 1000000

# DATA STORAGE #
datadir = /var/lib/mysql/
tmpdir = /tmp

# BINARY LOGGING #
log-bin = /var/lib/mysql/mysql-bin
expire-logs-days = 14
sync-binlog = 1

# CACHES AND LIMITS #
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 4096
table-open-cache = 4096

# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 2G
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 6G
innodb-ft-min-token-size = 2

# LOGGING #
log-error = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes = 1
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/mysql-slow.log

# This is being disabled to attempt to reduce memory footprint in v5.6
performance_schema=0

# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so

# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node address
wsrep_node_address=xxx.xxx.xxx.xxx

# SST method
wsrep_sst_method=xtrabackup-v2

# Cluster name
wsrep_cluster_name=xxx

# Authentication for SST method
wsrep_sst_auth="sstuser:xxx"