Repeated assertion failures on 5.6 when promoted to master
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| MySQL Server | Unknown | Unknown | | |
| Percona Server (moved to https://jira.percona.com/projects/PS) | New | Undecided | Unassigned | |
Bug Description
I have a server that worked successfully as Percona Server 5.5 for years (first as a master, then as a slave).
We upgraded it to Percona Server 5.6 and it worked properly as a slave for several weeks.
Once promoted to master, it crashed repeatedly until it was switched back to slave status.
The master workload is quite intensive (5 TB dataset, 300,000 tables, over 1,000 simultaneous connections, up to 1,000 queries per second), while the slave workload is much quieter.
I know that repeated assertion failures usually indicate a corrupted InnoDB tablespace, but the fact that everything works properly when the server is a slave, including full backups, makes me think we might have another kind of issue here.
Version: 5.6.31-rel77.0
It has happened 6 times within 24 hours, sometimes very quickly after startup.
The error is *always* "Assertion failure in thread xxx in file trx0trx.cc line 1384
Failing assertion: UT_LIST_
Error log:
Version: '5.6.31-77.0-log' socket: '/var/run/
2016-08-18 15:49:49 2be931e43700 InnoDB: Assertion failure in thread 48280564414208 in file trx0trx.cc line 1384
InnoDB: Failing assertion: UT_LIST_
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://
InnoDB: about forcing recovery.
13:49:49 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at http://
key_buffer_
read_buffer_
max_used_
max_threads=3002
thread_count=434
connection_
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x2be7c814d000
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 2be931e42e00 thread_stack 0x40000
[backtrace frames truncated: /usr/local/... and /lib/x86_...]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (2be7b7a91010): is an invalid pointer
Connection ID (thread ID): 5429
Status: NOT_KILLED
You may download the Percona Server operations manual by visiting
http://
You may find information in the manual which will help you identify the cause of the crash.
160818 15:49:53 mysqld_safe Number of processes running now: 0
160818 15:49:53 mysqld_safe mysqld restarted
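For reference, the truncated "could use up to" line above normally follows the stock mysqld formula key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads. Plugging in the values from the log and the configuration below gives a rough sketch, assuming that stock formula:

32 MB + (64 KB + 64 KB) * 3002 = 32 MB + ~375 MB ≈ 407 MB

which is modest for a server of this size, so these buffers alone are an unlikely culprit.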
Configuration:
[mysqld]
slave-skip-errors = 1062
log-slave-updates = ON
query_cache_size = 0
query_cache_type = 0
ssl-ca = /etc/mysql/
ssl-cert = /etc/mysql/
ssl-key = /etc/mysql/
skip-log-warnings
read_only = 0
basedir = /usr/local/mysql/
log-bin = /mysql/log/bin-log
relay-log = /mysql/
server-id = 1
binlog_format = STATEMENT
skip-name-resolve
skip-external-
user = mysql
pid-file = /var/run/
socket = /var/run/
port = 3306
datadir = /var/lib/mysql/data
tmpdir = /var/lib/mysql/tmp
bind-address = 0.0.0.0
key_buffer_size = 32M
read_buffer_size = 64K
read_rnd_
sort_buffer_size = 64K
max_allowed_packet = 200M
long_query_time = 0
max_connections = 3000
back_log = 500
delayed_queue_size = 50000
character-
default-
thread_cache_size = 100
expire_logs_days = 1
low-priority-
innodb_
innodb_
innodb_
innodb_flush_method = O_DIRECT
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_io_capacity = 6000
innodb_
performance_schema = OFF
innodb_
innodb_
innodb_file_format = barracuda
max-connect-errors = 1000000
log-error = /mysql/
innodb_
innodb_
innodb_
table_open_cache = 81920
innodb_open_files = 131072
open-files-limit = 166851
table-definitio
innodb_
innodb_read_ahead = 0
binlog_checksum = none
I've solved the problem by switching production back to a 5.5 server and leaving this 5.6 server as a slave.
Hi,
We have investigated this crash and it is not related to concurrency or write operations.
It is actually triggered by a specific kind of query, using CONVERT_TZ, that was not run on the slaves (an illustrative sketch follows below).
This is an upstream MySQL bug: bugs.mysql.com/bug.php?id=82910
We have created a ticket here with additional details and specific ways to reproduce the crash:
http://
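For illustration, this is the general shape of such a query (a sketch only; the table and column names are hypothetical, and the authoritative reproduction steps are in the tickets above). Using a named time zone forces MySQL to consult the mysql.time_zone% system tables, unlike a fixed offset such as '+02:00':

SELECT CONVERT_TZ(created_at, 'UTC', 'Europe/Paris')
FROM orders
WHERE id = 42;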