Percona Server Assertion Failure in ha_innodb.cc
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Percona Server moved to https://jira.percona.com/projects/PS | Expired | Undecided | Unassigned | |
| 5.5 | Expired | Undecided | Unassigned | |
Bug Description
I posted this on the Percona forum and was told to report it here. Quick background:
We're running Percona Server 5.5.22 with over 2500 databases (we're a SaaS provider), each with ~100 tables. I keep seeing these assertion failures pop up. Five days will go by without issue, serving both reads and writes, but then the server crashes. After recovery it crashes again two days later. Rinse/repeat. I've restored from backups (ec2-consistent
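For a sense of scale, the deployment described above works out to roughly a quarter-million tables. A quick back-of-the-envelope check, using only the counts stated in the report:

```python
# Rough table count implied by the description above.
databases = 2500      # "over 2500 databases"
tables_per_db = 100   # "~100 tables each"

total_tables = databases * tables_per_db
print(total_tables)   # → 250000
```

That volume of distinct tables is worth keeping in mind when reading the crash details below, since every open table carries per-table server state.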
Error log:
121202 1:38:54 InnoDB: Assertion failure in thread 140707766650624 in file ha_innodb.cc line 4220
InnoDB: Failing assertion: share->
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to x
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB:
InnoDB: about forcing recovery.
01:38:54 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
key_buffer_
read_buffer_
max_used_
max_threads=500
thread_count=16
connection_count=16
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x88dd16c0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7ff91472be58 thread_stack 0x40000
/usr/sbin/
/usr/sbin/
/lib64/
/lib64/
/lib64/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/lib64/
/lib64/
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7ff73ac24930): is an invalid pointer
Connection ID (thread ID): 5798290
Status: NOT_KILLED
The manual page at x contains
information that should help you find out what is causing the crash.
121202 01:38:57 mysqld_safe Number of processes running now: 0
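For reference, the forced-recovery option the log alludes to is set in my.cnf. A minimal sketch, assuming a standard config layout (the level shown is the mildest one; per the MySQL manual it should only be used temporarily, to get the server up long enough to dump data):

```ini
# Sketch only — innodb_force_recovery > 0 puts InnoDB in a degraded,
# read-mostly mode. Use the lowest level that lets mysqld start,
# then dump the data and rebuild, and remove the setting afterwards.
[mysqld]
innodb_force_recovery = 1
```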
I've tried everything to reproduce the issue, but even with the general log on, I can't find the exact statement that crashes the server. The next step would be to dump network traffic and analyze it once the server crashes again.
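The network-capture step mentioned above could look like the following. This is a sketch only: the interface, port, and output path are assumptions, and the tcpdump flags shown produce the text format that Percona Toolkit's pt-query-digest can analyze after a crash.

```shell
# Assumed values — adjust to the actual host.
IFACE=eth0
PORT=3306
CAPTURE=/var/tmp/mysql.tcp.txt

# Capture command (run on the database host, as root):
#   tcpdump -i "$IFACE" -s 65535 -x -nn -q -tttt "port $PORT" > "$CAPTURE"
# After the next crash, summarize the last statements seen with:
#   pt-query-digest --type tcpdump "$CAPTURE"
echo "capture plan: $IFACE port $PORT -> $CAPTURE"
```

Rotating the capture file (or capping it with tcpdump's -C/-W options) would keep it from growing unbounded while waiting days for the next crash.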
I've upgraded to 5.5.28 to check whether the issue still occurs, and will update this ticket when either the server crashes again or it survives a full week.
If it continues to crash, I would be interested in investing in a support contract to get this looked at.