InnoDB failing assertion buf_page_in_file(bpage) and restarts
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Percona Server (moved to https://jira.percona.com/projects/PS) | New | Undecided | Unassigned | |
Bug Description
Hi,
On a server we're experiencing the following assertion failure:
101014 10:14:08 InnoDB: Assertion failure in thread 139663641130768 in file buf/buf0buf.c line 1645
InnoDB: Failing assertion: buf_page_in_file(bpage)
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
InnoDB: about forcing recovery.
101014 10:14:08 - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
key_buffer_size=…
read_buffer_size=…
max_used_connections=…
max_threads=200
threads_connected=…
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = … K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd: 0x7f05f4030ab0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f05f9b7aeb0 thread_stack 0x80000
/usr/sbin/
/usr/sbin/
/lib/libpthread
/lib/libc.
/lib/libc.
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/lib/libpthread
/lib/libc.
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x7f05f417d758 is an invalid pointer
thd->thread_id=…
thd->killed=…
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
mysqld: my_new.cc:51: int __cxa_pure_virtual(): Assertion `!"Aborted: pure virtual method called."' failed.
101014 10:14:09 [Note] Plugin 'FEDERATED' is disabled.
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Compressed tables use zlib 1.2.3.3
101014 10:14:10 InnoDB: highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 2081535906367
101014 10:14:10 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Doing recovery: scanned up to log sequence number 2081541149184
InnoDB: Doing recovery: scanned up to log sequence number 2081546392064
InnoDB: Doing recovery: scanned up to log sequence number 2081551634944
InnoDB: Doing recovery: scanned up to log sequence number 2081556877824
InnoDB: Doing recovery: scanned up to log sequence number 2081562120704
InnoDB: Doing recovery: scanned up to log sequence number 2081567363584
InnoDB: Doing recovery: scanned up to log sequence number 2081572606464
InnoDB: Doing recovery: scanned up to log sequence number 2081577849344
InnoDB: Doing recovery: scanned up to log sequence number 2081583092224
InnoDB: Doing recovery: scanned up to log sequence number 2081588335104
InnoDB: Doing recovery: scanned up to log sequence number 2081593577984
InnoDB: Doing recovery: scanned up to log sequence number 2081598820864
InnoDB: Doing recovery: scanned up to log sequence number 2081604063744
InnoDB: Doing recovery: scanned up to log sequence number 2081609306624
InnoDB: Doing recovery: scanned up to log sequence number 2081614549504
InnoDB: Doing recovery: scanned up to log sequence number 2081619792384
InnoDB: Doing recovery: scanned up to log sequence number 2081625035264
InnoDB: Doing recovery: scanned up to log sequence number 2081630278144
InnoDB: Doing recovery: scanned up to log sequence number 2081635521024
InnoDB: Doing recovery: scanned up to log sequence number 2081640763904
InnoDB: Doing recovery: scanned up to log sequence number 2081646006784
InnoDB: Doing recovery: scanned up to log sequence number 2081651249664
InnoDB: Doing recovery: scanned up to log sequence number 2081656492544
InnoDB: Doing recovery: scanned up to log sequence number 2081661735424
InnoDB: Doing recovery: scanned up to log sequence number 2081666978304
InnoDB: Doing recovery: scanned up to log sequence number 2081672221184
InnoDB: Doing recovery: scanned up to log sequence number 2081677464064
InnoDB: Doing recovery: scanned up to log sequence number 2081682706944
InnoDB: Doing recovery: scanned up to log sequence number 2081687949824
InnoDB: Doing recovery: scanned up to log sequence number 2081688296852
101014 10:14:14 InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
101014 10:16:30 Percona XtraDB (http://www.percona.com) … started; log sequence number …
101014 10:16:31 [Note] Event Scheduler: Loaded 0 events
101014 10:16:31 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.1.50-rel11.4' socket: '/var/run/
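As an aside, the "Doing recovery: scanned up to" lines in the restart log advance in uniform steps, which makes it easy to see how much redo had to be replayed. A quick check with numbers copied from the log (the only assumption is plain subtraction of LSNs, which in InnoDB count bytes written to the redo log):

```python
# Consecutive "scanned up to" LSNs copied from the recovery log above.
lsns = [
    2081541149184, 2081546392064, 2081551634944,
    2081556877824, 2081562120704, 2081567363584,
]

# Each full scan batch advances the LSN by the same amount.
steps = {b - a for a, b in zip(lsns, lsns[1:])}
print(steps)  # {5242880} -- 5 MiB per batch

# Total redo between the checkpoint LSN and the end of the scanned log.
total_redo = 2081688296852 - 2081535906367
print(total_redo)  # 152390485 bytes, roughly 145 MiB to replay
```

That is a fairly large recovery batch, which is consistent with the two-minute gap between the apply batch starting at 10:14:14 and "ready for connections" at 10:16:31.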
Sometimes thd->query names a failing query, but it is a completely random one, as are the crashes themselves. Sometimes we hit the assertion 7-8 times a day; sometimes it doesn't appear for a week. This happens with 5.1.42-xtradb, 5.1.47-percona, 5.1.49-percona and 5.1.50-percona on Debian Lenny mixed with some testing packages; all MySQL packages are installed from the Percona repository. 5.1.42 was the most stable, and the frequency of occurrence has increased with each server upgrade.
Also, found out that https:/
I'll try to provide more information if needed, but keep in mind that this is a production server with 50 GB+ of data, and the problem does not occur on any of the local test machines I've tried.
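For anyone hitting this in production: the "forcing recovery" document the log points to amounts to restarting mysqld with innodb_force_recovery set in my.cnf. A minimal sketch (the level shown is an illustration, not a recommendation for this particular server):

```ini
# my.cnf sketch for salvaging data from a corrupted InnoDB instance.
# Levels run from 1 (least intrusive) to 6; start at 1 and raise the
# value only as far as needed. Levels 4 and above can lose data, so
# treat the instance as read-only while this is set.
[mysqld]
innodb_force_recovery = 1
```

With the server up in this mode, the usual next step is to dump the affected tables and reload them into a freshly initialized instance.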
I also just got what appears to be the same thing, on the 5.1.54-rel12.5 FreeBSD amd64 binary:
Version: '5.1.54-rel12.5-log' socket: '/tmp/mysql.sock' port: 3306 Percona Server with XtraDB (GPL), Release 12.5, Revision 188
110114 7:15:13 InnoDB: Assertion failure in thread 34380732992 in file buf/buf0buf.c line 2102
InnoDB: Failing assertion: buf_page_in_file(bpage)
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
InnoDB: about forcing recovery.
110114 7:15:13 - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
key_buffer_size=16777216
read_buffer_size=2097152
max_used_connections=49
max_threads=128
threads_connected=10
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 804188 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
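The header of the second crash carries enough numbers to back out the one variable it truncated. Assuming 5.1's usual worst-case formula, key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads, the implied sort_buffer_size can be recovered from the reported total (a rough sketch; the reported total is rounded to whole kilobytes):

```python
# Values copied from the second crash header above.
key_buffer_size = 16777216   # bytes
read_buffer_size = 2097152   # bytes
max_threads = 128
reported_total_k = 804188    # the "= 804188 K" figure

# Solve the worst-case memory formula for sort_buffer_size.
total_bytes = reported_total_k * 1024
sort_buffer_size = (total_bytes - key_buffer_size) // max_threads - read_buffer_size
print(sort_buffer_size)  # 4205280 bytes, i.e. roughly a 4 MB sort buffer
```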
Assertion failed: (! "Aborted: pure virtual method called."), function __cxa_pure_virtual, file my_new.cc, line 51.