buf_LRU_free_page() crashes at !buf_page_hash_get_low( buf_pool, b->space, b->offset, fold)

Bug #1395543 reported by Aleksandr Kuzminsky on 2014-11-23
Affects                                                    Status        Importance  Assigned to        Milestone
Percona Server (moved to https://jira.percona.com/projects/PS)  Fix Released  High        Laurynas Biveinis
5.5                                                        Invalid       High        Unassigned
5.6                                                        Fix Released  High        Laurynas Biveinis

Bug Description

2014-11-22 10:31:53 7fcbe6ffd700 InnoDB: Assertion failure in thread 140513730615040 in file buf0lru.cc line 2079
InnoDB: Failing assertion: !buf_page_hash_get_low( buf_pool, b->space, b->offset, fold)
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
10:31:53 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at http://bugs.percona.com/

key_buffer_size=33554432
read_buffer_size=2097152
max_used_connections=2685
max_threads=20002
thread_count=352
connection_count=350
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 205147325 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0x90608c]
/usr/sbin/mysqld(handle_fatal_signal+0x352)[0x662992]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7fe57f432cb0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7fe57ea9d425]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7fe57eaa0b8b]
/usr/sbin/mysqld[0xa6aa3f]
/usr/sbin/mysqld[0xa5edaf]
/usr/sbin/mysqld[0xa64845]
/usr/sbin/mysqld[0xa65a5e]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7fe57f42ae9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7fe57eb5b3fd]

tags: added: i46600
tags: added: i48376
removed: i46600

    This is caused by a race condition between purge (Thread 53) and
    LRU flushing (Thread 1).

    A plausible sequence leading to the crash:

    Thread 1 calls buf_LRU_free_page(bpage, false). bpage is an
    uncompressed frame of a compressed page. The code takes the page
    hash lock, calls buf_LRU_block_remove_hashed(bpage, false), which
    removes the page from the page hash and releases the hash lock.

    Thread 53 then takes the hash lock and calls buf_pool_watch_set
    for the same page. Not finding it in the page hash, it unlocks
    the hash lock, locks all the hash locks, inserts a watch sentinel
    page into the page hash, and unlocks all but the original hash
    lock.

    Thread 1, continuing in buf_LRU_free_page, then reacquires the
    hash lock and the page mutex, finds the page in page_hash, and
    fails the assertion.

    This does not happen upstream because there buf_pool_watch_set
    acquires the buffer pool mutex and rechecks the page hash, and
    buf_LRU_free_page holds the buffer pool mutex throughout the
    race window.
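    The interleaving above can be sketched with a simplified,
    single-threaded model: a page-hash map guarded by one lock, a
    remover that deletes an entry and drops the lock (opening the
    race window), and a watcher that inserts a sentinel in that gap.
    All names below (page_hash, remove_hashed, watch_set,
    recheck_absent) are illustrative stand-ins, not the actual
    InnoDB code; the point is only to show why the re-check that
    backs the !buf_page_hash_get_low() assertion can fail.

    ```cpp
    #include <cstdio>
    #include <mutex>
    #include <unordered_map>

    // Hypothetical model: fold -> page entry (sentinel = -1).
    std::unordered_map<unsigned long, int> page_hash;
    std::mutex hash_lock;

    // Thread 1, step 1: analogous to buf_LRU_block_remove_hashed(),
    // which removes the page and releases the hash lock before the
    // caller's later re-check.
    void remove_hashed(unsigned long fold) {
        std::lock_guard<std::mutex> g(hash_lock);
        page_hash.erase(fold);
    }  // hash lock released here -- the race window opens

    // Thread 53: analogous to buf_pool_watch_set(); seeing no page,
    // it inserts a watch sentinel under the hash lock.
    void watch_set(unsigned long fold) {
        std::lock_guard<std::mutex> g(hash_lock);
        if (page_hash.count(fold) == 0)
            page_hash.emplace(fold, /* sentinel */ -1);
    }

    // Thread 1, step 2: back in buf_LRU_free_page(), the code
    // re-takes the hash lock and asserts the page is still absent.
    bool recheck_absent(unsigned long fold) {
        std::lock_guard<std::mutex> g(hash_lock);
        return page_hash.count(fold) == 0;
    }

    int main() {
        const unsigned long fold = 42;  // stand-in for the page's fold
        page_hash.emplace(fold, 1);

        remove_hashed(fold);   // thread 1 removes the page
        watch_set(fold);       // thread 53 sneaks a sentinel in
        bool ok = recheck_absent(fold);  // thread 1's assertion condition

        std::printf("assertion would %s\n", ok ? "hold" : "fail");
        return 0;
    }
    ```

    With the watcher step commented out, recheck_absent() returns
    true and the assertion would hold; holding one mutex across all
    three steps (as upstream does with the buffer pool mutex) closes
    the window entirely.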

tags: added: bp-split xtradb
summary: - Percona Server 5.6.20-68.0-657.precise crashes in file buf0lru.cc line
- 2079
+ buf_LRU_free_page() crashes at !buf_page_hash_get_low( buf_pool,
+ b->space, b->offset, fold)

Percona now uses JIRA for bug reports so this bug report is migrated to: https://jira.percona.com/browse/PS-851
