InnoDB: Assertion failure in thread in file line 3932 | abort (sig=6) in row_search_for_mysql

Bug #1395249 reported by Roel Van de Paar on 2014-11-22
This bug affects 1 person
Affects: Percona Server
Status (tracked in 5.7): Won't Fix

Bug Description

(gdb) bt
#0 0x00007fa08f79b771 in pthread_kill () from /lib64/
#1 0x0000000000a90dda in my_write_core (sig=6) at /bzr/5.6.hacked_dbg/mysys/stacktrace.c:422
#2 0x0000000000722176 in handle_fatal_signal (sig=6) at /bzr/5.6.hacked_dbg/sql/
#3 <signal handler called>
#4 0x00007fa08dd4e5c9 in raise () from /lib64/
#5 0x00007fa08dd4fcd8 in abort () from /lib64/
#6 0x0000000000c8bbe1 in row_search_for_mysql (buf=0x7f9fe4141c10 "\377\331\001", mode=2, prebuilt=0x7f9fe422b078, match_mode=1, direction=0) at /bzr/5.6.hacked_dbg/storage/innobase/row/
#7 0x0000000000b66dd1 in ha_innobase::index_read (this=0x7f9fef6fad10, buf=0x7f9fe4141c10 "\377\331\001", key_ptr=0x7f9fe280bb50 "\002", key_len=4, find_flag=HA_READ_KEY_EXACT) at /bzr/5.6.hacked_dbg/storage/innobase/handler/
#8 0x0000000000647a4a in handler::index_read_map (this=0x7f9fef6fad10, buf=0x7f9fe4141c10 "\377\331\001", key=0x7f9fe280bb50 "\002", keypart_map=1, find_flag=HA_READ_KEY_EXACT) at /bzr/5.6.hacked_dbg/sql/handler.h:2284
#9 0x0000000000644ffd in handler::index_read_idx_map (this=0x7f9fef6fad10, buf=0x7f9fe4141c10 "\377\331\001", index=0, key=0x7f9fe280bb50 "\002", keypart_map=1, find_flag=HA_READ_KEY_EXACT) at /bzr/5.6.hacked_dbg/sql/
#10 0x000000000063cbfc in handler::ha_index_read_idx_map (this=0x7f9fef6fad10, buf=0x7f9fe4141c10 "\377\331\001", index=0, key=0x7f9fe280bb50 "\002", keypart_map=1, find_flag=HA_READ_KEY_EXACT) at /bzr/5.6.hacked_dbg/sql/
#11 0x00000000007a36e8 in join_read_const (tab=0x7f9fe280b2e8) at /bzr/5.6.hacked_dbg/sql/
#12 0x00000000007a320a in join_read_const_table (tab=0x7f9fe280b2e8, pos=0x7f9fe280b7d8) at /bzr/5.6.hacked_dbg/sql/
#13 0x00000000009a9b1d in make_join_statistics (join=0x7f9fe280abb8, tables_arg=0x7f9fe280a228, conds=0x7f9fe280b0b0, keyuse_array=0x7f9fe280ade0, first_optimization=true) at /bzr/5.6.hacked_dbg/sql/
#14 0x00000000009a18a7 in JOIN::optimize (this=0x7f9fe280abb8) at /bzr/5.6.hacked_dbg/sql/
#15 0x00000000007ff413 in mysql_execute_select (thd=0x7f9fe0419000, select_lex=0x7f9fe041bc60, free_join=true) at /bzr/5.6.hacked_dbg/sql/
#16 0x00000000007ff74c in mysql_select (thd=0x7f9fe0419000, tables=0x7f9fe280a228, wild_num=0, fields=..., conds=0x7f9fe280a958, order=0x7f9fe041be28, group=0x7f9fe041bd60, having=0x0, select_options=2148797184, result=0x7f9fe280ab90, unit=0x7f9fe041b618, select_lex=0x7f9fe041bc60) at /bzr/5.6.hacked_dbg/sql/
#17 0x00000000007fd872 in handle_select (thd=0x7f9fe0419000, result=0x7f9fe280ab90, setup_tables_done_option=0) at /bzr/5.6.hacked_dbg/sql/
#18 0x00000000007d6fb3 in execute_sqlcom_select (thd=0x7f9fe0419000, all_tables=0x7f9fe280a228) at /bzr/5.6.hacked_dbg/sql/
#19 0x00000000007cf72a in mysql_execute_command (thd=0x7f9fe0419000) at /bzr/5.6.hacked_dbg/sql/
#20 0x00000000007d976b in mysql_parse (thd=0x7f9fe0419000, rawbuf=0x7f9fe280a010 "SELECT c FROM sbtest1 WHERE id=2", length=32, parser_state=0x7f9fd8f1acf0) at /bzr/5.6.hacked_dbg/sql/
#21 0x00000000007cc0b7 in dispatch_command (command=COM_QUERY, thd=0x7f9fe0419000, packet=0x7f9fef711001 "SELECT c FROM sbtest1 WHERE id=2", packet_length=32) at /bzr/5.6.hacked_dbg/sql/
#22 0x00000000007cb038 in do_command (thd=0x7f9fe0419000) at /bzr/5.6.hacked_dbg/sql/
#23 0x000000000079591e in do_handle_one_connection (thd_arg=0x7f9fe0419000) at /bzr/5.6.hacked_dbg/sql/
#24 0x0000000000795435 in handle_one_connection (arg=0x7f9fe0419000) at /bzr/5.6.hacked_dbg/sql/
#25 0x0000000000add6af in pfs_spawn_thread (arg=0x7f9fecb40520) at /bzr/5.6.hacked_dbg/storage/perfschema/
#26 0x00007fa08f796df3 in start_thread () from /lib64/
#27 0x00007fa08de0f01d in clone () from /lib64/

This assert was seen during a very long sysbench run which was kill -9'ed regularly and then restarted. Hopefully the coredump plus a code review will show the bug, but I will also add the testcase I used. PZ first found this bug doing sysbench runs + ctrl+c.

This bug was seen on a minor-patched mysqld; including the patch for completeness:

$ bzr diff
=== modified file 'storage/innobase/srv/'
--- storage/innobase/srv/ 2014-09-25 14:16:07 +0000
+++ storage/innobase/srv/ 2014-11-12 09:40:50 +0000
@@ -1808,6 +1808,8 @@
        export_vars.innodb_row_lock_current_waits =

+fprintf(stderr, "innodb_row_lock_current_waits = %lu\n", (ulong) srv_stats.n_lock_wait_current_count);
        export_vars.innodb_row_lock_time = srv_stats.n_lock_wait_time / 1000;

        if (srv_stats.n_lock_wait_count > 0) {
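
With the patch applied, the counter value is printed to stderr and therefore ends up in the server error log. A quick way to pull the most recent value back out could look like this (a sketch only: the log path is a placeholder, and a sample line is appended first so the sketch is self-contained):

```shell
#!/bin/bash
# Sketch: extract the counter printed by the debug fprintf() above from
# the server error log. ERROR_LOG is a placeholder path; adjust as needed.
ERROR_LOG=${ERROR_LOG:-mysqld.err}

# Append one line in the exact format the patch emits, so this runs standalone:
echo "innodb_row_lock_current_waits = 3" >> "$ERROR_LOG"

# Extract the most recently reported value (third whitespace-separated field):
last_waits=$(grep "innodb_row_lock_current_waits" "$ERROR_LOG" | tail -n 1 | awk '{print $3}')
echo "last reported waits: $last_waits"
```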

Tags: qa
description: updated
Roel Van de Paar (roel11) wrote :

Testcase setup (in my case it took about a night of runtime to trigger);

1. Attached tar is a recent version of sysbench. Extract in home dir, creates ~/sysbenchnew
2. Start server (cd basedir; ~/percona-qa/ 0; ./start)
3. Prep sysbench:
    export LD_PRELOAD=${PWD}/bundle/
    ~/sysbenchnew/sysbench --num-threads=64 --report-interval=10 --max-time=0 --max-requests=0 --rand-type=pareto --oltp-table-size=1000000 --mysql-user=root --test=/home/roel/sysbenchnew/oltp.lua --mysql-socket=/sda/Percona-Server-5.6.21-rel70.0-676-debug.Linux.x86_64/socket.sock --mysql-db=test prepare
4. Put the following lines, which start two sysbench runs at the same time (notice the &), in a script named "3":
   ~/sysbenchnew/sysbench --num-threads=64 --report-interval=10 --max-time=0 --max-requests=0 --rand-type=pareto --oltp-table-size=1000000 --mysql-user=root --test=/home/roel/sysbenchnew/oltp.lua --mysql-socket=/sda/Percona-Server-5.6.21-rel70.0-676-debug.Linux.x86_64/socket.sock --mysql-db=test run &
   ~/sysbenchnew/sysbench --num-threads=64 --report-interval=10 --max-time=0 --max-requests=0 --rand-type=pareto --oltp-table-size=1000000 --mysql-user=root --test=/home/roel/sysbenchnew/oltp.lua --mysql-socket=/sda/Percona-Server-5.6.21-rel70.0-676-debug.Linux.x86_64/socket.sock --threads=120 --mysql-db=test run &
5. Make a script named 4 and start it;
   $ cat sysbenchnew_scripts/4
   while(true); do
     ./3 &
     DELAY=$[1 + ( $RANDOM % ( 3600 ) ) ]
     sleep $DELAY
     kill -9 `pidof sysbench`
   done
6. Set up a light monitoring script (maybe not needed, just adding for completeness) and start it (all scripts in different shells);
   $ cat sysbenchnew_scripts/2
      while(true); do ./bin/mysql -uroot -S/sda/Percona-Server-5.6.21-rel70.0-676-debug.Linux.x86_64/socket.sock test -e"show global status like 'Innodb_row_lock_current_waits'; show global status like 'questions'; show global status like 'queries'; "; sleep 0.2; done
7. Wait a night or so for the crash to happen.
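
The kill/restart driver from steps 4-5 can be sketched as one self-contained script (a sketch only; "./3" is assumed to be the script from step 4 that launches the two concurrent sysbench runs, and the 1..3600-second delay range is taken from script "4" above):

```shell
#!/bin/bash
# Minimal sketch of the kill/restart driver from steps 4-5 above.
# "./3" (the script launching the two concurrent sysbench runs) is
# assumed to exist alongside this script.

random_delay() {
  # Random interval of 1..3600 seconds, as in script "4".
  echo $(( 1 + RANDOM % 3600 ))
}

kill_restart_loop() {
  while true; do
    ./3 &                       # start the concurrent sysbench runs
    sleep "$(random_delay)"     # let them run for a random interval
    kill -9 $(pidof sysbench)   # hard-kill every sysbench process
  done
}

# To start the cycle, call: kill_restart_loop
```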

Roel Van de Paar (roel11) wrote :

Ran into the bug again. The bug is quite repeatable using the script setup as provided. Adding the scripts here to help with setup; they are more or less as described above. Edit paths etc. as needed.

./2 = monitoring script
./3a = sysbench prepare script
./3b = sysbench normal run script
./4 = sysbench normal run script (3b) monitor/kill/restart
./5 = not used for this bug (bug 1395249), but used for bug 1395253

Roel Van de Paar (roel11) wrote :

Combine the scripts in comment #2 with the sysbenchnew binary download in comment #1, following the instructions in the original report.

Roel Van de Paar (roel11) wrote :

Note that the ./2 script now automatically tees its output, so there is no need to run ./2 | tee 2.log anymore.
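
One way a monitoring script can tee itself (an assumption about how ./2 might do it, not its actual source) is to wrap the monitoring loop in a function and pipe it through tee:

```shell
#!/bin/bash
# Sketch only: the real ./2 polls "SHOW GLOBAL STATUS" in a loop; a single
# echo stands in for that here so the sketch is self-contained.
monitor() {
  echo "Innodb_row_lock_current_waits 0"
}

# Piping the function through tee sends its output to both the terminal
# and the log file, so callers no longer need "./2 | tee 2.log":
monitor | tee -a 2.log
```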

Roel Van de Paar (roel11) wrote :

Not sure why this bug was marked New; it has a testcase.

Percona now uses JIRA for bug reports, so this bug report has been migrated to:
