bool trans_check_state(THD*): Assertion `thd->transaction.stmt.is_empty()' failed in debug build

Bug #1234395 reported by Raghavendra D Prabhu
Affects                        Status      Importance  Assigned to  Milestone
MySQL patches by Codership     Confirmed   Medium      Unassigned   -
Percona XtraDB Cluster         (moved to https://jira.percona.com/projects/PXC; status tracked in 5.6)
  5.5                          Invalid     Undecided   Unassigned   -
  5.6                          New         Undecided   Unassigned   -

Bug Description

BT:

(gdb) bt
#0 0x00007fcff47f40b1 in pthread_kill () from /usr/lib/libpthread.so.0
#1 0x0000000000941cbf in my_write_core (sig=6) at /media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/mysys/stacktrace.c:422
#2 0x00000000006d83ec in handle_fatal_signal (sig=6) at /media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/sql/signal_handler.cc:254
#3 <signal handler called>
#4 0x00007fcff2b843d9 in raise () from /usr/lib/libc.so.6
#5 0x00007fcff2b857d8 in abort () from /usr/lib/libc.so.6
#6 0x00007fcff2b7d506 in __assert_fail_base () from /usr/lib/libc.so.6
#7 0x00007fcff2b7d5b2 in __assert_fail () from /usr/lib/libc.so.6
#8 0x00000000007e7bc5 in trans_check_state (thd=thd@entry=0x7fcfcc0009a0) at /media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/sql/transaction.cc:42
#9 0x00000000007e83a3 in trans_rollback (thd=thd@entry=0x7fcfcc0009a0) at /media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/sql/transaction.cc:347
#10 0x0000000000638f5e in wsrep_rollback (thd=thd@entry=0x7fcfcc0009a0, global_seqno=<optimized out>) at /media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/sql/wsrep_applier.cc:299
#11 0x000000000063995b in wsrep_commit_cb (ctx=ctx@entry=0x7fcfcc0009a0, meta=meta@entry=0x7fcff4a9c9f0, exit=exit@entry=0x7fcff4a9c500, commit=commit@entry=false)
    at /media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/sql/wsrep_applier.cc:329
#12 0x00007fcff20d6d5d in apply_trx_ws (recv_ctx=recv_ctx@entry=0x7fcfcc0009a0, apply_cb=0x6397cb <wsrep_apply_cb(void*, void const*, unsigned long, wsrep_trx_meta const*)>,
    commit_cb=0x6398ed <wsrep_commit_cb(void*, wsrep_trx_meta const*, bool*, bool)>, trx=..., meta=...) at galera/src/replicator_smm.cpp:61
#13 0x00007fcff20d99b0 in galera::ReplicatorSMM::apply_trx (this=this@entry=0x32e0bd0, recv_ctx=recv_ctx@entry=0x7fcfcc0009a0, trx=trx@entry=0x7fcfcc1b65d0) at galera/src/replicator_smm.cpp:470
#14 0x00007fcff20dbefe in galera::ReplicatorSMM::process_trx (this=0x32e0bd0, recv_ctx=0x7fcfcc0009a0, trx=0x7fcfcc1b65d0) at galera/src/replicator_smm.cpp:1242
#15 0x00007fcff20baa19 in galera::GcsActionSource::dispatch (this=this@entry=0x32e11b0, recv_ctx=recv_ctx@entry=0x7fcfcc0009a0, act=..., exit_loop=@0x7fcff4a9d310: false) at galera/src/gcs_action_source.cpp:118
#16 0x00007fcff20bb90c in galera::GcsActionSource::process (this=0x32e11b0, recv_ctx=0x7fcfcc0009a0, exit_loop=@0x7fcff4a9d310: false) at galera/src/gcs_action_source.cpp:177
#17 0x00007fcff20dc11b in galera::ReplicatorSMM::async_recv (this=0x32e0bd0, recv_ctx=0x7fcfcc0009a0) at galera/src/replicator_smm.cpp:346
#18 0x00007fcff20ecc68 in galera_recv (gh=<optimized out>, recv_ctx=<optimized out>) at galera/src/wsrep_provider.cpp:207
#19 0x0000000000639d10 in wsrep_replication_process (thd=0x7fcfcc0009a0) at /media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/sql/wsrep_thd.cc:264
#20 0x0000000000624a3e in start_wsrep_THD (arg=0x639cbc <wsrep_replication_process(THD*)>) at /media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/sql/mysqld.cc:5380
#21 0x00007fcff47ef0a2 in start_thread () from /usr/lib/libpthread.so.0
#22 0x00007fcff2c3443d in clone () from /usr/lib/libc.so.6

Snip of error log:

===============================================================================================

2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: gcs/src/gcs.c:gcs_fc_stop_end():394: SENDING FC_STOP (local seqno: 198, fc_offset: 0)
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: galera/src/replicator_smm.cpp:apply_trx_ws():42: Done executing TO isolated action: 165
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: galera/src/replicator_smm.cpp:apply_trx_ws():35: Executing TO isolated action: source: 263d2c7f-2ba2-11e3-bb9e-07bbcf9de051 version: 3 local: 0 state: APPLYING flags: 65 conn_id: 24 trx_id: -1 seqnos (l: 174, g: 166, s: 154, d: 165, ts: 192193494313563)
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: galera/src/replicator_smm.cpp:apply_trx_ws():42: Done executing TO isolated action: 166
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: galera/src/replicator_smm.cpp:apply_trx_ws():35: Executing TO isolated action: source: 263d2c7f-2ba2-11e3-bb9e-07bbcf9de051 version: 3 local: 0 state: APPLYING flags: 65 conn_id: 13 trx_id: -1 seqnos (l: 175, g: 167, s: 154, d: 166, ts: 192193495660136)
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: galera/src/replicator_smm.cpp:apply_trx_ws():42: Done executing TO isolated action: 167
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: galera/src/replicator_smm.cpp:apply_trx_ws():35: Executing TO isolated action: source: 263d2c7f-2ba2-11e3-bb9e-07bbcf9de051 version: 3 local: 0 state: APPLYING flags: 65 conn_id: 23 trx_id: -1 seqnos (l: 176, g: 168, s: 158, d: 167, ts: 192193498961729)
2013-10-03 02:05:34 8278 [ERROR] Slave SQL: Error 'Table 'test.table1_innodb_int_autoinc' doesn't exist' on query. Default database: 'test'. Query: 'CREATE TABLE j LIKE `table1_innodb_int_autoinc`', Error_code: 1146
2013-10-03 02:05:34 8278 [Warning] WSREP: RBR event 1 Query apply warning: 1, 168
2013-10-03 02:05:34 8278 [Warning] WSREP: galera/src/replicator_smm.cpp:apply_trx_ws():51: Ignoring error for TO isolated action: source: 263d2c7f-2ba2-11e3-bb9e-07bbcf9de051 version: 3 local: 0 state: APPLYING flags: 65 conn_id: 23 trx_id: -1 seqnos (l: 176, g: 168, s: 158, d: 167, ts: 192193498961729)
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: galera/src/replicator_smm.cpp:process_commit_cut():1282: Got commit cut from GCS: 130
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: galera/src/replicator_smm.cpp:apply_trx_ws():35: Executing TO isolated action: source: 263d2c7f-2ba2-11e3-bb9e-07bbcf9de051 version: 3 local: 0 state: APPLYING flags: 65 conn_id: 21 trx_id: -1 seqnos (l: 180, g: 171, s: 160, d: 170, ts: 192193515461822)
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: galera/src/replicator_smm.cpp:apply_trx_ws():42: Done executing TO isolated action: 171
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: gcs/src/gcs.c:gcs_fc_cont_end():447: SENDING FC_CONT (local seqno: 207, fc_offset: 0)
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: galera/src/replicator_smm.cpp:apply_trx_ws():35: Executing TO isolated action: source: 263d2c7f-2ba2-11e3-bb9e-07bbcf9de051 version: 3 local: 0 state: APPLYING flags: 65 conn_id: 26 trx_id: -1 seqnos (l: 183, g: 174, s: 164, d: 173, ts: 192193740078179)
2013-10-03 02:05:34 8278 [ERROR] Slave SQL: Error 'Table 'test.table100_innodb_key_pk_parts_2_int_autoinc' doesn't exist' on query. Default database: 'test'. Query: 'CREATE TABLE i LIKE `table100_innodb_key_pk_parts_2_int_autoinc`', Error_code: 1146
2013-10-03 02:05:34 8278 [Warning] WSREP: RBR event 1 Query apply warning: 1, 174
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: gcs/src/gcs.c:gcs_fc_stop_end():394: SENDING FC_STOP (local seqno: 208, fc_offset: 0)
2013-10-03 02:05:34 8278 [Warning] WSREP: galera/src/replicator_smm.cpp:apply_trx_ws():51: Ignoring error for TO isolated action: source: 263d2c7f-2ba2-11e3-bb9e-07bbcf9de051 version: 3 local: 0 state: APPLYING flags: 65 conn_id: 26 trx_id: -1 seqnos (l: 183, g: 174, s: 164, d: 173, ts: 192193740078179)
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: gcs/src/gcs.c:gcs_fc_cont_end():447: SENDING FC_CONT (local seqno: 208, fc_offset: 0)
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: galera/src/replicator_smm.cpp:apply_trx_ws():35: Executing TO isolated action: source: 263d2c7f-2ba2-11e3-bb9e-07bbcf9de051 version: 3 local: 0 state: APPLYING flags: 65 conn_id: 19 trx_id: -1 seqnos (l: 184, g: 175, s: 164, d: 174, ts: 192193741457593)
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: gcs/src/gcs.c:gcs_fc_stop_end():394: SENDING FC_STOP (local seqno: 209, fc_offset: 0)
2013-10-03 02:05:34 8278 [Note] [Debug] WSREP: galera/src/replicator_smm.cpp:apply_trx_ws():42: Done executing TO isolated action: 175
2013-10-03 02:05:34 8278 [ERROR] Slave SQL: Could not execute Update_rows event on table test.table1_innodb_int; Can't find record in 'table1_innodb_int', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log FIRST, end_log_pos 189, Error_code: 1032
2013-10-03 02:05:34 8278 [Warning] WSREP: RBR event 3 Update_rows apply warning: 120, 176
mysqld: /media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/sql/transaction.cc:42: bool trans_check_state(THD*): Assertion `thd->transaction.stmt.is_empty()' failed.
20:35:34 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at http://bugs.percona.com/

key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=3
max_threads=153
thread_count=3
connection_count=3
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 69207 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7fcfcc0009a0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7fcff4a9daf0 thread_stack 0x40000
/pxc56/bin/mysqld(my_print_stacktrace+0x2c)[0x942498]
/pxc56/bin/mysqld(handle_fatal_signal+0x304)[0x6d8248]
/usr/lib/libpthread.so.0(+0xf870)[0x7fcff47f6870]
/usr/lib/libc.so.6(gsignal+0x39)[0x7fcff2b843d9]
/usr/lib/libc.so.6(abort+0x148)[0x7fcff2b857d8]
/usr/lib/libc.so.6(+0x2e506)[0x7fcff2b7d506]
/usr/lib/libc.so.6(+0x2e5b2)[0x7fcff2b7d5b2]
/pxc56/bin/mysqld(_Z17trans_check_stateP3THD+0x65)[0x7e7bc5]
/pxc56/bin/mysqld(_Z14trans_rollbackP3THD+0xd0)[0x7e83a3]
/pxc56/bin/mysqld[0x638f5e]
/pxc56/bin/mysqld(_Z15wsrep_commit_cbPvPK14wsrep_trx_metaPbb+0x6e)[0x63995b]
/pxc56/lib/libgalera_smm.so(+0x1b4d5d)[0x7fcff20d6d5d]
/pxc56/lib/libgalera_smm.so(_ZN6galera13ReplicatorSMM9apply_trxEPvPNS_9TrxHandleE+0xb0)[0x7fcff20d99b0]
/pxc56/lib/libgalera_smm.so(_ZN6galera13ReplicatorSMM11process_trxEPvPNS_9TrxHandleE+0x10e)[0x7fcff20dbefe]
/pxc56/lib/libgalera_smm.so(_ZN6galera15GcsActionSource8dispatchEPvRK10gcs_actionRb+0x1b9)[0x7fcff20baa19]
/pxc56/lib/libgalera_smm.so(_ZN6galera15GcsActionSource7processEPvRb+0x4c)[0x7fcff20bb90c]
/pxc56/lib/libgalera_smm.so(_ZN6galera13ReplicatorSMM10async_recvEPv+0x6b)[0x7fcff20dc11b]
/pxc56/lib/libgalera_smm.so(galera_recv+0x18)[0x7fcff20ecc68]
/pxc56/bin/mysqld[0x639d10]
/pxc56/bin/mysqld(start_wsrep_THD+0x39e)[0x624a3e]
/usr/lib/libpthread.so.0(+0x80a2)[0x7fcff47ef0a2]
/usr/lib/libc.so.6(clone+0x6d)[0x7fcff2c3443d]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0): Connection ID (thread ID): 2
Status: NOT_KILLED

You may download the Percona Server operations manual by visiting
http://www.percona.com/software/percona-server/. You may find information
in the manual which will help you identify the cause of the crash.
Writing a core file

==========================================================================================

Attaching the error logs and the output of 'thread apply all bt full'.

Got it with:

perl runall-new.pl --basedir=/pxc56 --galera=ms --mysqld=--skip-performance-schema --reporter=Shutdown,Backtrace,ErrorLog,ErrorLogAlarm,Deadlock --grammar=conf/galera/galera_stress-56.yy --gendata=conf/galera/galera_stress-56.zz --threads=16 --queries=50000 --mysqld=--wsrep-provider=/pxc56/lib/libgalera_smm.so --seed=time --mysqld=--innodb_flush_method=O_DIRECT --mysqld=--wsrep-debug=1 --mysqld=--wsrep_provider_options='debug=1'

It is node2 that crashed.
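
To make the failing invariant concrete, below is a minimal standalone sketch (toy types only, an assumption for illustration, not the Percona Server sources) of what the check at sql/transaction.cc:42 guards: trans_rollback() asserts that the per-statement transaction context (thd->transaction.stmt) is already empty, so an error path that returns without the statement-level cleanup aborts a debug build exactly like the assertion above.

// Toy illustration only: hypothetical Session/StmtTransaction types standing
// in for THD and thd->transaction.stmt; compiling and running this aborts on
// the assert(), mirroring the debug-build assertion in the report.
#include <cassert>
#include <iostream>
#include <vector>

struct StmtTransaction {                       // stands in for thd->transaction.stmt
  std::vector<int> registered_engines;
  bool is_empty() const { return registered_engines.empty(); }
  void reset() { registered_engines.clear(); }
};

struct Session {                               // stands in for THD
  StmtTransaction stmt;
};

// Apply one replicated event; on the error path the per-statement
// transaction is left registered (the cleanup is skipped).
void apply_event(Session &s, bool fail)
{
  s.stmt.registered_engines.push_back(1);      // engine registers for the statement
  if (fail)
    return;                                    // error: statement not ended
  s.stmt.reset();                              // normal path: statement ended
}

// Mirrors the debug-build check: a full rollback expects the statement
// transaction to have been cleaned up first.
void trans_rollback_toy(Session &s)
{
  assert(s.stmt.is_empty());
  std::cout << "rolled back\n";
}

int main()
{
  Session s;
  apply_event(s, /*fail=*/true);               // e.g. an apply error such as HA_ERR_KEY_NOT_FOUND
  trans_rollback_toy(s);                       // aborts: stmt.is_empty() assertion fails
}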

Raghavendra D Prabhu (raghavendra-prabhu) wrote :

The error log to check is node1.err (node0.err is very big, 136M; check it only if required).

Raghavendra D Prabhu (raghavendra-prabhu) wrote :

bt full (truncated for brevity):

#0 0x00007fcff47f40b1 in pthread_kill () from /usr/lib/libpthread.so.0
No symbol table info available.
#1 0x0000000000941cbf in my_write_core (sig=6) at /media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/mysys/stacktrace.c:422
No locals.
#2 0x00000000006d83ec in handle_fatal_signal (sig=6) at /media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/sql/signal_handler.cc:254
        curr_time = <optimized out>
        thrs = <optimized out>
        mins = <optimized out>
        hrs_buf = "20"
        hrs = <optimized out>
        mins_buf = "35"
        secs_buf = "34"
        tmins = <optimized out>
        secs = <optimized out>
        thd = 0x7fcfcc0009a0
#3 <signal handler called>
No symbol table info available.
#4 0x00007fcff2b843d9 in raise () from /usr/lib/libc.so.6
No symbol table info available.
#5 0x00007fcff2b857d8 in abort () from /usr/lib/libc.so.6
No symbol table info available.
#6 0x00007fcff2b7d506 in __assert_fail_base () from /usr/lib/libc.so.6
No symbol table info available.
#7 0x00007fcff2b7d5b2 in __assert_fail () from /usr/lib/libc.so.6
No symbol table info available.
#8 0x00000000007e7bc5 in trans_check_state (thd=thd@entry=0x7fcfcc0009a0) at /media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/sql/transaction.cc:42
        xa_state = XA_NOTR
        _db_stack_frame_ = {
          func = 0xc63cee "trans_rollback",
          file = 0xc63db8 "/media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/sql/transaction.cc",
          level = 2147483651,
          prev = 0x7fcff4a9bee0
        }
        __PRETTY_FUNCTION__ = "bool trans_check_state(THD*)"
#9 0x00000000007e83a3 in trans_rollback (thd=thd@entry=0x7fcfcc0009a0) at /media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/sql/transaction.cc:347
        res = <optimized out>
        _db_stack_frame_ = {
          func = 0xc07539 "wsrep_replication_process",
          file = 0xc071f0 "/media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/sql/wsrep_thd.cc",
          level = 2147483650,
          prev = 0x7fcff4a9d680
        }
        buf1 = "\001-\000\314\317\177\000\000\021-\000\314\317\177\000\000\177-\000\314\317\177\000\000\001-\000\314\317\177\000\000\177-\000\314\317\177", '\000' <repeats 42 times>, "\360ɩ\364\004\000\000\000@E\000\314\317\177\000\000\360\277\251\364\317\177\000\000h\346\251\364\317\177\000\000\220\277\251\364\317\177\000\000(\023\275\000\000\000\000\000\377\377\377\377\377\377\377\377\t\017\275\000\000\000\000\000\340\277\251\364\317\177\000\000s%\275\000\000\000\000\000\377\377\377\377\000\000\000\000p\001\000\000\002\000\000\000\b\257\021\314\317\177\000\000P$\000\314\317\177\000\000\001-\000\314\317\177\000\000\300p\300"...
        buf2 = "\002\000\000\200\317\177\000\000\200֩\364\317\177\000\000\060\300\251\364\317\177\000\000\240\t\000\314\317\177\000\000\001-\000\314\317\177\000\000\340\300\251\364\317\177\000\000@\300\251\364\317\177\000\000\016\306q\000\000\000\000\000\220\300\251\364\317\177\000\000\256\337q\000\000\000\000\000\002\020\000\000\000\000\000\000\300p\300\000\000\000\000\000\060\301\251\364\317\177\000\0...

summary: changed from "bool trans_check_state(THD*): Assertion `thd->transaction.stmt.is_empty()' failed." to "bool trans_check_state(THD*): Assertion `thd->transaction.stmt.is_empty()' failed in debug build"
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

This is (probably) due to https://bugs.launchpad.net/percona-xtradb-cluster/+bug/1212247 not being ported to 5.6 yet, since disabling TEMPORARY tables avoids this crash.

Raghavendra D Prabhu (raghavendra-prabhu) wrote :

Regarding #4: it actually crashed with TEMPORARY tables disabled as well, using:

perl runall-new.pl --basedir=/pxc56 --galera=ms --mysqld=--skip-performance-schema --reporter=Shutdown,Backtrace,ErrorLog,ErrorLogAlarm,Deadlock --grammar=conf/galera/galera_stress-56.yy --gendata=conf/galera/galera_stress-56.zz --threads=16 --queries=50000 --mysqld=--wsrep-provider=/pxc56/lib/libgalera_smm.so --seed=time --mysqld=--innodb_flush_method=O_DIRECT --mysqld=--lock_wait_timeout=50

Changed in codership-mysql:
assignee: nobody → Seppo Jaakola (seppo-jaakola)
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

Able to confirm this with codership-5.6 latest HEAD build as well.

mysqld: /media/Tintin/Work/code/codership/codership-5.6/sql/transaction.cc:42: bool trans_check_state(THD*): Assertion `thd->transaction.stmt.is_empty()' failed.
13:15:56 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=3
max_threads=151
thread_count=3
connection_count=3
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 68310 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7f81200009a0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7f8145db2ae8 thread_stack 0x40000
/chip/bin/mysqld(my_print_stacktrace+0x2c)[0x946e9a]
/chip/bin/mysqld(handle_fatal_signal+0x2f7)[0x67f607]
/usr/lib/libpthread.so.0(+0xf870)[0x7f8145b0b870]
/usr/lib/libc.so.6(gsignal+0x39)[0x7f81440a13d9]
/usr/lib/libc.so.6(abort+0x148)[0x7f81440a27d8]
/usr/lib/libc.so.6(+0x2e506)[0x7f814409a506]
/usr/lib/libc.so.6(+0x2e5b2)[0x7f814409a5b2]
/chip/bin/mysqld(_Z17trans_check_stateP3THD+0x5a)[0x7ab77a]
/chip/bin/mysqld(_Z14trans_rollbackP3THD+0x29e)[0x7ac53f]
/chip/bin/mysqld(_Z15wsrep_commit_cbPvPK14wsrep_trx_metaPbb+0x184)[0x5b0e4e]
/pxc56/lib/libgalera_smm.so(+0x1b4d5d)[0x7f81435f3d5d]
/pxc56/lib/libgalera_smm.so(_ZN6galera13ReplicatorSMM9apply_trxEPvPNS_9TrxHandleE+0xb0)[0x7f81435f69b0]
/pxc56/lib/libgalera_smm.so(_ZN6galera13ReplicatorSMM11process_trxEPvPNS_9TrxHandleE+0x10e)[0x7f81435f8efe]
/pxc56/lib/libgalera_smm.so(_ZN6galera15GcsActionSource8dispatchEPvRK10gcs_actionRb+0x1b9)[0x7f81435d7a19]
/pxc56/lib/libgalera_smm.so(_ZN6galera15GcsActionSource7processEPvRb+0x4c)[0x7f81435d890c]
/pxc56/lib/libgalera_smm.so(_ZN6galera13ReplicatorSMM10async_recvEPv+0x6b)[0x7f81435f911b]
/pxc56/lib/libgalera_smm.so(galera_recv+0x18)[0x7f8143609c68]
/chip/bin/mysqld[0x5b1188]
/chip/bin/mysqld(start_wsrep_THD+0x5c2)[0x598a3f]
/usr/lib/libpthread.so.0(+0x80a2)[0x7f8145b040a2]
/usr/lib/libc.so.6(clone+0x6d)[0x7f814415143d]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0): Connection ID (thread ID): 2
Status: NOT_KILLED

The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
Writing a core file

==================================================

It crashes quite rapidly after the start of the test (with both PXC and codership binaries): http://paste.wnohang.net/cdd43e

perl runall-new.pl --basedir=/chip --galera=ms --mysqld=--skip-performance-schema --reporter=Shutdown,Backtrace,ErrorLog,ErrorLogAlarm,Deadlock ...

Changed in codership-mysql:
assignee: Seppo Jaakola (seppo-jaakola) → Teemu Ollakka (teemu-ollakka)
status: New → In Progress
importance: Undecided → High
milestone: none → 5.6.14-24.1
Teemu Ollakka (teemu-ollakka) wrote :

Potential fix pushed in: http://bazaar.launchpad.net/~codership/codership-mysql/5.6/revision/3892

This fixes an inconsistency caused by a multi-row insert when the binlog was turned off. The assertion in the title was hit after event applying failed due to that inconsistency.
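
To illustrate the failure sequence described in that comment, here is a rough standalone toy (hypothetical tables and helpers, not the server code or the actual fix): one node ends up with only part of a multi-row insert, so a later row-update event cannot find its record and applying fails with a key-not-found error, which is the condition that sends the applier into the trans_rollback() path shown in the backtraces above.

// Toy illustration only: two in-memory "nodes" diverge on a multi-row insert;
// the same replicated update then fails on the diverged node, mirroring
// "Can't find record ... HA_ERR_KEY_NOT_FOUND" in the error log.
#include <iostream>
#include <map>
#include <string>

using Table = std::map<int, std::string>;      // pk -> row payload

// Apply a multi-row insert; 'partial' simulates the divergence being fixed.
void apply_insert(Table &t, bool partial)
{
  t[1] = "a";
  t[2] = "b";
  if (!partial)
    t[3] = "c";                                // the diverged node misses this row
}

// Apply an Update_rows-style event; false stands in for a key-not-found error.
bool apply_update(Table &t, int pk, const std::string &val)
{
  auto it = t.find(pk);
  if (it == t.end())
    return false;                              // record not found -> apply error
  it->second = val;
  return true;
}

int main()
{
  Table node1, node2;
  apply_insert(node1, /*partial=*/false);
  apply_insert(node2, /*partial=*/true);       // inconsistent copy

  std::cout << "node1 update: "
            << (apply_update(node1, 3, "c2") ? "ok" : "key not found") << "\n";
  std::cout << "node2 update: "
            << (apply_update(node2, 3, "c2") ? "ok" : "key not found") << "\n";
}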

Changed in codership-mysql:
assignee: Teemu Ollakka (teemu-ollakka) → nobody
status: In Progress → Confirmed
importance: High → Medium
milestone: 5.6.14-25.1 → 5.6.14-25.2
Raghavendra D Prabhu (raghavendra-prabhu) wrote :

Have not hit this assertion with recent builds.

Raghavendra D Prabhu (raghavendra-prabhu) wrote :

Hit this again:

2013-12-14 22:44:04 13854 [ERROR] Slave SQL: Could not execute Update_rows event on table test.table100_innodb_key_pk_parts_2_int_autoinc; Can't find record in 'table100_innodb_key_pk_parts_2_int_autoinc', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log FIRST, end_log_pos 619, Error_code: 1032
2013-12-14 22:44:04 13854 [Warning] WSREP: RBR event 5 Update_rows apply warning: 120, 1423
mysqld: /media/Tintin/Work/code/percona-xtradb-cluster/pxc56/Percona-Server/sql/transaction.cc:42: bool trans_check_state(THD*): Assertion `thd->transaction.stmt.is_empty()' failed.
17:14:04 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://bugs.launchpad.net/percona-xtradb-cluster

key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=3
max_threads=153
thread_count=3
connection_count=3
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 69229 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7f17fc0009a0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7f1823997af0 thread_stack 0x40000
/pxc56/bin/mysqld(my_print_stacktrace+0x2c)[0x949f18]
/pxc56/bin/mysqld(handle_fatal_signal+0x304)[0x6db53c]
/usr/lib/libpthread.so.0(+0xf870)[0x7f18236f4870]
/usr/lib/libc.so.6(gsignal+0x39)[0x7f1821a80319]
/usr/lib/libc.so.6(abort+0x148)[0x7f1821a81718]
/usr/lib/libc.so.6(+0x2e446)[0x7f1821a79446]
/usr/lib/libc.so.6(+0x2e4f2)[0x7f1821a794f2]
/pxc56/bin/mysqld(_Z17trans_check_stateP3THD+0x65)[0x7edce5]
/pxc56/bin/mysqld(_Z14trans_rollbackP3THD+0xd0)[0x7ee4ee]
/pxc56/bin/mysqld[0x63b975]
/pxc56/bin/mysqld(_Z15wsrep_commit_cbPvjPK14wsrep_trx_metaPbb+0x6e)[0x63c2df]
/pxc56/lib/libgalera_smm.so(+0x1b15b0)[0x7f1820fd55b0]
/pxc56/lib/libgalera_smm.so(_ZN6galera13ReplicatorSMM9apply_trxEPvPNS_9TrxHandleE+0xb0)[0x7f1820fd7610]
/pxc56/lib/libgalera_smm.so(_ZN6galera13ReplicatorSMM11process_trxEPvPNS_9TrxHandleE+0x10e)[0x7f1820fda14e]
/pxc56/lib/libgalera_smm.so(_ZN6galera15GcsActionSource8dispatchEPvRK10gcs_actionRb+0x1b9)[0x7f1820fb9cd9]
/pxc56/lib/libgalera_smm.so(_ZN6galera15GcsActionSource7processEPvRb+0x4c)[0x7f1820fbabcc]
/pxc56/lib/libgalera_smm.so(_ZN6galera13ReplicatorSMM10async_recvEPv+0x6b)[0x7f1820fda36b]
/pxc56/lib/libgalera_smm.so(galera_recv+0x18)[0x7f1820fea4a8]
/pxc56/bin/mysqld[0x63c798]
/pxc56/bin/mysqld(start_wsrep_THD+0x39c)[0x6262e0]
/usr/lib/libpthread.so.0(+0x80a2)[0x7f18236ed0a2]
/usr/lib/libc.so.6(clone+0x6d)[0x7f1821b303dd]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Q...

Changed in codership-mysql:
milestone: 5.6.15-25.2 → 5.6.16-25.3
Changed in codership-mysql:
milestone: 5.6.16-25.5 → 5.6.17-25.6
Shahriyar Rzayev (rzayev-sehriyar) wrote :

Percona now uses JIRA for bug reports, so this bug report has been migrated to: https://jira.percona.com/browse/PXC-1462
