Diagnostics_area::set_error_status(THD*, uint, const char*, const char*): Assertion `! is_set() || can_overwrite_status' failed | CTAS-based | lp:1162421 regression

Bug #1208937 reported by Raghavendra D Prabhu on 2013-08-06
This bug affects 1 person.

Affects:
  MySQL patches by Codership — Undecided, Unassigned
  Percona XtraDB Cluster (moved to https://jira.percona.com/projects/PXC) — status tracked in 5.6:
    5.5 — Incomplete, High, Unassigned
    5.6 — Incomplete, High, Unassigned

Bug Description

Cmdline:

perl runall-new.pl --basedir=/chip --galera=ms --grammar=/pxc/randgen/conf/pxc/percona_qa.yy --gendata=/pxc/randgen/conf/pxc/percona_qa.zz --threads=10 --queries=1000000 --mysqld=--wsrep-provider=/pxc/lib/libgalera_smm.so

Log:

# 2013-08-06T23:07:34 [56304] Query: REPLACE INTO `table500_innodb_compressed` ( `col_text_latin1_unique` ) VALUES ( '2006' ) failed: 2013 Lost connection to MySQL server during query
sh: line 1: 55723 Aborted (core dumped) "/chip/bin/mysqld" "--no-defaults" "--basedir=/chip" "--datadir=/chip/mysql-test/var/node0/data" "--lc-messages-dir=/chip/share/mysql" "--character-sets-dir=/chip/share/mysql/charsets" "--tmpdir=/chip/mysql-test/var/node0/tmp" "--core-file" "--max-allowed-packet=128Mb" "--port=19300" "--socket=/chip/mysql-test/var/node0/mysql.sock" "--pid-file=/chip/mysql-test/var/node0/mysql.pid" "--general-log" "--general-log-file=/chip/mysql-test/var/node0/mysql.log" --wsrep_sst_method=rsync --innodb_autoinc_lock_mode=2 --default-storage-engine=InnoDB --innodb_locks_unsafe_for_binlog=1 --binlog-format=row --wsrep_cluster_address=gcomm://?gmcast.listen_addr=tcp://127.0.0.1:4800 --wsrep-provider=/pxc/lib/libgalera_smm.so >> "/chip/mysql-test/var/node0/mysql.err" 2>&1
# 2013-08-06T23:07:34 [56304] Server crash reported at dsn dbi:mysql:host=127.0.0.1:port=19300:user=root:database=test
# 2013-08-06T23:07:34 [56306] Query: DROP VIEW IF EXISTS s failed: 2013 Lost connection to MySQL server during query
# 2013-08-06T23:07:34 [56306] Server crash reported at dsn dbi:mysql:host=127.0.0.1:port=19300:user=root:database=test
# 2013-08-06T23:07:34 [56306] Query: CREATE TABLE s SELECT * FROM `table100_innodb_compressed_int` failed: 2006 MySQL server has gone away
# 2013-08-06T23:07:34 [56306] Server crash reported at dsn dbi:mysql:host=127.0.0.1:port=19300:user=root:database=test
# 2013-08-06T23:07:34 [56298] Query: INSERT INTO n SELECT * FROM `table500_innodb_dynamic` failed: 2006 MySQL server has gone away
# 2013-08-06T23:07:34 [56298] Server crash reported at dsn dbi:mysql:host=127.0.0.1:port=19300:user=root:database=test
# 2013-08-06T23:07:34 [55356] Killing remaining worker process with pid 56302...
# 2013-08-06T23:07:34 [55356] Killing remaining worker process with pid 56304...
# 2013-08-06T23:07:34 [55356] Killing remaining worker process with pid 56287...
# 2013-08-06T23:07:34 [55356] Killing remaining worker process with pid 56279...
# 2013-08-06T23:07:34 [55356] Killing remaining worker process with pid 56298...
# 2013-08-06T23:07:34 [55356] Killing remaining worker process with pid 56308...
# 2013-08-06T23:07:34 [55356] Killing remaining worker process with pid 56296...
# 2013-08-06T23:07:34 [55356] Killing remaining worker process with pid 56294...
# 2013-08-06T23:07:34 [55356] Killing remaining worker process with pid 56306...
# 2013-08-06T23:07:35 [55356] Killing periodic reporting process with pid 56278...
# 2013-08-06T23:07:35 [55356] Kill GenTest::ErrorFilter(56277)
# 2013-08-06T23:07:35 [55356] Server crash reported, initiating post-crash analysis...
# 2013-08-06T23:07:35 [55356] The last 100 lines from /chip/mysql-test/var/node0/data//../mysql.err :
} joined {
} left {
} partitioned {
})
130806 23:03:53 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 2
130806 23:03:53 [Note] WSREP: STATE_EXCHANGE: sent state UUID: 5d54d93b-febe-11e2-a67a-26694b4e38f9
130806 23:03:53 [Note] WSREP: STATE EXCHANGE: sent state msg: 5d54d93b-febe-11e2-a67a-26694b4e38f9
130806 23:03:53 [Note] WSREP: STATE EXCHANGE: got state msg: 5d54d93b-febe-11e2-a67a-26694b4e38f9 from 0 (Archie)
130806 23:03:54 [Note] WSREP: STATE EXCHANGE: got state msg: 5d54d93b-febe-11e2-a67a-26694b4e38f9 from 1 (Archie)
130806 23:03:54 [Note] WSREP: Quorum results:
        version = 2,
        component = PRIMARY,
        conf_id = 1,
        members = 1/2 (joined/total),
        act_id = 0,
        last_appl. = 0,
        protocols = 0/4/2 (gcs/repl/appl),
        group UUID = 5c2397fd-febe-11e2-9717-3a76c00c2503
130806 23:03:54 [Note] WSREP: Flow-control interval: [23, 23]
130806 23:03:54 [Note] WSREP: New cluster view: global state: 5c2397fd-febe-11e2-9717-3a76c00c2503:0, view# 2: Primary, number of nodes: 2, my index: 0, protocol version 2
130806 23:03:54 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
130806 23:03:54 [Note] WSREP: Assign initial position for certification: 0, protocol version: 2
130806 23:03:56 [Note] WSREP: Node 1 (Archie) requested state transfer from '*any*'. Selected 0 (Archie)(SYNCED) as donor.
130806 23:03:56 [Note] WSREP: Shifting SYNCED -> DONOR/DESYNCED (TO: 0)
130806 23:03:56 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
130806 23:03:56 [Note] WSREP: Running: 'wsrep_sst_rsync --role 'donor' --address '192.168.0.113:4444/rsync_sst' --auth '(null)' --socket '/chip/mysql-test/var/node0/mysql.sock' --datadir '/chip/mysql-test/var/node0/data/' --defaults-file '' --gtid '5c2397fd-febe-11e2-9717-3a76c00c2503:0''
130806 23:03:56 [Note] WSREP: sst_donor_thread signaled with 0
130806 23:03:56 [Note] WSREP: Flushing tables for SST...
130806 23:03:56 [Note] WSREP: Provider paused at 5c2397fd-febe-11e2-9717-3a76c00c2503:0
130806 23:03:56 [Note] WSREP: Tables flushed.
130806 23:03:57 [Note] WSREP: Provider resumed.
130806 23:03:57 [Note] WSREP: 0 (Archie): State transfer to 1 (Archie) complete.
130806 23:03:57 [Note] WSREP: Shifting DONOR/DESYNCED -> JOINED (TO: 0)
130806 23:03:57 [Note] WSREP: Member 0 (Archie) synced with group.
130806 23:03:57 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
130806 23:03:57 [Note] WSREP: Synchronized with group, ready for connections
130806 23:03:57 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
130806 23:04:00 [Note] WSREP: 1 (Archie): State transfer from 0 (Archie) complete.
130806 23:04:00 [Note] WSREP: Member 1 (Archie) synced with group.
mysqld: /media/Tintin/Work/code/codership/5.5.30/sql/sql_error.cc:422: void Diagnostics_area::set_error_status(THD*, uint, const char*, const char*): Assertion `! is_set() || can_overwrite_status' failed.
17:37:34 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=13
max_threads=151
thread_count=13
connection_count=13
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338557 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x35c8670
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7fc6f8588e68 thread_stack 0x40000
/chip/bin/mysqld(my_print_stacktrace+0x2c)[0x80077a]
/chip/bin/mysqld(handle_fatal_signal+0x2f9)[0x6c253d]
/usr/lib/libpthread.so.0(+0xf0e0)[0x7fc70b5290e0]
/usr/lib/libc.so.6(gsignal+0x39)[0x7fc709cd81c9]
/usr/lib/libc.so.6(abort+0x148)[0x7fc709cd95c8]
/usr/lib/libc.so.6(+0x2e356)[0x7fc709cd1356]
/usr/lib/libc.so.6(+0x2e402)[0x7fc709cd1402]
/chip/bin/mysqld(_ZN16Diagnostics_area16set_error_statusEP3THDjPKcS3_+0x65)[0x57ea8f]
/chip/bin/mysqld(_ZN3THD15raise_conditionEjPKcN11MYSQL_ERROR18enum_warning_levelES1_+0x209)[0x56f6c9]
/chip/bin/mysqld(my_message_sql+0x11b)[0x51077a]
/chip/bin/mysqld(my_error+0x178)[0x7f6c38]
/chip/bin/mysqld(_Z15ha_commit_transP3THDb+0x421)[0x6c5ed5]
/chip/bin/mysqld(_Z17trans_commit_stmtP3THD+0x76)[0x65e548]
/chip/bin/mysqld(_ZN13select_create8send_eofEv+0x4e)[0x583d16]
/chip/bin/mysqld[0x5d310c]
/chip/bin/mysqld(_ZN4JOIN4execEv+0x23bb)[0x5e67d1]
/chip/bin/mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x670)[0x5e0a3d]
/chip/bin/mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x134)[0x5e0c34]
/chip/bin/mysqld(_Z21mysql_execute_commandP3THD+0x14a0)[0x59def8]
/chip/bin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x142)[0x5a52d8]
/chip/bin/mysqld[0x5a5deb]
/chip/bin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x994)[0x5a6cf2]
/chip/bin/mysqld(_Z10do_commandP3THD+0x679)[0x5a8bc3]
/chip/bin/mysqld(_Z24do_handle_one_connectionP3THD+0x13a)[0x64fbcd]
/chip/bin/mysqld(handle_one_connection+0x49)[0x64fdbd]
/usr/lib/libpthread.so.0(+0x7dd2)[0x7fc70b521dd2]
/usr/lib/libc.so.6(clone+0x6d)[0x7fc709d88cdd]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7fc6a0004c10): CREATE TABLE u SELECT * FROM `table100_innodb_compressed_int`
Connection ID (thread ID): 13
Status: NOT_KILLED

The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
Writing a core file
# 2013-08-06T23:07:35 [55356] datadir is /chip/mysql-test/var/node0/data/
# 2013-08-06T23:07:35 [55356] binary is /chip/../chip/bin/mysqld
# 2013-08-06T23:07:35 [55356] bindir is /chip/../chip/bin
# 2013-08-06T23:07:35 [55356] WARNING: Core file not found!
"/media/Tintin/Qemu/data/pxc/randgen.galera" is not a core dump: Is a directory
backtrace.gdb:19: Error in sourced command file:
No stack.
# 2013-08-06T23:07:37 [55356] Whether to confirm potentially dangerous operations is on.
"/media/Tintin/Qemu/data/pxc/randgen.galera" is not a core dump: Is a directory
# 2013-08-06T23:07:38 [55356] Whether to confirm potentially dangerous operations is on.
# 2013-08-06T23:07:38 [55356] Test completed with failure status STATUS_SERVER_CRASHED (101)
# 2013-08-06T23:07:38 [55356] GenTest exited with exit status STATUS_SERVER_CRASHED (101)
# 2013-08-06T23:07:38 [55356] Stopping server on port 19300
# 2013-08-06T23:07:38 [55356] Stale connection. Reconnecting
# 2013-08-06T23:07:38 [55356] Reconnect failed due to 2003:Can't connect to MySQL server on '127.0.0.1' (111)
# 2013-08-06T23:07:38 [55356] Server would not shut down properly. Terminate it
# 2013-08-06T23:07:38 [55356] Terminated process 55723
# 2013-08-06T23:07:38 [55356] Stopping server on port 19301
# 2013-08-06T23:07:38 [55356] Stale connection. Reconnecting
# 2013-08-06T23:07:38 [55356] [55356] runall-new.pl will exit with exit status STATUS_SERVER_CRASHED (101)
perl runall-new.pl --basedir=/chip --galera=ms --threads=10 36.48s user 1.07s system 15% cpu 4:01.29 total

percona_qa.yy https://gist.github.com/e2ce70afaf12b5d09758

percona_qa.zz https://gist.github.com/f91a4e625f8526566c1d

Full log

Repeatable with debug builds of both PXC and codership-mysql.

Also reported earlier at https://mariadb.atlassian.net/browse/MDEV-542 (that was fixed, but the issue is occurring again).

Alex Yurchenko (ayurchen) wrote :

Does not seem to be a regression. Just another bug.

@Alex,

I presumed it is a regression because the fix for the earlier bug also covered the CTAS-based Diagnostics_area crash. I will note this in the bug title as well.

summary: - lp:1052002 regression
+ Diagnostics_area::set_error_status(THD*, uint, const char*, const
+ char*): Assertion `! is_set() || can_overwrite_status' failed | CTAS-
+ based | Possible lp:1052002 regression
tags: added: ctas

I can still reproduce this:

(gdb) bt
#0 0x00007f131288d0b1 in pthread_kill () from /usr/lib/libpthread.so.0
#1 0x0000000000815ba8 in my_write_core (sig=6) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/mysys/stacktrace.c:433
#2 0x00000000006d5946 in handle_fatal_signal (sig=6) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/signal_handler.cc:250
#3 <signal handler called>
#4 0x00007f131103b3d9 in raise () from /usr/lib/libc.so.6
#5 0x00007f131103c7d8 in abort () from /usr/lib/libc.so.6
#6 0x00007f1311034506 in __assert_fail_base () from /usr/lib/libc.so.6
#7 0x00007f13110345b2 in __assert_fail () from /usr/lib/libc.so.6
#8 0x0000000000582e1f in Diagnostics_area::set_error_status (this=0x2eec9f0, thd=thd@entry=0x2ee9970, sql_errno_arg=sql_errno_arg@entry=1213,
    message_arg=message_arg@entry=0x7f12e3166610 "Deadlock found when trying to get lock; try restarting transaction", sqlstate=sqlstate@entry=0xaddd27 "40001")
    at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_error.cc:422
#9 0x0000000000572319 in THD::raise_condition (this=this@entry=0x2ee9970, sql_errno=sql_errno@entry=1213, sqlstate=0xaddd27 "40001", sqlstate@entry=0x0, level=<optimized out>,
    level@entry=MYSQL_ERROR::WARN_LEVEL_ERROR, msg=msg@entry=0x7f12e3166610 "Deadlock found when trying to get lock; try restarting transaction")
    at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_class.cc:1434
#10 0x0000000000516658 in my_message_sql (error=1213, str=<optimized out>, MyFlags=0) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/mysqld.cc:2965
#11 0x000000000080c1a8 in my_error (nr=1213, MyFlags=0) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/mysys/my_error.c:96
#12 0x00000000006d9377 in ha_commit_trans (thd=thd@entry=0x2ee9970, all=all@entry=false) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/handler.cc:1393
#13 0x000000000066a450 in trans_commit_stmt (thd=0x2ee9970) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/transaction.cc:380
#14 0x0000000000588064 in select_create::send_eof (this=0x7f1290005b08) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_insert.cc:4128
#15 0x00000000005d822e in do_select (join=join@entry=0x7f1290030ac0, fields=fields@entry=0x2eebf40, table=table@entry=0x0, procedure=0x0)
    at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_select.cc:11663
#16 0x00000000005ec091 in JOIN::exec (this=this@entry=0x7f1290030ac0) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_select.cc:2428
#17 0x00000000005e6286 in mysql_select (thd=thd@entry=0x2ee9970, rref_pointer_array=rref_pointer_array@entry=0x2eec018, tables=0x7f1290005410, wild_num=1, fields=..., conds=<optimized out>, og_num=0, order=0x0,
    group=0x0, having=0x0, proc_param=0x0, select_options=select_options@entry=2416184064, result=result@entry=0x7f1290005b08, unit=unit@entry=0x2eeb810, select_lex=select_lex@entry=0x2eebe30)
    at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Perc...


Actually, it is a regression of https://bugs.launchpad.net/codership-mysql/+bug/1162421, which was fixed precisely for this case.

==========================================

(gdb) frame 8
#8 0x0000000000582e1f in Diagnostics_area::set_error_status (this=0x2eec9f0, thd=thd@entry=0x2ee9970, sql_errno_arg=sql_errno_arg@entry=1213,
    message_arg=message_arg@entry=0x7f12e3166610 "Deadlock found when trying to get lock; try restarting transaction", sqlstate=sqlstate@entry=0xaddd27 "40001")
    at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_error.cc:422
422 DBUG_ASSERT(! is_set() || can_overwrite_status);
(gdb) print thd->wsrep_conflict_state
$1 = CERT_FAILURE

============================================

Attaching thd.txt (containing thd dump).

summary: Diagnostics_area::set_error_status(THD*, uint, const char*, const
char*): Assertion `! is_set() || can_overwrite_status' failed | CTAS-
- based | Possible lp:1052002 regression
+ based | lp:1162421 regression

Correction:

 (gdb) print thd->wsrep_conflict_state
$1 = CERT_FAILURE

is actually

============================================

Breakpoint 1, select_create::send_eof (this=0x7fdaa0042ad8) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_insert.cc:4115
4115 {
(gdb) list
4110 DBUG_VOID_RETURN;
4111 }
4112
4113
4114 bool select_create::send_eof()
4115 {
4116 bool tmp=select_insert::send_eof();
4117 if (tmp)
4118 abort_result_set();
4119 else
(gdb) step
4116 bool tmp=select_insert::send_eof();
(gdb) next
[New Thread 0x7fdac8349700 (LWP 14010)]
4117 if (tmp)
(gdb)
4126 if (!table->s->tmp_table)
(gdb)
4128 trans_commit_stmt(thd);
(gdb) print thd->wsrep_conflict_state
$1 = NO_CONFLICT
(gdb)

=============================================

which is why it is not getting trapped by the earlier check in send_eof.

I guess NO_CONFLICT should also be added to that conditional?


It changed from NO_CONFLICT to CERT_FAILURE later in the stack:

Breakpoint 2, wsrep_run_wsrep_commit (thd=thd@entry=0x20572f0, hton=hton@entry=0x188bf30, all=all@entry=true) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/wsrep_hton.cc:271
271 {
$37 = NO_CONFLICT
#0 wsrep_run_wsrep_commit (thd=thd@entry=0x20572f0, hton=hton@entry=0x188bf30, all=all@entry=true) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/wsrep_hton.cc:271
#1 0x0000000000683258 in wsrep_prepare (hton=0x188bf30, thd=0x20572f0, all=<optimized out>) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/wsrep_hton.cc:156
#2 0x00000000006d9322 in ha_commit_trans (thd=thd@entry=0x20572f0, all=all@entry=true) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/handler.cc:1384
#3 0x000000000066a1cc in trans_commit_implicit (thd=0x20572f0) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/transaction.cc:248
#4 0x000000000058806d in select_create::send_eof (this=0x7f0828005ac8) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_insert.cc:4129
#5 0x00000000005d822e in do_select (join=join@entry=0x7f082801aa10, fields=fields@entry=0x20598c0, table=table@entry=0x0, procedure=0x0) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_select.cc:11663
#6 0x00000000005ec091 in JOIN::exec (this=this@entry=0x7f082801aa10) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_select.cc:2428
#7 0x00000000005e6286 in mysql_select (thd=thd@entry=0x20572f0, rref_pointer_array=rref_pointer_array@entry=0x2059998, tables=0x7f08280053d0, wild_num=1, fields=..., conds=<optimized out>, og_num=0, order=0x0, group=0x0, having=0x0, proc_param=0x0, select_options=select_options@entry=2416708096, result=result@entry=0x7f0828005ac8, unit=unit@entry=0x2059190, select_lex=select_lex@entry=0x20597b0) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_select.cc:2647
#8 0x00000000005e64c4 in handle_select (thd=thd@entry=0x20572f0, lex=lex@entry=0x20590d0, result=result@entry=0x7f0828005ac8, setup_tables_done_option=setup_tables_done_option@entry=0) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_select.cc:315
#9 0x00000000005a13b2 in mysql_execute_command (thd=thd@entry=0x20572f0) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_parse.cc:3078
#10 0x00000000005a8c73 in mysql_parse (thd=thd@entry=0x20572f0, rawbuf=rawbuf@entry=0x7f0828004c10 "CREATE TABLE i AS SELECT * FROM `table1000_innodb_int_autoinc`", length=length@entry=63, parser_state=parser_state@entry=0x7f088c1415e0) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_parse.cc:6527
#11 0x00000000005a98e3 in wsrep_mysql_parse (thd=thd@entry=0x20572f0, rawbuf=0x7f0828004c10 "CREATE TABLE i AS SELECT * FROM `table1000_innodb_int_autoinc`", length=63, parser_state=parser_state@entry=0x7f088c1415e0) at /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_parse.cc:6332
#12 0x00000000005aaee6 in dispatch_command...


Full traces.

There seems to be a merge regression in the PXC code for this; testing after resolving it.

The issue is still replicable. This is because select_insert::send_eof calls ::my_ok(thd, row_count, id, buff), which sets the diagnostics area, so the ! is_set() part of the assertion fails.


Able to replicate this issue with:

#0 0x00007fa17e45a0b1 in pthread_kill () from /usr/lib/libpthread.so.0
#1 0x0000000000804748 in my_write_core (sig=6) at /media/Tintin/Work/code/codership/5.5.30/mysys/stacktrace.c:433
#2 0x00000000006c69f8 in handle_fatal_signal (sig=6) at /media/Tintin/Work/code/codership/5.5.30/sql/signal_handler.cc:247
#3 <signal handler called>
#4 0x00007fa17cc083d9 in raise () from /usr/lib/libc.so.6
#5 0x00007fa17cc097d8 in abort () from /usr/lib/libc.so.6
#6 0x00007fa17cc01506 in __assert_fail_base () from /usr/lib/libc.so.6
#7 0x00007fa17cc015b2 in __assert_fail () from /usr/lib/libc.so.6
#8 0x000000000057fc8f in Diagnostics_area::set_error_status (this=0x271a650, thd=thd@entry=0x27178d0, sql_errno_arg=sql_errno_arg@entry=1213,
    message_arg=message_arg@entry=0x7fa16c60f6d0 "Deadlock found when trying to get lock; try restarting transaction", sqlstate=sqlstate@entry=0xa7ae27 "40001")
    at /media/Tintin/Work/code/codership/5.5.30/sql/sql_error.cc:422
#9 0x000000000056fc39 in THD::raise_condition (this=this@entry=0x27178d0, sql_errno=sql_errno@entry=1213, sqlstate=0xa7ae27 "40001", sqlstate@entry=0x0, level=<optimized out>,
    level@entry=MYSQL_ERROR::WARN_LEVEL_ERROR, msg=msg@entry=0x7fa16c60f6d0 "Deadlock found when trying to get lock; try restarting transaction") at /media/Tintin/Work/code/codership/5.5.30/sql/sql_class.cc:1336
#10 0x0000000000510e46 in my_message_sql (error=1213, str=<optimized out>, MyFlags=0) at /media/Tintin/Work/code/codership/5.5.30/sql/mysqld.cc:2937
#11 0x00000000007fb3a8 in my_error (nr=1213, MyFlags=0) at /media/Tintin/Work/code/codership/5.5.30/mysys/my_error.c:96
#12 0x00000000006ca1c3 in ha_commit_trans (thd=thd@entry=0x27178d0, all=all@entry=false) at /media/Tintin/Work/code/codership/5.5.30/sql/handler.cc:1327
#13 0x0000000000661c60 in trans_commit_stmt (thd=0x27178d0) at /media/Tintin/Work/code/codership/5.5.30/sql/transaction.cc:380
#14 0x0000000000584f46 in select_create::send_eof (this=0x7fa130005ae8) at /media/Tintin/Work/code/codership/5.5.30/sql/sql_insert.cc:4068
#15 0x00000000005d5c76 in do_select (join=join@entry=0x7fa130053f60, fields=fields@entry=0x2719be0, table=table@entry=0x0, procedure=0x0) at /media/Tintin/Work/code/codership/5.5.30/sql/sql_select.cc:11604
#16 0x00000000005e9481 in JOIN::exec (this=this@entry=0x7fa130053f60) at /media/Tintin/Work/code/codership/5.5.30/sql/sql_select.cc:2385
#17 0x00000000005e36ed in mysql_select (thd=thd@entry=0x27178d0, rref_pointer_array=rref_pointer_array@entry=0x2719cb8, tables=0x7fa1300053f0, wild_num=1, fields=..., conds=<optimized out>, og_num=0, order=0x0,
    group=0x0, having=0x0, proc_param=0x0, select_options=select_options@entry=2416184064, result=result@entry=0x7fa130005ae8, unit=unit@entry=0x27194b0, select_lex=select_lex@entry=0x2719ad0)
    at /media/Tintin/Work/code/codership/5.5.30/sql/sql_select.cc:2604
#18 0x00000000005e38e4 in handle_select (thd=thd@entry=0x27178d0, lex=lex@entry=0x2719400, result=result@entry=0x7fa130005ae8, setup_tables_done_option=setup_tables_done_option@entry=0)
    at /media/Tintin/Work/code/codership/5.5.30/sql/sql_select.cc:297
#19 0x000000000059f451 in mysql_execute_comm...

#13 is with codership 5.5.30 binaries.

Also, noticed this in another run:

131029 12:00:11 [Warning] WSREP: TO isolation failed for: 3, sql: DROP TABLE IF EXISTS m. Check wsrep connection state and retry the query.
mysqld: /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/sql_error.cc:422: void Diagnostics_area::set_error_status(THD*, uint, const char*, const char*): Assertion `! is_set() || can_overwrite_status' failed.
06:30:11 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://bugs.launchpad.net/percona-xtradb-cluster

key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=20
max_threads=153
thread_count=19
connection_count=19
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 343067 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x2044e80
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
mysqld: /media/Tintin/Work/code/percona-xtradb-cluster/trunk/Percona-Server/sql/wsrep_mysqld.cc:1251: int wsrep_to_isolation_begin(THD*, char*, char*, const TABLE_LIST*): Assertion `thd->wsrep_trx_seqno == (-1)' failed.


Got another crash with similar backtrace:

# 2013-12-11T16:02:19 [10181] #3 0x0000003856a32925 in raise () from /lib64/libc.so.6
# 2013-12-11T16:02:19 [10181] #4 0x0000003856a34105 in abort () from /lib64/libc.so.6
# 2013-12-11T16:02:19 [10181] #5 0x0000003856a2ba4e in __assert_fail_base () from /lib64/libc.so.6
# 2013-12-11T16:02:19 [10181] #6 0x0000003856a2bb10 in __assert_fail () from /lib64/libc.so.6
# 2013-12-11T16:02:19 [10181] #7 0x0000000000714241 in Diagnostics_area::set_error_status (this=0x2b1ce18, sql_errno=1213, message=0x7f8b40085fa0 "Deadlock found when trying to get lock; try restarting transaction", sqlstate=0xd48ed8 "40001", error_condition=0x0) at /mnt/workspace/build-xtradb-cluster-binaries-56/BUILD_TYPE/debug/label_exp/centos6-64/Percona-XtraDB-Cluster-5.6.15/sql/sql_error.cc:521
# 2013-12-11T16:02:19 [10181] #8 0x00000000006fec61 in THD::raise_condition (this=0x2b19510, sql_errno=1213, sqlstate=0xd48ed8 "40001", level=<value optimized out>, msg=0x7f8b40085fa0 "Deadlock found when trying to get lock; try restarting transaction") at /mnt/workspace/build-xtradb-cluster-binaries-56/BUILD_TYPE/debug/label_exp/centos6-64/Percona-XtraDB-Cluster-5.6.15/sql/sql_class.cc:1574
# 2013-12-11T16:02:19 [10181] #9 0x00000000005a7246 in my_message_sql (error=1213, str=<value optimized out>, MyFlags=0) at /mnt/workspace/build-xtradb-cluster-binaries-56/BUILD_TYPE/debug/label_exp/centos6-64/Percona-XtraDB-Cluster-5.6.15/sql/mysqld.cc:3491
# 2013-12-11T16:02:19 [10181] #10 0x00000000009ad59f in my_error (nr=1213, MyFlags=0) at /mnt/workspace/build-xtradb-cluster-binaries-56/BUILD_TYPE/debug/label_exp/centos6-64/Percona-XtraDB-Cluster-5.6.15/mysys/my_error.c:185
# 2013-12-11T16:02:19 [10181] #11 0x00000000005d3411 in ha_prepare_low (thd=0x2b19510, all=<value optimized out>) at /mnt/workspace/build-xtradb-cluster-binaries-56/BUILD_TYPE/debug/label_exp/centos6-64/Percona-XtraDB-Cluster-5.6.15/sql/handler.cc:2188
# 2013-12-11T16:02:19 [10181] #12 0x00000000005d6659 in ha_commit_trans (thd=0x2b19510, all=false, ignore_global_read_lock=false) at /mnt/workspace/build-xtradb-cluster-binaries-56/BUILD_TYPE/debug/label_exp/centos6-64/Percona-XtraDB-Cluster-5.6.15/sql/handler.cc:1517
# 2013-12-11T16:02:19 [10181] #13 0x00000000007f62b1 in trans_commit_stmt (thd=0x2b19510) at /mnt/workspace/build-xtradb-cluster-binaries-56/BUILD_TYPE/debug/label_exp/centos6-64/Percona-XtraDB-Cluster-5.6.15/sql/transaction.cc:473
# 2013-12-11T16:02:19 [10181] #14 0x0000000000729340 in select_create::send_eof (this=0x7f8ad40ed3e8) at /mnt/workspace/build-xtradb-cluster-binaries-56/BUILD_TYPE/debug/label_exp/centos6-64/Percona-XtraDB-Cluster-5.6.15/sql/sql_insert.cc:4414
# 2013-12-11T16:02:19 [10181] #15 0x000000000071d175 in do_select (this=0x7f8ad40ed568) at /mnt/workspace/build-xtradb-cluster-binaries-56/BUILD_TYPE/debug/label_exp/centos6-64/Percona-XtraDB-Cluster-5.6.15/sql/sql_executor.cc:980
# 2013-12-11T16:02:19 [10181] #16 JOIN::exec (this=0x7f8ad40ed568) at /mnt/workspace/build-xtradb-cluster-binaries-56/BUILD_TYPE/debug/label_exp/centos6-64/Percona-XtraDB-Cluster-5.6.15/sql/sql_executor.cc:191
# 2013-12-11T16:02:19 [10181] #17 0x000000000076da20 in mysql_...


Do you have a reproducible test case? Also, please re-try it with the latest 5.6.

Percona now uses JIRA for bug reports, so this bug report has been migrated to: https://jira.percona.com/browse/PXC-973
