Sporadic partial-hangup on various queries + related (same-testcase) crashes/asserts

Bug #1371827 reported by Roel Van de Paar
This bug affects 2 people
(Percona Server moved to https://jira.percona.com/projects/PS)

Affects        | Status       | Importance | Assigned to    | Milestone
Percona Server | Fix Released | Medium     | Alexey Kopytov |
5.1            | Invalid      | Undecided  | Unassigned     |
5.5            | Invalid      | Undecided  | Unassigned     |
5.6            | Fix Released | Medium     | Alexey Kopytov |

Bug Description

Symptoms:
- Looks to be sporadic (a single-thread SQL replay produces the issue in 3 out of 10 replays)
- The issue seems connected with mysql.slave_relay_log_info or SHOW CREATE TABLE mysql.slave_relay_log_info. All 3 (out of 10) SQL replays showed the "partial hang" on the same line
- Issue is single-sql-thread reproducible
- No special mysqld settings used (i.e. likely no connection with threadpool)
- The CLI client remains able to connect. SHOW PROCESSLIST gives the following (processlist formatting cleaned up a bit):

mysql> show processlist;
--------------------------------------------------------------------------
Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined
--------------------------------------------------------------------------
| 6 | root | localhost | information_schema | Query | 711 | Opening tables | SHOW CREATE TABLE mysql.slave_relay_log_info | 0 | 0 |
| 12 | root | localhost | NULL | Query | 0 | System lock | show processlist | 0 | 0 |
--------------------------------------------------------------------------
2 rows in set (0.00 sec)

mysql> show processlist;
--------------------------------------------------------------------------
Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined
--------------------------------------------------------------------------
| 6 | root | localhost | information_schema | Query | 713 | Opening tables | SHOW CREATE TABLE mysql.slave_relay_log_info | 0 | 0 |
| 12 | root | localhost | NULL | Query | 0 | System lock | show processlist | 0 | 0 |
--------------------------------------------------------------------------
2 rows in set (0.00 sec)

- Whilst GDB is active, the CLI does not work (as expected, but we recently had another odd bug where one could still connect via the CLI even when GDB had broken in and mysqld was not continuing to run code).
- This issue is unrelated to the previously discussed "very high "Time" count in processlist" caused by SET TIMESTAMP (see http://bugs.mysql.com/bug.php?id=73999).
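
For quick spot checks from a second client, a filter over INFORMATION_SCHEMA.PROCESSLIST such as the minimal sketch below can be used (the 60-second threshold and the socket path are assumptions; adjust to the instance under test):

# Rough sketch: list any query that has been running for more than 60 seconds
$ mysql --socket=/tmp/mysql.sock -uroot -e "SELECT ID, TIME, STATE, INFO FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND='Query' AND ID <> CONNECTION_ID() AND TIME > 60\G"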

summary: - Serious sporadic single-thread non-threadpool single-gdb-tread partial-
- hangup issue in mysqld
+ Serious sporadic single-thread non-threadpool partial-hangup issue in
+ mysqld
Revision history for this message
Ramesh Sivaraman (rameshvs02) wrote :

While reducing the testcase, I found that the server is spending too much time in the "closing tables"/"Opening tables" states.

mysql> SHOW FULL PROCESSLIST;
+----+------+-----------+------+---------+------+----------------+-----------------------+-----------+---------------+
| Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined |
+----+------+-----------+------+---------+------+----------------+-----------------------+-----------+---------------+
| 5 | root | localhost | test | Query | 532 | closing tables | DROP VIEW IF EXISTS d | 0 | 0 |
| 8 | root | localhost | test | Query | 0 | init | SHOW FULL PROCESSLIST | 0 | 0 |
+----+------+-----------+------+---------+------+----------------+-----------------------+-----------+---------------+
2 rows in set (0.00 sec)

mysql>

mysql> SHOW FULL PROCESSLIST;
+----+------+-----------+------+---------+------+----------------+-------------------------------------------------------------------------+-----------+---------------+
| Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined |
+----+------+-----------+------+---------+------+----------------+-------------------------------------------------------------------------+-----------+---------------+
| 5 | root | localhost | test | Query | 1119 | Opening tables | REPLACE INTO `table500_tokudb_default_int` ( `c7` ) VALUES
( 6792960 ) | 0 | 0 |
| 8 | root | localhost | test | Query | 0 | init | SHOW FULL PROCESSLIST | 0 | 0 |
+----+------+-----------+------+---------+------+----------------+-------------------------------------------------------------------------+-----------+---------------+
2 rows in set (0.00 sec)

mysql>

Revision history for this message
Roel Van de Paar (roel11) wrote :

There is a testcase in the mail 'as per IRC', and Ramesh is also creating a similar testcase in a more structured format.

Revision history for this message
Ramesh Sivaraman (rameshvs02) wrote :

Testcase:

While reducing the server crash with the reducer script, I found that the reducer thread hangs in the "Opening tables" state in all instances. This stops the testcase reducer from making progress.

This issue happens with a "REPLACE INTO ..." SQL statement.

Attached are the reducer script and SQL files for reproducing the issue.

How to reproduce the issue:
        1) Download the attached reducer script and SQL file
        2) Modify the following parameters in the reducer script as per your configuration (see the sketch after this list):
              MYBASE="/ssd/qa56opt/Percona-Server-5.6.21-rel69.0-670.Linux.x86_64"
              INPUTFILE="105b.sql"
              TOKUDBSQL="/ssd/qa56opt/randgen/conf/percona_qa/5.6/TokuDB.sql"
        3) Run the reducer script
        4) Use the status.sh script (lp:randgen/util/reducer/status.sh) to analyze the reducer status.
                You will get the client connection info from "=== Client version strings for easy access" to check the processlist
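
As a rough sketch of step 2 (assuming the attached script is saved as reducer.sh; the values below are the examples from step 2, not fixed paths):

# Point the reducer at the local basedir, the input SQL file and the TokuDB init file,
# then start it (script name and paths are assumptions; use the attached files)
sed -i 's|^MYBASE=.*|MYBASE="/ssd/qa56opt/Percona-Server-5.6.21-rel69.0-670.Linux.x86_64"|' reducer.sh
sed -i 's|^INPUTFILE=.*|INPUTFILE="105b.sql"|' reducer.sh
sed -i 's|^TOKUDBSQL=.*|TOKUDBSQL="/ssd/qa56opt/randgen/conf/percona_qa/5.6/TokuDB.sql"|' reducer.sh
./reducer.sh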

FYI
-bash-4.1$ for i in {1..10}; do /sdd/test/Percona-Server-5.6.21-rel69.0-670.Linux.x86_64/bin/mysql --socket=/dev/shm/1412828341/subreducer/$i/socket.sock -uroot -e "SHOW FULL PROCESSLIST";done
+----+------+-----------+------+---------+------+----------------+-------------------------------------------------------------------------+-----------+---------------+
| Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined |
+----+------+-----------+------+---------+------+----------------+-------------------------------------------------------------------------+-----------+---------------+
| 5 | root | localhost | test | Query | 1462 | Opening tables | REPLACE INTO `table500_tokudb_default_int` ( `c7` ) VALUES
( 6792960 ) | 0 | 0 |
| 26 | root | localhost | NULL | Query | 0 | init | SHOW FULL PROCESSLIST | 0 | 0 |
+----+------+-----------+------+---------+------+----------------+-------------------------------------------------------------------------+-----------+---------------+
[.....]
+----+------+-----------+------+---------+------+----------------+-------------------------------------------------------------------------+-----------+---------------+
| Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined |
+----+------+-----------+------+---------+------+----------------+-------------------------------------------------------------------------+-----------+---------------+
| 5 | root | localhost | test | Query | 1459 | Opening tables | REPLACE INTO `table500_tokudb_default_int` ( `c7` ) VALUES
( 6792960 ) | 0 | 0 |
| 22 | root | localhost | NULL | Query | 0 | init | SHOW FULL PROCESSLIST | 0 | 0 |
+----+------+-----------+------+---------+------+----------------+-------------------------------------------------------------------------+-----------+---------------+
-bash-4.1$

Revision history for this message
Roel Van de Paar (roel11) wrote :

Laurynas, let us know if you need any help with the testcases / reproducing.

Revision history for this message
Roel Van de Paar (roel11) wrote :

Marking as qablock, as the reducer gets halted.

tags: added: qablock
Revision history for this message
Roel Van de Paar (roel11) wrote :

This bug has a testcase

Revision history for this message
Roel Van de Paar (roel11) wrote :

IMHO, this bug is critical.

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

Can you please upload a new core dump? (The one in comment #6 is missing some libs, thus doesn't show all the thread stacks in gdb).

Revision history for this message
Roel Van de Paar (roel11) wrote :

Ramesh, do you still have the testcase live for generating a new core? Also, check whether ldd provides all files (incl. lib64).

Laurynas - was it simply one of the ldd-copied files missing, or was there something else wrong with the core?
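
A rough sketch of gathering the shared libraries next to the core so that all thread stacks resolve in gdb (binary, core and output paths are assumptions):

# Copy every shared library the binary links against into ./libs,
# then load the core against those libs and dump all thread stacks
mkdir -p libs
ldd bin/mysqld | grep "=> /" | awk '{print $3}' | xargs -I{} cp -v {} libs/
gdb bin/mysqld core -ex "set solib-search-path ./libs" -ex "set pagination off" -ex "thread apply all bt" -ex quit > all_threads.txt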

Revision history for this message
Roel Van de Paar (roel11) wrote :

Setting back to confirmed; this bug has a good testcase in #9

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

The core was missing some libs, so some thread stacks did not load. And the deadlock is inside glibc, thus it's important.

Revision history for this message
Roel Van de Paar (roel11) wrote :

Inside glibc. Interesting. I saw a glibc crash flash by the other day. Will see if I can find some details.

Revision history for this message
Roel Van de Paar (roel11) wrote :

Apparently I saw two, and I saved the details of both. Laurynas, please check the attachments to see whether there is any correlation. They look like different issues. If any stands out, I can try to reproduce it.

Revision history for this message
Ramesh Sivaraman (rameshvs02) wrote :

Attached new core and ldd files.

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

Ramesh, please upload your mysqld binary too

Revision history for this message
Ramesh Sivaraman (rameshvs02) wrote :

PFA mysqld

summary: - Serious sporadic single-thread non-threadpool partial-hangup issue in
- mysqld
+ Sporadic partial-hangup
Revision history for this message
Roel Van de Paar (roel11) wrote : Re: Sporadic partial-hangup

Re-confirmed present in Percona-Server-5.6.21-rel70.0-693.Linux.x86_64-debug

[roel@localhost test]$ for i in {1..25}; do /sda/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64-debug/bin/mysql --socket=/dev/shm/1416639569/subreducer/$i/socket.sock -uroot -e "SHOW FULL PROCESSLIST\GSHOW GLOBAL VARIABLES LIKE 'pid_file'";done
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/dev/shm/1416639569/subreducer/1/socket.sock' (111)
[...]
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/dev/shm/1416639569/subreducer/15/socket.sock' (2)
*************************** 1. row ***************************
           Id: 1
         User: event_scheduler
         Host: localhost
           db: NULL
      Command: Daemon
         Time: 177
        State: Waiting on empty queue
         Info: NULL
    Rows_sent: 0
Rows_examined: 0
*************************** 2. row ***************************
           Id: 5
         User: root
         Host: localhost
           db: mysql
      Command: Query
         Time: 176
        State: System lock
         Info: CREATE EVENT querytimeout ON SCHEDULE EVERY 20 SECOND DO BEGIN
    SET @id:='';
    SET @id:=(SELECT id FROM INFORMATION_SCHEMA.PROCESSLIST WHERE ID<>CONNECTION_ID() AND STATE<>'killed' AND TIME>90 ORDER BY TIME DESC LIMIT 1);
    IF @id > 1 THEN KILL QUERY @id; END IF;
    END
    Rows_sent: 0
Rows_examined: 0
*************************** 3. row ***************************
           Id: 33
         User: root
         Host: localhost
           db: NULL
      Command: Query
         Time: 0
        State: init
         Info: SHOW FULL PROCESSLIST
    Rows_sent: 0
Rows_examined: 0
+---------------+-------------------------------------------+
| Variable_name | Value |
+---------------+-------------------------------------------+
| pid_file | /dev/shm/1416639569/subreducer/16/pid.pid |
+---------------+-------------------------------------------+
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/dev/shm/1416639569/subreducer/17/socket.sock' (2)
[...]

Revision history for this message
Roel Van de Paar (roel11) wrote :

for i in {1..25}; do /sda/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64-debug/bin/mysql --socket=/dev/shm/1416639569/subreducer/$i/socket.sock -uroot -e "SHOW FULL PROCESSLIST\GSHOW GLOBAL VARIABLES LIKE 'pid_file'";done

The above is an easier way to see when things hang. While all threads are still going (I use 25), it's easier to run this version (I only have one thread hanging now; the others are finished):

$ for i in {1..25}; do /sda/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64-debug/bin/mysql --socket=/dev/shm/1416639569/subreducer/$i/socket.sock -uroot -e "SHOW FULL PROCESSLIST\GSHOW GLOBAL VARIABLES LIKE 'pid_file'";done 2>&1 | egrep "Time: |pid_file"
         Time: 314
         Time: 313
         Time: 0
pid_file /dev/shm/1416639569/subreducer/16/pid.pid

Revision history for this message
Roel Van de Paar (roel11) wrote :

I am uploading a new version of Ramesh's testcase. This one starts with 25 threads instead of 10. It also has more of the faulting statements seen so far in the SQL file, and the SQL file is much longer. All this should make reproduction easier, though a full restart of reducer.sh may still be necessary if the issue is not seen in 25, 30 or maybe even 35 threads (it now increases by 5). I would not go past 35, as this may cause system instability, and you already need a good machine to run 35 to start with. The issue is confirmed reproducible on debug builds and remains present in the latest release.

Revision history for this message
Roel Van de Paar (roel11) wrote :

See mail for further details. Also note that the last testcase uses the names "105c.sql" and "reducer105c.sh" to separate them from the previous testcase. So, start reducer105c.sh, wait till the 25 threads are started (pids shown), then wait a few more minutes (it takes a while to start up), then use the second 1..25 example from #28 and/or status.sh to monitor. Once you see the issue crop up, hop in with the CLI or gdb etc.

Revision history for this message
Roel Van de Paar (roel11) wrote :

One challenge is that mysqld very often crashes with this assertion;

2014-11-24 07:55:05 3237 [Note] InnoDB: Resuming purgemysqld: /mnt/workspace/percona-server-5.6-binaries-debug-yassl/label_exp/centos6-64/percona-server-5.6.21-70.0/sql/lock.cc:1142: void Global_backup_lock::release(THD*): Assertion `m_lock != __null && thd->mdl_context.is_lock_owner(m_namespace, "", "", MDL_SHARED)' failed.
20:55:05 UTC - mysqld got signal 6 ;

This cuts the run through the testcase short. It may help to put a patch for that bug into the code first.

I have added another testcase to that bug @ comment #3, which is reduced from this testcase.

Revision history for this message
Roel Van de Paar (roel11) wrote :

It's a bit messy. With all "FOR BACKUP" statements removed from the testcase, the issue no longer reproduces, but it is somewhat hard to reproduce to start with. My current thought is that the FOR BACKUP statements may be related or may be the cause. Combined with having seen bug 1377093 plenty of times now (first with pquery, now here), my current view is that it may be best to fix bug 1377093 first and then re-test this bug.

From another angle: so far I was unable to reproduce the issue without TokuDB (using Ramesh's testcase). This is not to say it is definitely related to TokuDB code. Also, it cannot be reproduced on upstream, so it is likely a Percona issue.

Revision history for this message
Roel Van de Paar (roel11) wrote :

OK, mostly ignore #32. I was able to reproduce it even with FOR BACKUP removed.

mysql> show processlist;
+----+-----------------+-----------+------+---------+------+-----------------------------+------------------------------------------------------------------------------------------------------+-----------+---------------+
| Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined |
+----+-----------------+-----------+------+---------+------+-----------------------------+------------------------------------------------------------------------------------------------------+-----------+---------------+
| 1 | event_scheduler | localhost | NULL | Daemon | 11 | Waiting for next activation | NULL | 0 | 0 |
| 6 | root | localhost | test | Query | 648 | creating table | CREATE TABLE `table500_tokudb_default_key_pk_parts_2_int_autoinc` ( `c17` text CHARACTER SET latin1, | 0 | 0 |
| 45 | root | localhost | NULL | Query | 0 | init | show processlist | 0 | 0 |
+----+-----------------+-----------+------+---------+------+-----------------------------+------------------------------------------------------------------------------------------------------+-----------+---------------+

Leave the reducer running. Once it reproduces, it will be clear: either the number of queries will be significantly lower for one thread (when using ./status.sh) and not progressing, or eventually only one thread will be left;

=== Verify stages progress per mysqld, only relevant for initial simplification during the verify stage ([V])
Verify attempt #1
=== Queries processed per mysqld
Queries 5669/22781
=== Client version strings for easy access
/sda/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64-debug/bin/mysql --socket=/dev/shm/1416780157/subreducer/20/socket.sock -uroot

This was after it had progressed from 25 to 30 threads. Recommendation: start it, then check it every half hour/hour or so for a stalled mysqld.
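
A rough polling sketch that automates this periodic check (base socket directory, thread count, interval and 300-second threshold are assumptions; adjust to the running reducer):

# Every 10 minutes, print the Time of any query stuck for more than 5 minutes
while true; do
  for i in {1..30}; do
    /sda/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64-debug/bin/mysql --socket=/dev/shm/1416780157/subreducer/$i/socket.sock -uroot -e "SHOW FULL PROCESSLIST\G" 2>/dev/null | grep "Time: " | awk -v t=$i '$2 > 300 {print "subreducer " t ": stuck " $2 "s"}'
  done
  sleep 600
done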

Revision history for this message
Roel Van de Paar (roel11) wrote :

#33 was with the TokuDB plugin loaded, but with the --init TokuDB SQL file disabled.

Revision history for this message
Roel Van de Paar (roel11) wrote :

$ for i in {1..25}; do /sda/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64-debug/bin/mysql --socket=/dev/shm/1416786018/subreducer/$i/socket.sock -uroot -e "SHOW FULL PROCESSLIST\GSHOW GLOBAL VARIABLES LIKE 'pid_file'";done 2>&1 | egrep "Time: |pid_file" | grep -v "Time: 0$"

This is handier still. Change 1416... to the /dev/shm directory of the running reducer.

Revision history for this message
Roel Van de Paar (roel11) wrote :

In spite of numerous attempts, I am not able to reproduce with TokuDB completely removed.

Revision history for this message
Ramesh Sivaraman (rameshvs02) wrote :

I could not reproduce the issue with debug build after removing tokudb plugin.

Revision history for this message
Ramesh Sivaraman (rameshvs02) wrote :

Some findings with the optimized build:

i) Reproducible with the opt build + without the TokuDB plugin and I_S TokuDB tables

mysql> show processlist;
+----+------+-----------+------+---------+------+----------------+-------------------------------------------------------------------------+-----------+---------------+
| Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined |
+----+------+-----------+------+---------+------+----------------+-------------------------------------------------------------------------+-----------+---------------+
| 4 | root | localhost | test | Query | 458 | Opening tables | REPLACE INTO `table500_tokudb_default_int` ( `c7` ) VALUES
( 6792960 ) | 0 | 0 |
| 6 | root | localhost | test | Query | 0 | init | show processlist | 0 | 0 |
+----+------+-----------+------+---------+------+----------------+-------------------------------------------------------------------------+-----------+---------------+
2 rows in set (0.01 sec)

mysql> show tables like 'table500_tokudb_default_int';
Empty set (0.00 sec)

mysql> select database();
+------------+
| database() |
+------------+
| test |
+------------+
1 row in set (0.00 sec)

mysql>

In the above testcase the table "table500_tokudb_default_int" is not present in the test database, yet in SHOW PROCESSLIST the "REPLACE INTO `table500_tokudb_default_int`..." statement is in a hung state.

ii) When I removed the "LOCK BINLOG FOR BACKUP" statement from the SQL file, it became non-reproducible with the optimized build (tested with and without the TokuDB plugin).

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

OK, so
1) TokuDB seems to be required for the bug to reproduce;
2) Can you please reach consensus between comments #32 and #38 on whether backup locks are required for the bug to reproduce?

Revision history for this message
Roel Van de Paar (roel11) wrote :

1) No, Ramesh was able to reproduce without TokuDB in #38
2) #33: No backup locks required

IOW, this issue can be reproduced on both opt and debug builds, without TokuDB, and without backup locks.

Revision history for this message
Roel Van de Paar (roel11) wrote :

The challenge, however, is that each time the setup changes, the reproducibility of the issue changes. This makes for "I did this and it was no longer reproducible" syndrome. IOW, we will have to work with what we have - which may be either backup locks, or an opt build, or TokuDB enabled, at any given time.

Revision history for this message
Roel Van de Paar (roel11) wrote :

The issue has always seemed more reproducible in Ramesh's setup than in mine. I can reproduce it, but it takes a fair number of tries. I am discussing with Ramesh to grant Laurynas access to the server.

Revision history for this message
Roel Van de Paar (roel11) wrote :

The bug happens on a fair number of other queries too!

| 1 | event_scheduler | localhost | NULL | Daemon | 237 | Waiting on empty queue | NULL | 0 | 0 |
| 5 | root | localhost | test | Query | 237 | checking permissions | TRUNCATE `table500_tokudb_default_int` | 0 | 0 |

| 352 | root | localhost | test | Query | 142 | closing tables | SHOW WARNINGS | 0 | 0 |

Revision history for this message
Roel Van de Paar (roel11) wrote :

| 5 | root | localhost | test | Query | 14 | Opening tables | REPLACE INTO t500_tokudb_default_int (c7) VALUES (1) | 0 |

Revision history for this message
Roel Van de Paar (roel11) wrote :

After several hours of hacking reducer.sh to use SOURCE (thanks Ramesh!) and reducer-script-handholding, we finally have a good/short testcase for this bug!

===============
DROP DATABASE transforms;CREATE DATABASE transforms;DROP DATABASE test;CREATE DATABASE test;USE test;
CREATE TABLE `t100_innodb_tokudb_small` (
`c17` text CHARACTER SET latin1,
key (`c17` (1))) ENGINE=innodb ROW_FORMAT=tokudb_small;
SET AUTOCOMMIT=OFF;
FLUSH TABLES `t100_innodb_tokudb_small` FOR EXPORT;
LOCK BINLOG FOR BACKUP;
UNLOCK TABLES;
UNLOCK BINLOG;
UPDATE LOW_PRIORITY `t100_innodb_compressed` SET `c9`='2001-08-03 00:00:52.041209' LIMIT 1;
REPLACE INTO `t500_tokudb_default_int` (`c7`) VALUES (1);
===============

Server startup command example:

/sda/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64/bin/mysqld --no-defaults --basedir=/sda/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64 --datadir=/dev/shm/1417519395/data --tmpdir=/dev/shm/1417519395/tmp --port=37921 --pid-file=/dev/shm/1417519395/pid.pid --socket=/dev/shm/1417519395/socket.sock --user=roel --log-output=none --sql_mode=ONLY_FULL_GROUP_BY --log-error=/dev/shm/1417519395/error.log.out --event-scheduler=ON

* If you SOURCE it in an optimized build, you get a hang (to be confirmed outside reducer; it works fine in my hacked reducer)

* If you paste it into an optimized CLI, you get a sig11

  /sda/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64/bin/mysqld(_ZN11MDL_context12release_lockE17enum_mdl_durationP10MDL_ticket+0x32)[0x66a422]
  /sda/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64/bin/mysqld(_ZN11MDL_context27release_locks_stored_beforeE17enum_mdl_durationP10MDL_ticket+0x35)[0x66a485]
  /sda/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64/bin/mysqld(_ZN11MDL_context21rollback_to_savepointERK13MDL_savepoint+0x17)[0x66a6f7]

* If you paste it into a debug CLI, you get an assert (https://bugs.launchpad.net/percona-server/+bug/1377093)

  mysqld: /mnt/workspace/percona-server-5.6-binaries-debug-yassl/label_exp/centos6-64/percona-server-5.6.21-69.0/sql/lock.cc:1142: void Global_backup_lock::release(THD*): Assertion `m_lock != __null && thd->mdl_context.is_lock_owner(m_namespace, "", "", MDL_SHARED)' failed.

* If you source it in a debug build, you get another crash (it looks like there is no existing bug report for it yet)

  /sda/Percona-Server-5.6.21-rel69.0-687.Linux.x86_64-debug/bin/mysqld(_ZN18Global_backup_lock7releaseEP3THD+0x8e)[0x957aa8]
  /sda/Percona-Server-5.6.21-rel69.0-687.Linux.x86_64-debug/bin/mysqld(_Z21mysql_execute_commandP3THD+0x4399)[0x7e66c8]
  /sda/Percona-Server-5.6.21-rel69.0-687.Linux.x86_64-debug/bin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x5a6)[0x7ed4bf]

* There are other related crashes. I may reduce those separately tomorrow.

As said, this looks serious.

summary: - Sporadic partial-hangup
+ Sporadic partial-hangup on various queries + related (same-testcase)
+ crashes/asserts
Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

Reduced testcase

CREATE TABLE t (a int) ENGINE=innodb;
FLUSH TABLES t FOR EXPORT;
LOCK BINLOG FOR BACKUP;
UNLOCK TABLES;
UNLOCK BINLOG;
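
A minimal sketch for re-running these reduced statements against a scratch server (the file name and socket path are assumptions):

# Save the five statements above as reduced.sql, then redirect them into the server;
# on affected builds the client hangs or the server asserts/crashes
bin/mysql -uroot --socket=/tmp/mysql.sock test < reduced.sql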

tags: added: backup-locks
Revision history for this message
Roel Van de Paar (roel11) wrote :

This issue is not restricted to backup locks, if my testing in #33 was valid. I remember checking twice back then.

Revision history for this message
Ramesh Sivaraman (rameshvs02) wrote :

Got another assertion when we redirected the below testcase into the server three times:

DROP DATABASE test;CREATE DATABASE test;USE test;
create TABLE t0(i int,j int)ENGINE=innodb;
FLUSH TABLES t0 FOR EXPORT;
LOCK BINLOG FOR BACKUP;

** Testcase redirection command

/ssd/ramesh/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64/bin/mysql -A -uroot -S/ssd/ramesh/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64/socket.sock test < testcase.sql

*** GDB info

************ GDB
+bt
#0 0x00007f4ea139a771 in pthread_kill () from /lib64/libpthread.so.0
#1 0x000000000067679d in handle_fatal_signal (sig=11) at /mnt/workspace/percona-server-5.6-binaries-opt-yassl/label_exp/centos6-64/percona-server-5.6.21-70.0/sql/signal_handler.cc:236
#2 <signal handler called>
#3 operator() (this=<synthetic pointer>, thr=...) at /mnt/workspace/percona-server-5.6-binaries-opt-yassl/label_exp/centos6-64/percona-server-5.6.21-70.0/extra/yassl/src/yassl_int.cpp:1722
#4 find_if<mySTL::list<yaSSL::ThreadError>::iterator, yaSSL::yassl_int_cpp_local2::thr_match> (pred=..., last=..., first=...) at /mnt/workspace/percona-server-5.6-binaries-opt-yassl/label_exp/centos6-64/percona-server-5.6.21-70.0/extra/yassl/taocrypt/mySTL/algorithm.hpp:68
#5 yaSSL::Errors::Remove (this=0x7f4de9022040) at /mnt/workspace/percona-server-5.6-binaries-opt-yassl/label_exp/centos6-64/percona-server-5.6.21-70.0/extra/yassl/src/yassl_int.cpp:1791
#6 0x00000000005ad824 in one_thread_per_connection_end (thd=0x7f4e04f84000, block_pthread=true) at /mnt/workspace/percona-server-5.6-binaries-opt-yassl/label_exp/centos6-64/percona-server-5.6.21-70.0/sql/mysqld.cc:2843
#7 0x00000000006cb346 in do_handle_one_connection (thd_arg=thd_arg@entry=0x7f4e10f23000) at /mnt/workspace/percona-server-5.6-binaries-opt-yassl/label_exp/centos6-64/percona-server-5.6.21-70.0/sql/sql_connect.cc:1546
#8 0x00000000006cb4b0 in handle_one_connection (arg=arg@entry=0x7f4e10f23000) at /mnt/workspace/percona-server-5.6-binaries-opt-yassl/label_exp/centos6-64/percona-server-5.6.21-70.0/sql/sql_connect.cc:1443
#9 0x0000000000af86d3 in pfs_spawn_thread (arg=0x7f4e10f393e0) at /mnt/workspace/percona-server-5.6-binaries-opt-yassl/label_exp/centos6-64/percona-server-5.6.21-70.0/storage/perfschema/pfs.cc:1860
#10 0x00007f4ea1395df3 in start_thread () from /lib64/libpthread.so.0
#11 0x00007f4ea005f01d in clone () from /lib64/libc.so.6

************* Error

key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=1
max_threads=153
thread_count=1
connection_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 69184 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7f03a9f92000
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7f0446c30e40 thread_stack 0x40000
/ssd/ramesh/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64/bin/mysqld(my_print_stacktrace+0x2c)[0x8e9f2c]
/ssd/ramesh/Percona-Server-5.6.21-rel70.0-693.Linux.x86_64/bin/mysqld(handle_fatal_signal+0x461)[0x676831]
/lib64/libpthread.s...


Revision history for this message
Ramesh Sivaraman (rameshvs02) wrote :

Re comment #48 - had a discussion with Laurynas; it looks like the same bug.

Revision history for this message
Alexey Kopytov (akopytov) wrote :
Revision history for this message
Shahriyar Rzayev (rzayev-sehriyar) wrote :

Percona now uses JIRA for bug reports so this bug report is migrated to: https://jira.percona.com/browse/PS-1540
