InnoDB: Failing assertion: sym_node->table != NULL

Bug #1729536 reported by Olaf van der Spek
This bug affects 3 people
Affects              Status     Importance   Assigned to   Milestone
MySQL Server         Unknown    Unknown
mysql-5.7 (Ubuntu)   Triaged    Undecided    Unassigned

Bug Description

Version: 5.7.20-0ubuntu0.16.04.1

I think it happened when I added a field to a table (via phpMyAdmin).

2017-11-02 09:36:47 0x7fdc38ff9700 InnoDB: Assertion failure in thread 140583825807104 in file pars0pars.cc line 822
InnoDB: Failing assertion: sym_node->table != NULL
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
08:36:47 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.

key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=40
max_threads=151
thread_count=8
connection_count=8
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 76385 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x3b)[0xe8a93b]
/usr/sbin/mysqld(handle_fatal_signal+0x489)[0x786749]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fdc5b70f390]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x38)[0x7fdc5aac8428]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7fdc5aaca02a]
/usr/sbin/mysqld[0x75c31a]
/usr/sbin/mysqld(_Z21pars_insert_statementP10sym_node_tPvP10sel_node_t+0x3a8)[0x1024088]
/usr/sbin/mysqld(_Z7yyparsev+0x1227)[0x12455d7]
/usr/sbin/mysqld(_Z8pars_sqlP11pars_info_tPKc+0x9e)[0x10258de]
/usr/sbin/mysqld(_Z13fts_parse_sqlP11fts_table_tP11pars_info_tPKc+0x190)[0x12251e0]
/usr/sbin/mysqld(_Z14fts_write_nodeP5trx_tPP10que_fork_tP11fts_table_tP12fts_string_tP10fts_node_t+0x292)[0x11ffa52]
/usr/sbin/mysqld[0x12031d8]
/usr/sbin/mysqld(_Z14fts_sync_tableP12dict_table_tbbb+0x329)[0x1209219]
/usr/sbin/mysqld(_Z23fts_optimize_sync_tablem+0x42)[0x1210b62]
/usr/sbin/mysqld(_Z19fts_optimize_threadPv+0x57c)[0x121a0bc]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7fdc5b7056ba]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7fdc5ab9a3dd]
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
2017-11-02T08:36:47.822416Z 0 [Warning] Changed limits: max_open_files: 1024 (requested 5000)

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Hi Olaf,
there seem to be some related known issues in percona/mysql, see bug 1399471 for more details and follow referenced further bugs from there.

I'll explicitly subscribe Lars to take a look, but as printed in the error message it would be great if you could file (and mention the number here) a bug at [1] the upstream bug tracker for mysql. They might have a backportable fix known already.

You'd have to check if [2] or [3] are your case already.

In general many of the issues seem related to e.g. migrating from percona - does your setup have any percona history that could be related?

[1]: http://bugs.mysql.com
[2]: https://bugs.mysql.com/bug.php?id=69274
[3]: https://bugs.mysql.com/bug.php?id=68987

Changed in mysql-5.7 (Ubuntu):
status: New → Incomplete
Revision history for this message
Olaf van der Spek (olafvdspek) wrote : Re: [Bug 1729536] Re: InnoDB: Failing assertion: sym_node->table != NULL

2017-11-03 10:55 GMT+01:00 ChristianEhrhardt <email address hidden>:
> Hi Olaf,
> there seem to be some related known issues in percona/mysql, see bug 1399471 for more details and follow referenced further bugs from there.
>
> I'll explicitly subscribe Lars to take a look, but as printed in the
> error message it would be great if you could file (and mention the
> number here) a bug at [1] the upstream bug tracker for mysql. They might
> have a backportable fix known already.

Since upstream requires Oracle accounts, I don't file bugs there anymore...
Note that we're running 5.7.20; I'm not sure why any backporting would
be required.

> You'd have to check if [2] or [3] are your case already.

[2] doesn't seem applicable as our crash doesn't happen consistently at startup.

Not sure about [3].

> In general many of the issues seem related to e.g. migrating from
> percona - does your setup have any percona history that could be
> related?

I didn't set up the server, but I don't think so.

I think it was first installed with Ubuntu 16.04... didn't that include
MySQL 5.7 already? If so, I don't think there were any upgrades.

--
Olaf

Revision history for this message
Eric Fjøsne (efj) wrote :

Hi Olaf,

You are not alone ...

This bug did not occur on our systems when we were using version 5.7.19, but since we upgraded to 5.7.20 we get it on a regular basis, and we are still unable to locate the exact cause.

I thought it was related specifically to replication, but according to your message it seems it can happen for other reasons as well?

I already reported this bug on the mysql bug reporting space a few days ago: https://bugs.mysql.com/bug.php?id=88653

Content of the bug report:

We are experiencing repeated and random crashes on a replication slave of multiple masters.

This server is replicating several other MySQL servers via multiple channels (4 to be exact). What we observe is that at some point the server crashes, then restarts and runs the InnoDB recovery process. After this we again have a running MySQL server, but the master position for all channels is rolled back to some point in the past ... but not the data. It is then necessary to skip the records one by one for each replication channel until we reach the point where the crash happened, so that replication can resume.

Here is a dump of the relevant part of the file /var/log/mysql/error.log:

==========================================
BACKTRACE BEGIN
==========================================
2017-11-24T23:00:35.795018Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 6825ms. The settings might not be optimal. (flushed=3 and evicted=0, during the time.)
2017-11-25 05:32:25 0x7fa6ecbbc700 InnoDB: Assertion failure in thread 140354913027840 in file pars0pars.cc line 822
InnoDB: Failing assertion: sym_node->table != NULL
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
04:32:25 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.

key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=8
max_threads=214
thread_count=10
connection_count=2
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 101418 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x3b)[0xe8a93b]
/usr/sbin/mysqld(handle_fatal_signal+0x489)[0x786749]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fce...


Revision history for this message
Eric Fjøsne (efj) wrote :

Hi ChristianEhrhardt,

I went through the bug reports you mention, but this does not seem to apply to what we are experiencing.

We also tried many approaches, thinking at first that there was some problem with the data. The only thing we have not yet tried is recreating the ibdata/ib_log files, because it seems like overkill - especially since we double-checked the data and, after a restart of the mysql service and fixing the replication status, queries actually seem to work without making the server crash ...

I believe this is somehow contextual, but I am at a loss for ideas as to how to approach this.

What I do know though is that it has to be somehow reproducible because the timeframe within which it happens is the same every day.

Thanks in advance for any help, it would be more than welcome.

On a side note, if there is a way to elegantly roll back to version 5.7.19, I would be more than happy to hear about it too.

Best regards,

Eric

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

They seemed similar to my limited insight into the matter - thanks for cross-checking the bugs, Eric!

Also linking up the upstream bug you opened.

tags: added: regression-update
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Also, as it seems to be confirmed by more than one reporter as a regression introduced by the 5.7.20-0ubuntu0.16.04.1 update, I'll tag it as such and add Marc, who did the security update.
Maybe in the context of the CVEs there were discussions that could indicate what is failing.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

@Eric, for a temporary downgrade you can force versions in apt (but this will stop upgrades, so make sure to unlock it later with the same command without a version qualifier). Also, you can usually only go to versions that are the latest in one of the pockets (release/updates/...).
So at the moment that would be 5.7.11-0ubuntu6, which I'd think is way too old.
$ apt install mysql-server-5.7=5.7.11-0ubuntu6
You also have to do the same with each dependency (to downgrade all of them).

If you have nodes that did not yet upgrade you can hold the current version via:
$ apt-mark hold <package-name>

Finally, all versions are still available, just not through apt (which is mostly meant to keep you up to date, not to go back). For each version there is a Launchpad page, e.g. for the last mysql version there is [1].
You can go to your architecture on the right and then fetch the .deb files as needed.
Once you have all the .debs for the version you want to downgrade to, you can do so with:
$ dpkg -i <list of .debs>

Of course this isn't really recommended as this was a security update, but as a workaround you might consider it.
It might help if you state on the upstream bug that it was the stable update 5.7.19->5.7.20 that (seems to have) triggered this.

[1]: https://launchpad.net/ubuntu/+source/mysql-5.7/5.7.19-0ubuntu0.16.04.1

Revision history for this message
Olaf van der Spek (olafvdspek) wrote :

Hi,

> I thought it was related specifically to replication, but according to your message, it seems it can happen for some other reason as well ?

Yes, we're not using replication at all.
I think the assert is a low-level one. Something goes wrong earlier, probably unrelated to this stack trace, and it is only noticed later.

> What I do know though is that it has to be somehow reproducible because the timeframe within which it happens is the same every day.

That's interesting. What makes this timeframe different from other timeframes? Are you altering tables?

Gr,

Olaf

Revision history for this message
Eric Fjøsne (efj) wrote :

@ChristianEhrhardt
Thanks for your replies and initiatives. Much appreciated.
I did indeed notice that the package we could roll back to using apt-get install mysql-server=XXX was 5.7.11, but that version is more than a year and a half old, which doesn't sound like such a good idea.

Coming back to our use case: the replication master is still running version 5.7.19 and is working flawlessly (fortunately ...); only the replication slave is running version 5.7.20. Without proof of any kind so far, and based on Olaf's message, this would suggest that the regression was indeed introduced by the update from 5.7.19 to 5.7.20.

As silly as it may sound, the crash is now "stabilised" on our infrastructure and handled on a daily basis using the same recovery method: fast forward the replication (basically replay the statements that weren't rolled back at the moment of crash) until there is no more replication error. I believe we will keep on doing this until a fix is published.

@Olaf:
>> What I do know though is that it has to be somehow reproducible because the timeframe within which it happens is the same every day.
> That's interesting. What makes this timeframe different from other timeframes? Are you altering tables?

It is the only time of the day (pre-production activity) when we run DDL queries (OPTIMIZE / TRUNCATE). To my knowledge, OPTIMIZE is not supported as such by InnoDB and actually recreates the whole table from scratch. TRUNCATE also performs a full table recreate.

As for the rest of the day (actual production), we have only DML queries running on the servers, except for the upgrades we manually perform.

To move forward and get some concrete information, I will go through all of our batches and:
- temporarily disable all the OPTIMIZE queries
- replace the TRUNCATE statements with DELETE FROM statements.
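
As a rough illustration of that change - the table name "events" below is only a placeholder, not one of the actual tables:

-- before: DDL statements that rebuild the table
OPTIMIZE TABLE events;   -- InnoDB maps this to a full rebuild (recreate + analyze)
TRUNCATE TABLE events;   -- also drops and recreates the table

-- after: DML only, no table rebuild
DELETE FROM events;      -- slower and does not shrink the tablespace, but avoids the DDL path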

I will post some feedback here tomorrow morning.

Thanks again for your replies,

Eric

Revision history for this message
Olaf van der Spek (olafvdspek) wrote :

2017-11-29 10:46 GMT+01:00 Eric Fjøsne <email address hidden>:
> @ChristianEhrhardt
> Thanks for your replies and initiatives. Much appreciated.
> I did indeed notice the install package we could rollback to using apt-get install mysql-server=XXX was 5.7.11, but this version is more than a year and a half old, which doesn't sound like such a good idea.

5.7.19 might still be available in your local apt cache: /var/cache/apt/archives

--
Olaf

Revision history for this message
Eric Fjøsne (efj) wrote :

@Olaf: it wasn't available anymore unfortunately.

After deactivating the OPTIMIZE statements and changing the TRUNCATE statement to DELETE FROM in our pre-production scripts, I can happily confirm there was no outage on our server during this night.

If this can help investigating this bug in any way?

Thanks in advance,

Eric

Revision history for this message
Olaf van der Spek (olafvdspek) wrote :

2017-11-30 9:12 GMT+01:00 Eric Fjøsne <email address hidden>:
> @Olaf: it wasn't available anymore unfortunately.
>
> After deactivating the OPTIMIZE statements and changing the TRUNCATE
> statement to DELETE FROM in our pre-production scripts, I can happily
> confirm there was no outage on our server during this night.
>
> If this can help investigating this bug in any way?

My guess:
1. Test with just the optimize enabled.
2. Test with just the truncate enabled.

3. If the crashes return, try to narrow it down to specific queries / tables.

--
Olaf

Revision history for this message
Eric Fjøsne (efj) wrote :

I reactivated one set of optimize queries for this night and it crashed with the exact same backtrace. So I can confirm this is indeed related to this specific set. However, there is nothing fancy about them at all.

It is a simple SQL script with 9 OPTIMIZE statements being run from the mysql client (CLI) on a single database.
OPTIMIZE TABLE Table1;
OPTIMIZE TABLE Table2;
OPTIMIZE TABLE Table3;
...

When executing the queries manually, one by one, they all work.
When executing the queries from an SQL script via the mysql client (CLI), everything works as well.

I would tend to believe that it must be linked to some execution context. The only thing I can think of is concurrent queries being run at the same time ... but again, in our case, the crash occurs on a slave server running queries from its master only. In this configuration there should not be anything like a concurrent query, I believe? Only binlog entries being read one by one and applied?

Revision history for this message
Olaf van der Spek (olafvdspek) wrote :

2017-12-01 10:32 GMT+01:00 Eric Fjøsne <email address hidden>:
> I reactivated one set of optimize queries for this night and it crashed
> with the exact same backtrace. So I can confirm this is indeed related
> to this specific set. However, there is nothing fancy about them at all.

Nice!
It might also be good to know whether just the truncates work fine or
crash as well.

> It is a simple SQL script with 9 OPTIMIZE statements being run from the mysql client (CLI) on a single database.
> OPTIMIZE TABLE Table1;
> OPTIMIZE TABLE Table2;
> OPTIMIZE TABLE Table3;

An idea here would be to test the first half of the queries one day and
the second half the next, to further isolate the culprit.

Gr,

--
Olaf

Revision history for this message
Eric Fjøsne (efj) wrote :

Dears,

I have finally narrowed it down to the single query causing the crash.

I reactivated the TRUNCATE queries and those work flawlessly. It is an OPTIMIZE TABLE query on a table of about 70000 rows that causes the mysql service to crash. The table might be in use at the same time, but only on the mysql master server, not on the slave where the crash occurs.

This OPTIMIZE TABLE is executed on the master server and then replicated to the slave server, where it causes a mysql service crash with the backtrace mentioned a few days ago.

What can I do next to further investigate this, so I can provide feedback here in an efficient way?

Thanks in advance for the help,

Eric

Revision history for this message
Olaf van der Spek (olafvdspek) wrote :

2017-12-06 9:15 GMT+01:00 Eric Fjøsne <email address hidden>:
> What can I do next to further investigate this in order to provide
> feedback over here in an efficient way ?

Thinking aloud:
1. Share table structure
2. Copy table to another name, add optimize table for the copy and see
if it still crashes.
3. If yes, try to isolate the crashes to specific columns by dropping
them one at a time.
4. If no, try to replicate the load on the second table to see if you
can cause the crashes.
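
A rough sketch of steps 2 and 3 above, with purely hypothetical names (table1 and some_col stand in for the real table and one of its columns):

CREATE TABLE table1_copy LIKE table1;            -- same structure, including any indexes
INSERT INTO table1_copy SELECT * FROM table1;    -- copy the rows
OPTIMIZE TABLE table1_copy;                      -- check whether the copy also triggers the crash
ALTER TABLE table1_copy DROP COLUMN some_col;    -- if so, drop columns one at a time ...
OPTIMIZE TABLE table1_copy;                      -- ... and retry the OPTIMIZE after each drop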

--
Olaf

Revision history for this message
Lars Tangvald (lars-tangvald) wrote :

Hi,

Thanks for your work on this!

The ideal for upstream is if you can find a self-contained testcase we
can use to reproduce from scratch, i.e. something along the lines of:

* Set up master-slave replication
* Create a table with [columns] and 70k rows
* Run OPTIMIZE TABLE <and whatever other queries> on the master
* Observe crash (not necessarily 100% of the time)
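
A sketch of what such a testcase might look like - the FULLTEXT index is only an assumption (based on the fts_optimize_thread / fts_sync_table frames in the backtraces above), and the names and row counts are placeholders:

-- on the master, with a slave replicating it
CREATE TABLE t1 (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  body TEXT,
  FULLTEXT KEY ft_body (body)
) ENGINE=InnoDB;

INSERT INTO t1 (body) VALUES ('seed row');
-- run the next statement ~17 times; each run doubles the row count (2^17 = 131072)
INSERT INTO t1 (body) SELECT CONCAT(body, ' x') FROM t1;

OPTIMIZE TABLE t1;   -- replicated to the slave; watch for the pars0pars.cc assertion there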

If you can give some information about the table in question and
configuration of master and slave it might also be of help.

--
Lars


Revision history for this message
Eric Fjøsne (efj) wrote :

Hi Lars,

Thanks for your reply.

Please note that we experienced a crash again this morning, even though all OPTIMIZE queries had been removed from our batches ... so I fear this might be misleading.

I will try to reproduce the crash in a VM.
In the meantime, please find our mysqld configuration files below:

==================================================================
Master configuration
==================================================================

# MySQL 5.7 configuration (2016.07.13 - made by efj)

# For explanations see http://dev.mysql.com/doc/mysql/en/server-system-variables.html
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
# Basic Settings - do not touch
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
skip-name-resolve

# Limiting maximum amount of connexions
max_connections = 250
max_user_connections = 50

# Security
sql_mode = ''

# Default options for new db/tables
default-storage-engine=InnoDB
character-set-server=utf8
collation-server=utf8_general_ci

# Fine tuning for full text searches - At least 3 characters
ft_min_word_len=2

# Accept incoming connections from all clients
bind-address = 0.0.0.0

# Fine Tuning
key_buffer_size = 16M
max_allowed_packet = 48M
thread_stack = 192K
thread_cache_size = 8

# This replaces the startup script and checks MyISAM tables if needed, the first time they are touched
myisam-recover-options = BACKUP

# Query Cache Configuration
query_cache_limit = 1M
query_cache_size = 128M

# Error log - should be very few entries.
log_error = /var/log/mysql/error.log

# Slow queries
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log

# Logging and Replication
server-id = 7
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 5
max_binlog_size = 100M
log-slave-updates
master_info_repository=TABLE
relay_log_info_repository=TABLE
relay_log=relay-bin.log

# InnoDB server specific configuration
innodb_buffer_pool_size = 150G
innodb_log_file_size = 256M
innodb_log_buffer_size=4M
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_thread_concurrency = 8
innodb_file_per_table

==================================================================
Slave configuration
==================================================================

# MySQL 5.7 configuration (2016.07.13 - made by efj)

# For explanations see http://dev.mysql.com/doc/mysql/en/server-system-variables.html
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
# Basic Settings - do not touch
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = ...


Revision history for this message
Eric Fjøsne (efj) wrote :

Apologies for my lack of feedback, but those were quite busy days ...

A small update though: the bug I reported on the MySQL bug tracker has been marked as a duplicate of the following one on the bugs.mysql.com portal:
https://bugs.mysql.com/bug.php?id=88844

Severity is S1: critical.

Revision history for this message
Andreas Hasenack (ahasenack) wrote :

The new upstream bug (88844) is unfortunately private and I can't see its state.

Revision history for this message
Eric Fjøsne (efj) wrote :

@Andreas: It wasn't private before ... I don't have access to it anymore either, even when I'm logged in to the portal.

We proceeded with the update of MySQL to version 5.7.21 a few days ago. We have just reactivated the OPTIMIZE and TRUNCATE queries for tomorrow morning's run.

I will post some feedback here in a week's time, or earlier if the issue is still present.

Revision history for this message
Andreas Hasenack (ahasenack) wrote :

I was told the mentioned bug is private because it contains a crash dump, and it is still open.

Revision history for this message
Eric Fjøsne (efj) wrote :

Unfortunately, despite our high hopes, we experienced this exact bug again, with the exact same trace, on version 5.7.21.

+-----------------------------+
| VERSION() |
+-----------------------------+
| 5.7.21-0ubuntu0.16.04.1-log |
+-----------------------------+

Trace found in error.log:

2018-03-16 05:53:38 0x7f8e4a4bf700 InnoDB: Assertion failure in thread 140249108576000 in file pars0pars.cc line 822
InnoDB: Failing assertion: sym_node->table != NULL
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
04:53:38 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.

key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=0
max_threads=214
thread_count=7
connection_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 101418 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x3b)[0xe8f29b]
/usr/sbin/mysqld(handle_fatal_signal+0x489)[0x787029]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fb5700bc390]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x38)[0x7fb56f475428]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7fb56f47702a]
/usr/sbin/mysqld[0x75ca3e]
/usr/sbin/mysqld(_Z21pars_insert_statementP10sym_node_tPvP10sel_node_t+0x3a8)[0xf62a08]
/usr/sbin/mysqld(_Z7yyparsev+0x1227)[0x11839a7]
/usr/sbin/mysqld(_Z8pars_sqlP11pars_info_tPKc+0x9e)[0xf6425e]
/usr/sbin/mysqld(_Z13fts_parse_sqlP11fts_table_tP11pars_info_tPKc+0x190)[0x11635b0]
/usr/sbin/mysqld(_Z14fts_write_nodeP5trx_tPP10que_fork_tP11fts_table_tP12fts_string_tP10fts_node_t+0x292)[0x113ddb2]
/usr/sbin/mysqld[0x1141568]
/usr/sbin/mysqld(_Z14fts_sync_tableP12dict_table_tbbb+0x329)[0x11475b9]
/usr/sbin/mysqld(_Z23fts_optimize_sync_tablem+0x42)[0x114ef02]
/usr/sbin/mysqld(_Z19fts_optimize_threadPv+0x57c)[0x115845c]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7fb5700b26ba]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7fb56f54741d]
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.

We recovered the replication by skipping the duplicate entries that were apparently already processed.

Revision history for this message
Robie Basak (racb) wrote :

Upstream consider this bug valid, so Triaged.

Changed in mysql-5.7 (Ubuntu):
status: Incomplete → Triaged
Revision history for this message
Josh Gitlin (jgitlinbt) wrote :

I am seeing this as well in percona-xtradb-cluster-server-5.7 (5.7.21-29.26-1.xenial) / mysqld Ver 5.7.21-20-57 for debian-linux-gnu on x86_64 (Percona XtraDB Cluster (GPL), Release rel20, Revision 1702aea, WSREP version 29.26, wsrep_29.26)

This appears to be related to https://jira.percona.com/browse/PS-745, which is marked as closed.

When executing large SQL scripts, often (but not always) those with an ALTER TABLE, I see at least one of the nodes in the cluster fail with the same stack trace:

2018-04-10 09:02:33 0x7f6701ffb700 InnoDB: Assertion failure in thread 140080391894784 in file pars0pars.cc line 822
InnoDB: Failing assertion: sym_node->table != NULL
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
16:02:33 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://jira.percona.com/projects/PXC/issues

key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=37
max_threads=151
thread_count=9
connection_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 76198 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x3b)[0xeba1fb]
/usr/sbin/mysqld(handle_fatal_signal+0x499)[0x77f3d9]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7f67d05b1390]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x38)[0x7f67cf96a428]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7f67cf96c02a]
/usr/sbin/mysqld[0x74e57c]
/usr/sbin/mysqld(_Z21pars_insert_statementP10sym_node_tPvP10sel_node_t+0x3a8)[0xfaf188]
/usr/sbin/mysqld(_Z7yyparsev+0x1227)[0x11ddd17]
/usr/sbin/mysqld(_Z8pars_sqlP11pars_info_tPKc+0x9e)[0xfb09de]
/usr/sbin/mysqld(_Z13fts_parse_sqlP11fts_table_tP11pars_info_tPKc+0x198)[0x11bd978]
/usr/sbin/mysqld(_Z14fts_write_nodeP5trx_tPP10que_fork_tP11fts_table_tP12fts_string_tP10fts_node_t+0x292)[0x1198262]
/usr/sbin/mysqld[0x119b95b]
/usr/sbin/mysqld(_Z14fts_sync_tableP12dict_table_tbbb+0x351)[0x11a0d31]
/usr/sbin/mysqld(_Z23fts_optimize_sync_tablem+0x45)[0x11a8eb5]
/usr/sbin/mysqld(_Z19fts_optimize_threadPv+0x57c)[0x11b248c]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7f67d05a76ba]
/lib/x8...


Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

After all this time the upstream case isn't resolved yet.
@Lars - is there something we/you could do to get some traction back on it? And is it actually still relevant in 2022, in mysql-8.0 times?

Because without that there is not much we can do for Ubuntu, and I'm afraid this bug will just stay open forever, which isn't helpful either :-/
