InnoDB: Failing assertion: sym_node->table != NULL in 5.6.30-25.16

Bug #1598761 reported by Wa Dev
Affects: Percona XtraDB Cluster (moved to https://jira.percona.com/projects/PXC)
Status: Invalid
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

When two tasks run concurrently, one altering a table while the other is simultaneously trying to update it, the following error occurs during replication:

2016-07-04 09:57:30 7ef7c8ba0700 InnoDB: FTS Optimize Removing table $DB1$/#sql2-780c-11e1ee
2016-07-04 09:57:44 7ef7c8ba0700 InnoDB: FTS Optimize Removing table $DB2$/#sql2-780c-11e1ee
2016-07-04 09:57:51 7ef7c8ba0700 InnoDB: Assertion failure in thread 139602689656576 in file pars0pars.cc line 865
InnoDB: Failing assertion: sym_node->table != NULL
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
07:57:51 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://bugs.launchpad.net/percona-xtradb-cluster

key_buffer_size=33554432
read_buffer_size=131072
max_used_connections=35
max_threads=1502
thread_count=20
connection_count=18
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 631171 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0x90aaec]
/usr/sbin/mysqld(handle_fatal_signal+0x469)[0x68ac69]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf8d0)[0x7f04bf67a8d0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x37)[0x7f04bd600067]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x148)[0x7f04bd601448]
/usr/sbin/mysqld[0xa2b96d]
/usr/sbin/mysqld(_Z7yyparsev+0x131b)[0xb6982b]
/usr/sbin/mysqld[0xa2cfef]
/usr/sbin/mysqld[0xb65f93]
/usr/sbin/mysqld[0xb52492]
/usr/sbin/mysqld[0xb52bf6]
/usr/sbin/mysqld(_Z23fts_optimize_sync_tablem+0x40)[0xb5e590]
/usr/sbin/mysqld[0xb5e89c]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x80a4)[0x7f04bf6730a4]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f04bd6b387d]
You may download the Percona XtraDB Cluster operations manual by visiting
http://www.percona.com/software/percona-xtradb-cluster/. You may find information
in the manual which will help you identify the cause of the crash.
160704 09:57:52 mysqld_safe Number of processes running now: 0
160704 09:57:52 mysqld_safe WSREP: not restarting wsrep node automatically
160704 09:57:52 mysqld_safe mysqld from pid file /var/lib/mysql/rdb1.pid ended
160704 09:59:39 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
160704 09:59:39 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.rntVmz' --pid-file='/var/lib/mysql/rdb1-recover.pid'
2016-07-04 09:59:39 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-07-04 09:59:39 0 [Note] /usr/sbin/mysqld (mysqld 5.6.30-76.3-56-log) starting as process 13720 ...
160704 09:59:51 mysqld_safe WSREP: Recovered position c2468c46-c7df-11e4-9673-af44ab631c1f:29560613
Log of wsrep recovery (--wsrep-recover):
2016-07-04 09:59:39 13720 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
2016-07-04 09:59:39 13720 [Note] Plugin 'FEDERATED' is disabled.
2016-07-04 09:59:39 13720 [Note] InnoDB: Using atomics to ref count buffer pool pages
2016-07-04 09:59:39 13720 [Note] InnoDB: The InnoDB memory heap is disabled
2016-07-04 09:59:39 13720 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2016-07-04 09:59:39 13720 [Note] InnoDB: Memory barrier is not used
2016-07-04 09:59:39 13720 [Note] InnoDB: Compressed tables use zlib 1.2.8
2016-07-04 09:59:39 13720 [Note] InnoDB: Using Linux native AIO
2016-07-04 09:59:39 13720 [Note] InnoDB: Using CPU crc32 instructions
2016-07-04 09:59:39 13720 [Note] InnoDB: Initializing buffer pool, size = 47.0G
2016-07-04 09:59:40 13720 [Note] InnoDB: Completed initialization of buffer pool
2016-07-04 09:59:40 13720 [Note] InnoDB: Highest supported file format is Barracuda.
2016-07-04 09:59:40 13720 [Note] InnoDB: Log scan progressed past the checkpoint lsn 170464272836
2016-07-04 09:59:40 13720 [Note] InnoDB: Database was not shutdown normally!
2016-07-04 09:59:40 13720 [Note] InnoDB: Starting crash recovery.
2016-07-04 09:59:40 13720 [Note] InnoDB: Reading tablespace information from the .ibd files...
2016-07-04 09:59:42 13720 [Note] InnoDB: Restoring possible half-written data pages
2016-07-04 09:59:42 13720 [Note] InnoDB: from the doublewrite buffer...
InnoDB: Doing recovery: scanned up to log sequence number 170469515264
InnoDB: Doing recovery: scanned up to log sequence number 170474758144
InnoDB: Doing recovery: scanned up to log sequence number 170480001024
InnoDB: Doing recovery: scanned up to log sequence number 170485243904
InnoDB: Doing recovery: scanned up to log sequence number 170490486784
[..]

The schema names have been redacted, but the two schemas are mirrored in structure and hold largely similar (though not identical) data.

It may be worth noting that we did not experience anything similar with 5.6.26-25.12.1, with which we used XtraBackup v2 for replication (the releases in between were tested and deemed unacceptable for us due to different bugs that have since been fixed).

This is a multi-master configuration with three masters, only one of which receives client connections; that is the node that went down. The other two "slave masters" were not taken down and were able to transfer state back (well, kind of) to the primary.
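
For illustration only, the shape of the workload described above is roughly the following; the table and column names are made up, and the FULLTEXT index is inferred from the "FTS Optimize Removing table" messages and the fts_optimize_sync_table frame in the stack trace rather than stated in the report:

-- Session 1: a long-running UPDATE against the full-text-indexed table
UPDATE articles SET body = CONCAT(body, ' updated') WHERE id < 1000000;

-- Session 2, issued while session 1 is still running: an ALTER on the same table
ALTER TABLE articles ADD COLUMN extra_flag TINYINT(1) UNSIGNED NOT NULL DEFAULT 0;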

Revision history for this message
Krunal Bauskar (krunal-bauskar) wrote :

I tried to reproduce this scenario, but with my use case I couldn't.
Here is what I tried.

1. Boot up 2 nodes (n1, n2)
2. Create a table and load some x rows (x large enough for the update to take some time)
3. Trigger parallel execution of UPDATE and ALTER on the same table

I don't see any crash.

I also see your test case involves a full-text index.
If this problem is reproducible, can you share the test case?
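
A minimal sketch of such a test case, with a full-text index added since that seems to be the distinguishing factor here; node setup is omitted and all names, types and row counts are illustrative. The last two statements need to run concurrently from separate sessions:

-- 2. Create a table with a FULLTEXT index and load enough rows for the
--    UPDATE below to take a noticeable amount of time
CREATE TABLE t1 (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(192),
  body TEXT,
  FULLTEXT KEY ft_title_body (title, body)
) ENGINE=InnoDB;

INSERT INTO t1 (title, body) VALUES ('seed', REPEAT('some text ', 100));
-- Re-run this statement ~20 times; it doubles the row count each time
INSERT INTO t1 (title, body) SELECT title, body FROM t1;

-- 3. Run these two statements in parallel from two different sessions
UPDATE t1 SET body = CONCAT(body, ' x');          -- session A
ALTER TABLE t1 ADD COLUMN note VARCHAR(64) NULL;  -- session B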

Revision history for this message
Wa Dev (wadev-h) wrote :

Well, I cannot provide every detail since the actual case is confidential, but I will try to reproduce it in a test environment (since this just happened in production, I'd rather not stress the system). Due to the scale of the task this might take some time. It also seems that it does not occur every time.

Here is the structure of the table in question at a glance (column names removed; the columns shown are the Type, Null, Key, Default and Extra fields of the DESCRIBE output):

| int(11) | NO | PRI | NULL | auto_increment |
| varchar(32) | YES | MUL | NULL | |
| varchar(192) | YES | MUL | NULL | |
| varchar(1024) | YES | MUL | NULL | |
| text | YES | | NULL | |
| float(11,2) | YES | | NULL | |
| double(20,2) | YES | | NULL | |
| float(11,2) | NO | | 27.00 | |
| double | NO | | 0 | |
| int(11) | YES | MUL | NULL | |
| varchar(64) | YES | | NULL | |
| int(11) | NO | | 0 | |
| text | YES | | NULL | |
| int(11) | YES | | NULL | |
| varchar(16) | YES | | NULL | |
| int(11) | NO | | 0 | |
| bigint(20) unsigned | YES | MUL | NULL | |
| int(11) | YES | | 1 | |
| varchar(32) | YES | MUL | NULL | |
| bigint(20) unsigned | YES | | NULL | |
| int(10) unsigned | YES | | 0 | |
| float | NO | | 0 | |
| int(1) | NO | | 0 | |
| tinyint(1) unsigned | YES | | 0 | |
| tinyint(1) unsigned | YES | | 0 | |
| tinyint(1) unsigned | YES | | NULL | |
| tinyint(1) unsigned | YES | | 0 | |
| tinyint(3) unsigned | YES | | 0 | |
| double(20,2) | YES | | NULL | |
| double(20,2) | YES | | 0.00 | |
| double(20,2) | YES | | NULL | |
| timestamp | NO | | 0000-00-00 00:00:00 | |
| tinyint(3) unsigned | YES | | 0 | |
| int(11) | NO | | 0 | |
| int(1) | NO | | 0 | |
| int(1) | YES | | 0 | |
| tinyint(3) unsigned | YES | | 0 | ...


summary: - InnoDB: Failing assertion: sym_node->table != NULL in 5.6.30-76.3
+ InnoDB: Failing assertion: sym_node->table != NULL in 5.6.30-25.16
Revision history for this message
Wa Dev (wadev-h) wrote :

Any ETA for a Cluster release based on the just released https://www.percona.com/blog/2016/07/07/percona-server-5-6-31-77-0-now-available/ ?

According to upstream, the bug is a regression that should already have been fixed in 5.6.31.

Revision history for this message
Viktor Csiky (strongholdmedia) wrote :

I have backported the relevant changes from upstream 5.6.31.
Now I'm trying to find out how to integrate them. :)

Revision history for this message
Krunal Bauskar (krunal-bauskar) wrote :

Given that it is an upstream issue, it will be fixed when PXC 5.6.31 is released.

PXC is based on two upstreams: PS 5.6.31 and Codership 5.6.31.
As of now we don't see any major changes on the PXC front that would warrant a release, nor has upstream (Codership) done a release, so it would be a bit out of the way to do one now.

Changed in percona-xtradb-cluster:
status: New → Invalid
Revision history for this message
Wa Dev (wadev-h) wrote :

Could you please contact me @ wadev at mailbox dot hu ?

Revision history for this message
Shahriyar Rzayev (rzayev-sehriyar) wrote :

Percona now uses JIRA for bug reports, so this bug report has been migrated to: https://jira.percona.com/browse/PXC-1911
