InnoDB: Failing assertion: sym_node->table != NULL in 5.6.30-25.16
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Percona XtraDB Cluster (moved to https://jira.percona.com/projects/PXC) | Invalid | Undecided | Unassigned |
Bug Description
When running two concurrent tasks, one altering a table while the other simultaneously updates it, the following error occurs during replication:
2016-07-04 09:57:30 7ef7c8ba0700 InnoDB: FTS Optimize Removing table $DB1$/#
2016-07-04 09:57:44 7ef7c8ba0700 InnoDB: FTS Optimize Removing table $DB2$/#
2016-07-04 09:57:51 7ef7c8ba0700 InnoDB: Assertion failure in thread 139602689656576 in file pars0pars.cc line 865
InnoDB: Failing assertion: sym_node->table != NULL
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://
InnoDB: about forcing recovery.
07:57:51 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https:/
key_buffer_
read_buffer_
max_used_
max_threads=1502
thread_count=20
connection_count=18
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x40000
/usr/sbin/
/usr/sbin/
/lib/x86_
/lib/x86_
/lib/x86_
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/usr/sbin/
/lib/x86_
/lib/x86_
You may download the Percona XtraDB Cluster operations manual by visiting
http://
in the manual which will help you identify the cause of the crash.
160704 09:57:52 mysqld_safe Number of processes running now: 0
160704 09:57:52 mysqld_safe WSREP: not restarting wsrep node automatically
160704 09:57:52 mysqld_safe mysqld from pid file /var/lib/
160704 09:59:39 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
160704 09:59:39 mysqld_safe WSREP: Running position recovery with --log_error=
2016-07-04 09:59:39 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_
2016-07-04 09:59:39 0 [Note] /usr/sbin/mysqld (mysqld 5.6.30-76.3-56-log) starting as process 13720 ...
160704 09:59:51 mysqld_safe WSREP: Recovered position c2468c46-
Log of wsrep recovery (--wsrep-recover):
2016-07-04 09:59:39 13720 [Warning] Using unique option prefix myisam-recover instead of myisam-
2016-07-04 09:59:39 13720 [Note] Plugin 'FEDERATED' is disabled.
2016-07-04 09:59:39 13720 [Note] InnoDB: Using atomics to ref count buffer pool pages
2016-07-04 09:59:39 13720 [Note] InnoDB: The InnoDB memory heap is disabled
2016-07-04 09:59:39 13720 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2016-07-04 09:59:39 13720 [Note] InnoDB: Memory barrier is not used
2016-07-04 09:59:39 13720 [Note] InnoDB: Compressed tables use zlib 1.2.8
2016-07-04 09:59:39 13720 [Note] InnoDB: Using Linux native AIO
2016-07-04 09:59:39 13720 [Note] InnoDB: Using CPU crc32 instructions
2016-07-04 09:59:39 13720 [Note] InnoDB: Initializing buffer pool, size = 47.0G
2016-07-04 09:59:40 13720 [Note] InnoDB: Completed initialization of buffer pool
2016-07-04 09:59:40 13720 [Note] InnoDB: Highest supported file format is Barracuda.
2016-07-04 09:59:40 13720 [Note] InnoDB: Log scan progressed past the checkpoint lsn 170464272836
2016-07-04 09:59:40 13720 [Note] InnoDB: Database was not shutdown normally!
2016-07-04 09:59:40 13720 [Note] InnoDB: Starting crash recovery.
2016-07-04 09:59:40 13720 [Note] InnoDB: Reading tablespace information from the .ibd files...
2016-07-04 09:59:42 13720 [Note] InnoDB: Restoring possible half-written data pages
2016-07-04 09:59:42 13720 [Note] InnoDB: from the doublewrite buffer...
InnoDB: Doing recovery: scanned up to log sequence number 170469515264
InnoDB: Doing recovery: scanned up to log sequence number 170474758144
InnoDB: Doing recovery: scanned up to log sequence number 170480001024
InnoDB: Doing recovery: scanned up to log sequence number 170485243904
InnoDB: Doing recovery: scanned up to log sequence number 170490486784
[..]
The schema names have been redacted, but the two schemas are essentially mirrored in structure, with mostly similar (though not identical) data.
It might be worth noting that we did not experience similar issues with 5.6.26-25.12.1, with which we used Xtrabackup v2 for replication (the releases in between were tested and deemed unacceptable for us due to different bugs that have since been fixed).
This is a multi-master configuration with three masters, only one of which receives client connections, and it is that one that went down. The other two "slave masters" were not taken down and were able to transfer state back (well, kind of) to the primary.
I tried to reproduce this scenario, but with my test case I couldn't.
Here is what I tried:
1. Boot up 2 nodes (n1, n2).
2. Create a table and load x rows (x large enough for the UPDATE to take some time).
3. Trigger parallel execution of UPDATE and ALTER on the same table.
I don't see any crash.
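For reference, the reproduction attempt above corresponds to something like the following sketch. All table and column names here are illustrative (the reporter's schemas are redacted), and a FULLTEXT index is included since the crash log shows FTS optimize activity:

```sql
-- Hypothetical repro sketch; names are made up for illustration.
CREATE TABLE t1 (
    id   INT PRIMARY KEY AUTO_INCREMENT,
    body TEXT,
    FULLTEXT KEY ft_body (body)  -- crash log suggests full-text indexes are involved
) ENGINE=InnoDB;

-- Load enough rows that the UPDATE runs long enough to overlap the ALTER.
INSERT INTO t1 (body) VALUES (REPEAT('x', 1024));
INSERT INTO t1 (body) SELECT body FROM t1;  -- doubles the row count; repeat ~20 times

-- Session 1: long-running DML.
UPDATE t1 SET body = CONCAT(body, 'y');

-- Session 2, started concurrently: DDL on the same table.
ALTER TABLE t1 ADD COLUMN extra INT;
```

Whether the assertion fires likely depends on timing between the FTS background optimize thread and the DDL, so a crash may not reproduce on every run.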
I also see your test case involves a full-text index.
If this problem is reproducible, can you share the test case?