...
==== MTR log
2014-09-21 08:59:11 22282 [Note] WSREP: Requesting state transfer: success, donor: 0
WSREP_SST: [INFO] Proceeding with SST (20140921 08:59:12.250)
WSREP_SST: [INFO] Cleaning the existing datadir (20140921 08:59:12.252)
removed `/home/dbennett/work/1/Percona-XtraDB-Cluster-5.6.20-rel68.0-25.7.886.Linux.x86_64/percona-xtradb-cluster-tests/var/w1/var901/data/gvwstate.dat'
WSREP_SST: [INFO] Evaluating socat -u TCP-LISTEN:23038,reuseaddr stdio | gzip -dc | xbstream -x; RC=( ${PIPESTATUS[@]} ) (20140921 08:59:12.263)
2014-09-21 08:59:13 22282 [Note] WSREP: (144e68c9, 'tcp://127.0.0.1:31049') turning message relay requesting off
2014-09-21 08:59:28 22282 [Note] WSREP: 0.0 (75fde48ae41e): State transfer to 1.0 (75fde48ae41e) complete.
2014-09-21 08:59:28 22282 [Note] WSREP: Member 0.0 (75fde48ae41e) synced with group.
WSREP_SST: [INFO] Index compaction detected (20140921 08:59:41.253)
WSREP_SST: [INFO] Rebuilding during prepare with 4 threads (20140921 08:59:41.261)
WSREP_SST: [INFO] Compressed qpress files found (20140921 08:59:41.266)
WSREP_SST: [INFO] Decompression with 12 threads (20140921 08:59:41.269)
WSREP_SST: [INFO] Evaluating find /home/dbennett/work/1/Percona-XtraDB-Cluster-5.6.20-rel68.0-25.7.886.Linux.x86_64/percona-xtradb-cluster-tests/var/w1/var901/data/ -type f -name '*.qp' -printf '%p\n%h\n' | xargs -n 2 qpress -T12d (20140921 08:59:41.272)
WSREP_SST: [INFO] Removing qpress files after decompression (20140921 08:59:41.315)
WSREP_SST: [INFO] Preparing the backup at /home/dbennett/work/1/Percona-XtraDB-Cluster-5.6.20-rel68.0-25.7.886.Linux.x86_64/percona-xtradb-cluster-tests/var/w1/var901/data/ (20140921 08:59:41.318)
WSREP_SST: [INFO] Evaluating innobackupex --no-version-check --apply-log $rebuildcmd ${DATA} &>/home/dbennett/logs/innobackup.prepare.14092108591411304351.log (20140921 08:59:41.320)
WSREP_SST: [ERROR] Cleanup after exit with status:1 (20140921 08:59:45.373)
WSREP_SST: [INFO] Removing the sst_in_progress file (20140921 08:59:45.375)
2014-09-21 08:59:45 22282 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '127.0.0.1:23038' --auth 'root:password' --datadir '/home/dbennett/work/1/Percona-XtraDB-Cluster-5.6.20-rel68.0-25.7.886.Linux.x86_64/percona-xtradb-cluster-tests/var/w1/var901/data/' --defaults-file '/home/dbennett/work/1/Percona-XtraDB-Cluster-5.6.20-rel68.0-25.7.886.Linux.x86_64/percona-xtradb-cluster-tests/var/w1/var901/my.cnf' --parent '22282' '' : 1 (Operation not permitted)
2014-09-21 08:59:45 22282 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2014-09-21 08:59:45 22282 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
2014-09-21 08:59:45 22282 [ERROR] Aborting
==== innobackup.prepare.log
...
[02] Checking if there are indexes to rebuild in table sbtest/sbtest1 (space id: 6)
[02] Found index k_1
[02] Rebuilding 1 index(es).
2014-09-21 08:59:45 7fa69c5d1940 InnoDB: Assertion failure in thread 140353564645696 in file lock0lock.cc line 4089
InnoDB: Failing assertion: table->n_ref_count > 0 || !table->can_be_evicted
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
12:59:45 UTC - xtrabackup got signal 6 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
xtrabackup(my_print_stacktrace+0x32) [0xba05cd]
xtrabackup(handle_fatal_signal+0x335) [0xb57021]
/lib64/libpthread.so.0 [0x7fa6ad427ca0]
/lib64/libc.so.6(gsignal+0x35) [0x7fa6abca3fc5]
/lib64/libc.so.6(abort+0x110) [0x7fa6abca5a70]
xtrabackup [0x7217cf]
xtrabackup(lock_table(unsigned long, dict_table_t*, lock_mode, que_thr_t*)+0x15b) [0x726339]
xtrabackup(row_merge_lock_table(trx_t*, dict_table_t*, lock_mode)+0x13b) [0x7788b9]
xtrabackup [0x5f28a0]
xtrabackup [0x5f2da0]
/lib64/libpthread.so.0 [0x7fa6ad41f83d]
/lib64/libc.so.6(clone+0x6d) [0x7fa6abd48fcd]
This same issue causes intermittent PXC SST failures when the [xtrabackup] compact option is set.
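A possible workaround until this is fixed (an untested assumption, based on the trace above pointing at the index rebuild performed for compact backups during --apply-log) is to drop `compact` and `rebuild-threads` from the joiner's [xtrabackup] section, keeping compression, so the prepare step never enters the index rebuild path:

```ini
# Hypothetical workaround sketch: keep qpress compression and the
# socat/xbstream transfer, but skip index compaction so that
# innobackupex --apply-log does not rebuild indexes (the code path
# hitting the lock0lock.cc assertion above).
[xtrabackup]
compress
compress-threads=4

[sst]
compressor="gzip"
streamfmt=xbstream
transferfmt=socat
```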
==== CentOS 5 x86_64
==== PXC
jenkins build Percona-XtraDB-Cluster-5.6.20-rel68.0-25.7.886.Linux.x86_64.tar.gz
==== xtrabackup 2.2 (revno: 5022)
-DCMAKE_CXX_FLAGS=-m64
-DCMAKE_C_FLAGS=-m64
-DWITH_DEBUG=1
==== MTR test
percona-xtradb-cluster-tests/t/xb_galera_sst_advanced-v2.sh
==== percona-xtradb-cluster-tests/conf/conf2.cnf-node1 (bug occurs with any 'compact' conf)
# compress+compact + stream-compress
# v2only
[xtrabackup]
compress
compact
compress-threads=4
rebuild-threads=4
[sst]
compressor="gzip"
streamfmt=xbstream
transferfmt=socat