xtrabackup's --parallel option asserts / crashes with a value of -1

Bug #884737 reported by Patrick Crews on 2011-11-01
Affects: Percona XtraBackup (moved to https://jira.percona.com/projects/PXB)
Status: Fix Released
Assigned to: Alexey Kopytov

Bug Description

Passing a value of -1 to --parallel results in a nasty little crash. Admittedly this is a bit of a gotcha, but I had to test it ; )
innobackupex: Starting ibbackup with command: /percona-xtrabackup/mysql-5.5/storage/innobase/xtrabackup/xtrabackup_innodb55 --defaults-file="/dbqp/workdir/bot0/s0/var/my.cnf" --backup --suspend-at-end --target-dir=/dbqp/workdir/bot0/s0/var/_xtrabackup --parallel=-1
20111101-091107 innobackupex: Waiting for ibbackup (pid=30593) to suspend
20111101-091107 innobackupex: Suspend file '/dbqp/workdir/bot0/s0/var/_xtrabackup/xtrabackup_suspended'
20111101-091107 Warning: option 'parallel': unsigned value 18446744073709551615 adjusted to 4294967295
20111101-091107 xtrabackup: uses posix_fadvise().
20111101-091107 xtrabackup: cd to /dbqp/workdir/bot0/s0/var/master-data
20111101-091107 xtrabackup: Target instance is assumed as followings.
20111101-091107 xtrabackup: innodb_data_home_dir = ./
20111101-091107 xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
20111101-091107 xtrabackup: innodb_log_group_home_dir = ./
20111101-091107 xtrabackup: innodb_log_files_in_group = 2
20111101-091107 xtrabackup: innodb_log_file_size = 5242880
20111101-091107 111101 9:11:04 InnoDB: Using Linux native AIO
20111101-091107 >> log scanned up to (1595675)
20111101-091107 111101 9:11:04 InnoDB: Assertion failure in thread 140139762509600 in file /percona-xtrabackup/mysql-5.5/storage/innobase/ut/ut0mem.c line 107
20111101-091107 InnoDB: Failing assertion: ret || !assert_on_error
20111101-091107 InnoDB: We intentionally generate a memory trap.
20111101-091107 InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
20111101-091107 InnoDB: If you get repeated assertion failures or crashes, even
20111101-091107 InnoDB: immediately after the mysqld startup, there may be
20111101-091107 InnoDB: corruption in the InnoDB tablespace. Please refer to
20111101-091107 InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
20111101-091107 InnoDB: about forcing recovery.
20111101-091107 /percona-xtrabackup/mysql-5.5/storage/innobase/xtrabackup/xtrabackup_innodb55 version 1.6.3 for MySQL server 5.5.10 Linux (x86_64) (revision id: undefined)
20111101-091107 xtrabackup: Starting 4294967295 threads for parallel data files transfer
20111101-091107 Aborted
20111101-091107 innobackupex: Error: ibbackup child process has died at /percona-xtrabackup/innobackupex line 354.

Patrick Crews (patrick-crews) wrote :

To repeat via dbqp:
 ./dbqp.py --default-server-type=mysql --basedir=/mysql-5.5 --xtrabackup-path=/percona-xtrabackup/mysql-5.5/storage/innobase/xtrabackup/xtrabackup_innodb55 --innobackupex-path=/percona-xtrabackup/innobackupex --suite=xtrabackup_basic bug884737_test

Patrick Crews (patrick-crews) wrote :

--parallel=0 works a-ok.
When it is 0, innobackupex appears to drop the option from the xtrabackup call entirely.
We might wish to document this, even though nobody should ever *really* try this (except devious QA guys...) : )

Stewart Smith (stewart) on 2011-11-15
Changed in percona-xtrabackup:
status: New → Fix Committed
assignee: nobody → Alexey Kopytov (akopytov)
importance: Undecided → Low
milestone: none → 1.6.4
Changed in percona-xtrabackup:
status: Fix Committed → Fix Released

Affected with --parallel=1000000000000 option, with xtrabackup 2.3.2:

[root@node1 mysql]# innobackupex --defaults-file=/etc/my.cnf --user=root --password=12345 --port=3306 --socket=/var/lib/mysql/mysql.sock --parallel=1000000000000 /home/MySQL-AutoXtraBackup/backup_dir/full/
Warning: option 'parallel': signed value 1000000000000 adjusted to 2147483647
151026 11:13:36 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
           At the end of a successful backup run innobackupex
           prints "completed OK!".

151026 11:13:36 version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup;port=3306;mysql_socket=/var/lib/mysql/mysql.sock' as 'root' (using password: YES).
151026 11:13:36 version_check Connected to MySQL server
151026 11:13:36 version_check Executing a version check against the server...
151026 11:13:36 version_check Done.
151026 11:13:36 Connecting to MySQL server host: localhost, user: root, password: set, port: 3306, socket: /var/lib/mysql/mysql.sock
Using server version 5.6.26-74.0-log
innobackupex version 2.3.2 based on MySQL server 5.6.24 Linux (x86_64) (revision id: 306a2e0)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: open files limit requested 0, set to 1024
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 50331648
151026 11:13:36 >> log scanned up to (1223269683)
xtrabackup: Generating a list of tablespaces
xtrabackup: Starting 2147483647 threads for parallel data files transfer
2015-10-26 11:13:37 7f938241e740 InnoDB: Assertion failure in thread 140271522277184 in file ut0mem.cc line 105
InnoDB: Failing assertion: ret || !assert_on_error
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
07:13:37 UTC - xtrabackup got signal 6 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
innobackupex(my_print_stacktrace+0x2e) [0x927d2e]
innobackupex(handle_fatal_signal+0x262) [0x7214b2]
/lib64/libpthread.so.0(+0xf130) [0x7f9381ffe130]
/lib64/libc.so.6(gsignal+0x37) [0x7f938076b5d7]
/lib64/libc.so.6(abort+0x148) [0x7f938076ccc8]
innobackupex(ut_malloc_low(unsigned lo...


Well, this error is simply out of memory; that is how it is handled by InnoDB.

Percona now uses JIRA for bug reports so this bug report is migrated to: https://jira.percona.com/browse/PXB-837
