xtrabackup 2.4.7 crashes while backing up MariaDB 10.2.x with ftwrl-* options
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Percona XtraBackup (moved to https://jira.percona.com/projects/PXB) | Status tracked in 2.4 | | | |
| 2.3 | Fix Released | Medium | Sergei Glushchenko | |
| 2.4 | Fix Released | Medium | Sergei Glushchenko | |
Bug Description
The following simple test, performed on an idle (no connections), almost "empty" instance of MariaDB 10.2.7 started with all defaults except port and datadir, shows the problem:
openxs@ao756:~$ /usr/bin/xtrabackup --no-defaults --host=127.0.0.1 --port=3308 --datadir=
170716 13:16:52 version_check Connecting to MySQL server with DSN 'dbi:mysql:
170716 13:16:52 version_check Connected to MySQL server
170716 13:16:52 version_check Executing a version check against the server...
170716 13:16:52 version_check Done.
170716 13:16:52 Connecting to MySQL server host: 127.0.0.1, user: root, password: not set, port: 3308, socket: not set
Using server version 10.2.7-MariaDB-log
/usr/bin/xtrabackup version 2.4.7 based on MySQL server 5.7.13 Linux (x86_64) (revision id: 05f1fcf)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /home/openxs/
xtrabackup: open files limit requested 0, set to 1024
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_
xtrabackup: innodb_
xtrabackup: innodb_
xtrabackup: innodb_
xtrabackup: innodb_
InnoDB: Number of pools: 1
170716 13:16:52 >> log scanned up to (298878845)
xtrabackup: Generating a list of tablespaces
InnoDB: Allocated tablespace ID 1 for mysql/innodb_
170716 13:16:52 [01] Compressing ./ibdata1 to /home/openxs/
170716 13:16:52 [01] ...done
170716 13:16:52 [01] Compressing ./mysql/
170716 13:16:52 [01] ...done
170716 13:16:53 [01] Compressing ./mysql/
170716 13:16:53 [01] ...done
170716 13:16:53 [01] Compressing ./mysql/
170716 13:16:53 [01] ...done
170716 13:16:53 [01] Compressing ./test/sbtest1.ibd to /home/openxs/
170716 13:16:53 >> log scanned up to (298878845)
170716 13:16:54 >> log scanned up to (298878845)
170716 13:16:55 >> log scanned up to (298878845)
170716 13:16:56 >> log scanned up to (298878845)
170716 13:16:57 >> log scanned up to (298878845)
170716 13:16:58 >> log scanned up to (298878845)
170716 13:16:59 >> log scanned up to (298878845)
170716 13:17:00 >> log scanned up to (298878845)
170716 13:17:00 [01] ...done
170716 13:17:00 [01] Compressing ./test/tt1.ibd to /home/openxs/
170716 13:17:00 [01] ...done
170716 13:17:00 [01] Compressing ./test/task.ibd to /home/openxs/
170716 13:17:00 [01] ...done
170716 13:17:00 [01] Compressing ./test/incident.ibd to /home/openxs/
170716 13:17:00 [01] ...done
170716 13:17:00 [01] Compressing ./test/tt2.ibd to /home/openxs/
170716 13:17:00 [01] ...done
170716 13:17:01 [01] Compressing ./test/t1i.ibd to /home/openxs/
170716 13:17:01 [01] ...done
170716 13:17:01 [01] Compressing ./test/t.ibd to /home/openxs/
170716 13:17:01 [01] ...done
170716 13:17:01 [01] Compressing ./test/bd.ibd to /home/openxs/
170716 13:17:01 [01] ...done
170716 13:17:01 [01] Compressing ./test/tt.ibd to /home/openxs/
170716 13:17:01 [01] ...done
170716 13:17:01 >> log scanned up to (298878845)
170716 13:17:01 Waiting 5 seconds for queries running longer than 300 seconds to finish
10:17:01 UTC - xtrabackup got signal 11 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
/usr/bin/
/usr/bin/
/lib/x86_
/lib/x86_
/usr/bin/
/usr/bin/
/usr/bin/
/usr/bin/
/lib/x86_
/usr/bin/
Please report a bug at https:/
openxs@ao756:~$
The problem is repeatable with 10.2.6, but is not repeatable with Percona Server 5.7.x or MariaDB 10.1.x.
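The exact ftwrl-* options are truncated from the command line above. Based on the "Waiting 5 seconds for queries running longer than 300 seconds to finish" message, they were presumably of the following form (a hypothetical reconstruction; the values are inferred from the log, not confirmed by the report):

```ini
# Hypothetical reconstruction of the ftwrl-* settings implied by the log;
# the original command line in this report is truncated.
[xtrabackup]
ftwrl-wait-timeout = 5       # wait up to 5 seconds for long queries to finish
ftwrl-wait-threshold = 300   # a query counts as "long-running" after 300 seconds
```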
Without these options everything works as expected:
openxs@ao756:~$ /usr/bin/xtrabackup --no-defaults --host=127.0.0.1 --port=3308 --datadir=
...
170716 13:21:50 [01] Compressing ./test/sbtest1.frm to /home/openxs/
170716 13:21:50 [01] ...done
170716 13:21:50 [01] Compressing ./test/db.opt to /home/openxs/
170716 13:21:50 [01] ...done
170716 13:21:50 [01] Compressing ./test/t1i.frm to /home/openxs/
170716 13:21:50 [01] ...done
170716 13:21:50 [01] Compressing ./performance_
170716 13:21:50 [01] ...done
170716 13:21:50 Finished backing up non-InnoDB tables and files
170716 13:21:50 [00] Compressing xtrabackup_
170716 13:21:50 [00] ...done
170716 13:21:50 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS...
xtrabackup: The latest check point (for incremental): '298878836'
170716 13:21:50 >> log scanned up to (298878845)
xtrabackup: Stopping log copying thread.
170716 13:21:50 Executing UNLOCK TABLES
170716 13:21:50 All tables unlocked
170716 13:21:50 [00] Compressing ib_buffer_pool to /home/openxs/
170716 13:21:50 [00] ...done
170716 13:21:50 Backup created in directory '/home/
MySQL binlog position: filename 'ao756-bin.000001', position '328', GTID of the last change ''
170716 13:21:50 [00] Compressing backup-my.cnf
170716 13:21:50 [00] ...done
170716 13:21:50 [00] Compressing xtrabackup_info
170716 13:21:50 [00] ...done
xtrabackup: Transaction log of lsn (298878836) to (298878845) was copied.
170716 13:21:50 completed OK!
tags: added mariadb
Well, it is a kind of incompatibility introduced by MariaDB. Have you ever seen SHOW PROCESSLIST returning NULL in the TIME column? MariaDB does:
MariaDB [(none)]> show processlist;
+----+-------------+-----------+------+---------+------+--------------------------+------------------+----------+
| Id | User        | Host      | db   | Command | Time | State                    | Info             | Progress |
+----+-------------+-----------+------+---------+------+--------------------------+------------------+----------+
|  1 | system user |           | NULL | Daemon  | NULL | InnoDB purge coordinator | NULL             |    0.000 |
|  2 | system user |           | NULL | Daemon  | NULL | InnoDB purge worker      | NULL             |    0.000 |
|  3 | system user |           | NULL | Daemon  | NULL | InnoDB purge worker      | NULL             |    0.000 |
|  4 | system user |           | NULL | Daemon  | NULL | InnoDB purge worker      | NULL             |    0.000 |
|  5 | system user |           | NULL | Daemon  | NULL | InnoDB shutdown handler  | NULL             |    0.000 |
| 19 | root        | localhost | NULL | Sleep   |  156 |                          | NULL             |    0.000 |
| 20 | root        | localhost | NULL | Query   |    0 | init                     | show processlist | 0.000    |
+----+-------------+-----------+------+---------+------+--------------------------+------------------+----------+
7 rows in set (0.00 sec)
It is trivial to fix. But I wonder who would use these options, given that MariaDB has support for backup locks?
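Conceptually, the fix is a NULL check before converting the TIME value. A minimal Python sketch of the safe parsing (a hypothetical helper illustrating the idea, not xtrabackup's actual C++ code):

```python
# Sketch of the kind of check xtrabackup performs around FTWRL when a
# wait threshold is set: scan SHOW PROCESSLIST for long-running queries.
# On MariaDB 10.2, background "system user" threads report TIME as NULL,
# so the code must treat a NULL TIME as "not long-running" instead of
# blindly converting it (the likely cause of the signal 11 above).

def queries_exceeding(processlist_rows, threshold_seconds):
    """Return the Ids of client queries running longer than the threshold.

    Each row is a dict mimicking one SHOW PROCESSLIST row; the "Time"
    value may be None for MariaDB background threads.
    """
    long_running = []
    for row in processlist_rows:
        t = row.get("Time")
        if t is None:
            continue  # MariaDB background thread: skip safely
        if row.get("Command") in ("Sleep", "Daemon"):
            continue  # not a user query that would block FTWRL
        if int(t) > threshold_seconds:
            long_running.append(row["Id"])
    return long_running

# Rows modeled on the processlist output shown above, plus one
# hypothetical long-running query to exercise the threshold.
rows = [
    {"Id": 1, "User": "system user", "Command": "Daemon", "Time": None},
    {"Id": 19, "User": "root", "Command": "Query", "Time": 456},
    {"Id": 20, "User": "root", "Command": "Query", "Time": 0},
]
print(queries_exceeding(rows, 300))  # → [19]
```

Without the `t is None` guard, the NULL TIME from the "system user" daemon threads is exactly the value a naive integer conversion trips over.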