xtrabackup 2.3 does not respect innodb_log_file_size stored in backup-my.cnf
Percona XtraBackup has moved to https://jira.percona.com/projects/PXB (status tracked in 2.4).

Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
2.3 | Fix Released | High | Sergei Glushchenko |
2.4 | Fix Released | High | Sergei Glushchenko |
Bug Description
Test case:
###
MYSQLD_
innodb-
innodb-
"
start_server
xtrabackup --backup --target-
cat $topdir/
${XB_BIN} --prepare --target-
stop_server
rm -rf ${mysql_datadir}
xtrabackup --move-back --target-
start_server
cat ${MYSQLD_ERRFILE}
###
backup-my.cnf:
###
[mysqld]
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
###
From prepare log:
###
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_
xtrabackup: innodb_
xtrabackup: innodb_
xtrabackup: innodb_
xtrabackup: innodb_
...
InnoDB: Setting log file ./ib_logfile101 size to 48 MB
InnoDB: Setting log file ./ib_logfile1 size to 48 MB
InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
InnoDB: New log files created, LSN=1626007
###
From server startup log:
###
2015-12-17 13:43:57 52535 [Warning] InnoDB: Resizing redo log from 2*3072 to 4*128 pages, LSN=1626134
2015-12-17 13:43:57 52535 [Warning] InnoDB: Starting to delete and rewrite log files.
2015-12-17 13:43:57 52535 [Note] InnoDB: Setting log file ./ib_logfile101 size to 2 MB
2015-12-17 13:43:57 52535 [Note] InnoDB: Setting log file ./ib_logfile1 size to 2 MB
2015-12-17 13:43:57 52535 [Note] InnoDB: Setting log file ./ib_logfile2 size to 2 MB
2015-12-17 13:43:57 52535 [Note] InnoDB: Setting log file ./ib_logfile3 size to 2 MB
2015-12-17 13:43:57 52535 [Note] InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
2015-12-17 13:43:57 52535 [Warning] InnoDB: New log files created, LSN=1626134
###
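The resize warning above shows the mismatch directly: the prepare step created the redo log using xtrabackup's built-in defaults (2 files of 3072 pages, i.e. 48 MB each), while the server's own configuration called for 4 files of 128 pages. Assuming the default 16 KiB InnoDB page size (the page size is not shown in the truncated logs above), the arithmetic works out as:

```shell
# Total redo log size = innodb_log_files_in_group * pages_per_file * page_size.
# 16384 bytes is the default innodb_page_size (an assumption here).
page_size=16384
echo "prepare defaults: $(( 2 * 3072 * page_size / 1024 / 1024 )) MB total (2 files x 48 MB)"
echo "server my.cnf:    $(( 4 * 128  * page_size / 1024 / 1024 )) MB total (4 files x 2 MB)"
```

On startup the server detects that the on-disk log files do not match its configuration, so it deletes and rewrites them — exactly the "Resizing redo log from 2*3072 to 4*128 pages" warning in the log.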
I also hit this problem and found out that this behavior also applies to Percona cluster setups, which in my opinion is more critical.
When a cluster node is restarted or first started and joins the cluster with a full sync, an innobackupex process is used to sync the data from a donor to the joiner.
Even in this case, the prepare step prepares the synced data with the default InnoDB settings (or with the ones stored under /etc/mysql/my.cnf), which differ from our live settings.
Moreover, since we run more than one cluster node on a single dedicated server, the InnoDB settings may differ per instance.
So with the new behavior the full sync of a cluster node failed and the node was unable to join the cluster, which is pretty critical in production environments.
I worked around this, when it happened, by adding the correct settings to the /etc/mysql/my.cnf file, but this cannot be the final solution.
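A sketch of that workaround, with illustrative values only (they must match the instance's actual live redo log settings — here, the 4 files of 2 MB reported in the server startup log above):

```ini
# /etc/mysql/my.cnf -- hypothetical example values; set them to the
# instance's real live settings so the prepare step recreates the
# redo log at the size the server expects.
[mysqld]
innodb_log_file_size      = 2M
innodb_log_files_in_group = 4
```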
Please keep this in mind when looking into this bug.
Thanks
Steffen