After upgrading to 5.6.24 MYSQL_BIN_LOG::move_crash_safe_index_file_to_index_file failed to move crash_safe_index_file to index file.
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Percona Server (moved to https://jira.percona.com/projects/PS) | New | Undecided | Unassigned | (none)
Bug Description
When trying to upgrade from 5.5.24 to 5.6.24, the following error occurs. It is intermittent and does not happen for every database that is upgraded.
Some information regarding this:
- This is a slave clone of a master that is still running 5.5.24.
- There are a few other slave clones on the same host, and all of their binlogs and binlog index files are in the same logs directory (/path/...). A quick check for the resulting cross-contamination is sketched below.
- Notice below how the error log reports MySQL checking for another server's binlog.
(full error log attached)
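Since a binlog index file is just a newline-separated list of binlog paths, any entry that does not carry the owning instance's binlog basename belongs to another server. A minimal sketch of that check, assuming a shared directory and a <name>-bin.NNNNNN naming scheme (both hypothetical, not our actual paths):

#!/bin/sh
# Sketch: flag *.index files that list binlogs of a different instance.
# Assumes each index file is named <name>-bin.index and that instance's
# own binlogs are named <name>-bin.NNNNNN (hypothetical layout; the
# substring match is approximate, good enough for a manual check).
for IDX in /path/to/logs/*-bin.index; do
    BASE=$(basename "$IDX" .index)      # e.g. instance1-bin
    # lines whose filename does not contain "$BASE." are foreign entries
    if grep -v "${BASE}\." "$IDX" >/dev/null; then
        echo "WARNING: $IDX lists binlogs from another instance:"
        grep -v "${BASE}\." "$IDX"
    fi
done

The relevant excerpt from the error log: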
2015-08-13 12:35:42 8733 [Note] Failed to execute mysql_file_stat on file '/path/
2015-08-13 12:38:54 8733 [Note] /path/to/
<started mysql server again>
2015-08-13 12:39:25 13252 [Note] InnoDB: Percona XtraDB (http://
^G/path/
2015-08-13 12:39:25 13252 [ERROR] MYSQL_BIN_
2015-08-13 12:39:25 13252 [ERROR] MYSQL_BIN_
19:39:25 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at http://
When that error occurs, the binlog.index files are all clobbered: each file ends up listing other MySQL servers' binlogs in its index. I have to manually fix the binlog.index files and re-run mysql_upgrade; it then completes without issues.
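For reference, since the index file is plain text, that manual fix can be scripted. A minimal sketch, with a hypothetical instance name and directory; the server must be down while its index file is rewritten:

#!/bin/sh
# Sketch: rebuild one instance's binlog index from the binlogs on disk.
# Directory and basename are hypothetical. MySQL accepts absolute paths
# in the index file, and the zero-padded suffixes make a lexical sort
# match the binlog order.
LOGDIR=/path/to/logs
NAME=instance1-bin
ls "$LOGDIR/$NAME".[0-9]* | sort > "$LOGDIR/$NAME.index"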
I am upgrading by adding some code to the .preinst and .postinst scripts in our 5.6 package. So when a server is on 5.5 and I install the 5.6 package, it automatically does the following:
- stops all the MySQL services on the host as part of the preinst script;
- removes the 5.5 binaries and installs the 5.6 binaries;
- starts all the MySQL services, upgrades them, and restarts them in a loop as part of the postinst script. Here is the code snippet:
for SERVER in $SERVERS; do
    # start database
    /etc/init.d/mysqld start $SERVER
    # give it some time to recover
    echo "Waiting for MySQL service $SERVER to start, sleeping $SLEEP_TIME ..."
    sleep $SLEEP_TIME
    # do the upgrades, retrying until mysql_upgrade succeeds
    echo "`date`: Starting upgrade of service $SERVER ..."
    until /path/to/... ; do
        sleep $SLEEP_TIME
    done
    echo "`date`: Upgrade of service $SERVER done."
    sleep $SLEEP_TIME
    # restart database
    echo "`date`: Restarting service $SERVER ..."
    /etc/init.d/mysqld stop $SERVER
    sleep $SLEEP_TIME
    /etc/init.d/mysqld start $SERVER
    sleep $SLEEP_TIME
done
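As an aside on the fixed sleeps above: an alternative is to poll the server until it actually answers. A sketch, assuming one socket per instance (the socket path is hypothetical):

# wait until the instance responds instead of sleeping a fixed $SLEEP_TIME
until mysqladmin --socket="/var/run/mysqld/$SERVER.sock" ping >/dev/null 2>&1; do
    sleep 1
done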
Config file:
[mysqld_safe]
open_files_limit = 65535
[mysqld]
log-warnings
log-queries-
innodb_
bind-address = x.x.x.x
user = mysql
log-bin=
log-bin-
expire_logs_days=7
socket=
slow_query_log=1
slow_query_
log-error=
server-id=7794
replicate_
table_open_
thread_
ft_min_word_len=3
innodb_
innodb_
innodb_
innodb_
default_
join_buffer_size=1M
key_buffer_size=32M
max_allowed_
max_connections
max_connect_
open_files_
query_cache_
query_cache_
read_buffer_size=1M
sort_buffer_size=1M
myisam_
thread_
tmpdir=
wait_timeout=14400
# MySQL 5.5 options
innodb_
# MySQL 5.6 options
explicit_
performance_
Just an update: I've since moved the binary logs for each MySQL instance into its own directory and have not seen this happen again.
I still think it is a bug that keeping all binary logs in the same directory causes errors.
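For anyone hitting the same thing, the workaround amounts to giving every instance a private binlog directory, e.g. (paths and names hypothetical, using the same log-bin options as the config above):

[mysqld]
# one binlog directory per instance, so no two servers share a *.index file
log-bin       = /var/lib/mysql-binlogs/instance1/instance1-bin
log-bin-index = /var/lib/mysql-binlogs/instance1/instance1-bin.index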