After upgrading to 5.6.24 MYSQL_BIN_LOG::move_crash_safe_index_file_to_index_file failed to move crash_safe_index_file to index file.

Bug #1487162 reported by SK on 2015-08-20
Affects: Percona Server
Importance: Undecided
Assigned to: Unassigned

Bug Description

When trying to upgrade from 5.5.24 to 5.6.24, the following error occurs. It is random and does not happen for all databases that are upgraded.

Some information regarding this:

- This is a slave clone of a master that is still running 5.5.24.
- There are a few other slave clones on the same host, and all of their binlogs and binlog index files live in the same logs directory (/path/to/mysql/logs). The index files are named mysql.<mysqlservername>.binlog.index.
- Notice below how the error log reports MySQL checking for another server's binlog.

(full error log attached)

2015-08-13 12:35:42 8733 [Note] Failed to execute mysql_file_stat on file '/path/to/mysql/logs/binlog-tridoshic-clone.000003'
2015-08-13 12:38:54 8733 [Note] /path/to/mysql/therita/mysqld--therita: Normal shutdown

<started mysql server again>

2015-08-13 12:39:25 13252 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.24-72.2 started; log sequence number 22137234877
^G/path/to/mysql/therita/mysqld--therita: Error on rename of '/path/to/mysql/logs/mysql.index_crash_safe' to '/path/to/mysql/logs/mysql.therita.binlog.index' (Errcode: 2 - No such file or directory)
2015-08-13 12:39:25 13252 [ERROR] MYSQL_BIN_LOG::move_crash_safe_index_file_to_index_file failed to move crash_safe_index_file to index file.
2015-08-13 12:39:25 13252 [ERROR] MYSQL_BIN_LOG::add_log_to_index failed to move crash safe index file to index file.
19:39:25 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at http://bugs.percona.com/
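The renamed-from path in the log, mysql.index_crash_safe, is suspicious: it looks like the configured index name mysql.therita.binlog.index with everything after the first dot treated as a file extension and replaced. A minimal sketch of that suspected derivation (the rule is an assumption inferred from the log, not confirmed against the server source):

```shell
# Suspected (unconfirmed) derivation of the crash-safe index file name:
# take the configured index file name and replace everything after the
# FIRST dot with ".index_crash_safe".
index_file="mysql.therita.binlog.index"
crash_safe_name="${index_file%%.*}.index_crash_safe"
echo "$crash_safe_name"   # prints: mysql.index_crash_safe
```

If that is what happens, every instance whose index is named mysql.<name>.binlog.index in the shared logs directory derives the same crash-safe file, /path/to/mysql/logs/mysql.index_crash_safe, which would explain both the rename failure and the cross-instance clobbering.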

When that error occurs, the binlog.index files are all clobbered: each file ends up listing another MySQL server's binlogs. I have to fix the binlog.index files by hand and re-run mysql_upgrade; it then completes without issues.
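The manual fix can be sketched as a script that rebuilds each instance's index file from that instance's own binlog files. The naming scheme here (binlog-<name>.NNNNNN files, mysql.<name>.binlog.index indexes) is assumed from this report and should be verified before use:

```shell
# Hypothetical repair: rewrite each mysql.<name>.binlog.index so it lists
# only that instance's own binlogs (binlog-<name>.NNNNNN), in sequence order.
rebuild_binlog_indexes() {
    local logdir=$1 idx name
    for idx in "$logdir"/mysql.*.binlog.index; do
        [ -e "$idx" ] || continue
        name=$(basename "$idx")
        name=${name#mysql.}
        name=${name%.binlog.index}
        # keep only this instance's binlogs, one full path per line
        ls "$logdir"/binlog-"$name".[0-9]* 2>/dev/null | sort > "$idx"
    done
}
# e.g.: rebuild_binlog_indexes /path/to/mysql/logs
```

Stop the affected instance before rewriting its index file, since the server also writes to it.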

I am upgrading by adding some code to the .preinst and .postinst scripts in our 5.6 package. When a server is on 5.5 and I install the 5.6 package, it automatically does the following:

- Stops all the MySQL services on the host (preinst script).
- Removes the 5.5 binaries and installs the 5.6 binaries.
- Starts all the MySQL services, then upgrades and restarts them in a loop (postinst script). Here is the code snippet:

for SERVER in $SERVERS; do
    # start the service
    /etc/init.d/mysqld start $SERVER

    # give it some time to recover
    echo "Waiting for MySQL service $SERVER to start, sleeping $SLEEP_TIME ..."
    sleep $SLEEP_TIME

    # do the upgrade once the server answers ping
    echo "`date`: Starting upgrade of service $SERVER ..."
    until /path/to/mysql/base/bin/mysqladmin -uroot -p${MYSQL_PASS} -S /path/to/mysql/${SERVER}/mysql.${SERVER}.sock ping; do
        echo "MySQL service $SERVER not running yet, sleeping $SLEEP_TIME ..."
        sleep $SLEEP_TIME
    done
    /path/to/mysql/base/bin/mysql_upgrade --force -uroot -p${MYSQL_PASS} -S /path/to/mysql/${SERVER}/mysql.${SERVER}.sock
    echo "`date`: Upgrade of service $SERVER done."
    sleep $SLEEP_TIME

    # restart the service
    echo "`date`: Restarting service $SERVER ..."
    /etc/init.d/mysqld stop $SERVER
    sleep $SLEEP_TIME
    /etc/init.d/mysqld start $SERVER
    sleep $SLEEP_TIME
done

Config file:

[mysqld_safe]
open_files_limit = 65535
[mysqld]
log-warnings
log-queries-not-using-indexes
innodb_file_per_table
bind-address = x.x.x.x
user = mysql
log-bin=/path/to/mysql/logs/binlog-therita
log-bin-index=/path/to/mysql/logs/mysql.therita.binlog.index
expire_logs_days=7
socket=/path/to/mysql/therita/mysql.therita.sock
slow_query_log=1
slow_query_log_file=/path/to/mysql/logs/mysql.therita.slow
log-error=/path/to/mysql/logs/mysql.therita.err
server-id=7794
replicate_wild_ignore_table=mysql.%
table_open_cache=2048
thread_cache_size=20
ft_min_word_len=3
innodb_buffer_pool_size=1000M
innodb_log_buffer_size=8M
innodb_log_file_size=512M
innodb_open_files=1000
default_storage_engine=InnoDB
join_buffer_size=1M
key_buffer_size=32M
max_allowed_packet=1G
max_connections=1200
max_connect_errors=10000
open_files_limit=65535
query_cache_limit=5M
query_cache_size=50M
read_buffer_size=1M
sort_buffer_size=1M
myisam_recover_options=BACKUP,FORCE
tmpdir=/path/to/mysql/tmp
wait_timeout=14400
# Mysql 5.5 options
innodb_stats_on_metadata=off
# Mysql 5.6 options
explicit_defaults_for_timestamp=1
performance_schema=OFF

SK (soumya-kandula) wrote:

Just an update: I've since moved the binary logs for each MySQL instance into its own directory and have not seen this happen again.

I still think it is a bug that keeping all binary logs in the same directory causes errors.
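In config terms, the workaround amounts to giving each instance its own binlog directory instead of the shared /path/to/mysql/logs. A rough sketch for the therita instance (the per-instance paths here are hypothetical, following the report's naming):

```ini
# Hypothetical per-instance binlog location; keeps this instance's binlogs
# and index file out of the shared logs directory:
log-bin=/path/to/mysql/therita/logs/binlog-therita
log-bin-index=/path/to/mysql/therita/logs/mysql.therita.binlog.index
```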
