The same issue exists on Red Hat. I can use LD_PRELOAD to work around the segfault on the backup, but the restore fails no matter which libmysqlclient.so I point LD_PRELOAD at — Percona and MySQL builds alike.

Using the following versions:

Red Hat Enterprise Linux Server release 5.10 (Tikanga)
MySQL-client.x86_64 5.5.31-2.rhel5 installed
MySQL-devel.x86_64 5.5.31-2.rhel5 installed
MySQL-python.x86_64 1.2.3-0.1.c1.el5 installed
MySQL-server.x86_64 5.5.31-2.rhel5 installed
MySQL-shared.x86_64 5.5.31-2.rhel5 installed
MySQL-shared-compat.x86_64 5.5.31-2.rhel5 installed
perl-DBD-MySQL.x86_64 3.0007-2.el5 installed
percona-xtrabackup.x86_64 2.2.8-5059.el5 installed

Restore command:

innobackupex --apply-log --use-memory=75G /data/mysql_data/3306 --defaults-file=/etc/my.cnf

Error on restore:

InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percent: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: In a MySQL replication slave the last master binlog file
InnoDB: position 0 395505567, file name 81006-db03a-log.001353
InnoDB: Last MySQL binlog file position 0 220080926, file name /data/mysql_logs/3306/81006-db03b-bin.001348
InnoDB: 128 rollback segment(s) are active.
InnoDB: Waiting for purge to start
15:21:07 UTC - xtrabackup got signal 11 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died.
If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
xtrabackup(my_print_stacktrace+0x32) [0xa40e5d]
xtrabackup(handle_fatal_signal+0x335) [0x9158b1]
/lib64/libpthread.so.0 [0x30f2e0eca0]
xtrabackup(THD::get_stmt_da()+0xc) [0x7b2cca]
xtrabackup(THD::raise_condition(unsigned int, char const*, Sql_condition::enum_warning_level, char const*)+0x23) [0x902435]
xtrabackup(push_warning(THD*, Sql_condition::enum_warning_level, unsigned int, char const*)+0x3e) [0x92a072]
xtrabackup(push_warning_printf(THD*, Sql_condition::enum_warning_level, unsigned int, char const*, ...)+0x112) [0x92a186]
xtrabackup(ib_warn_row_too_big(dict_table_t const*)+0xb3) [0x698fbf]
xtrabackup(dict_index_add_to_cache(dict_table_t*, dict_index_t*, unsigned long, unsigned long)+0x170) [0x7117ea]
xtrabackup [0x6862cb]
xtrabackup(dict_load_table(char const*, unsigned long, dict_err_ignore_t)+0x4b2) [0x684936]
xtrabackup(dict_load_table_on_id(unsigned long, dict_err_ignore_t)+0x1a8) [0x686c4c]
xtrabackup [0x713797]
xtrabackup(dict_table_open_on_id(unsigned long, unsigned long, dict_table_op_t)+0x5a) [0x713b4c]
xtrabackup [0x74db93]
xtrabackup [0x74ebbe]
xtrabackup(row_purge_step(que_thr_t*)+0x11a) [0x74ecde]
xtrabackup [0x706645]
xtrabackup [0x7067d0]
xtrabackup(que_run_threads(que_thr_t*)+0x56) [0x7068fe]
xtrabackup(trx_purge(unsigned long, unsigned long, bool)+0x2a1) [0x6c7a09]
xtrabackup [0x6df2ab]
xtrabackup(srv_purge_coordinator_thread+0x1d7) [0x6e034b]
/lib64/libpthread.so.0 [0x30f2e0683d]
/lib64/libc.so.6(clone+0x6d) [0x30f22d526d]

Please report a bug at https://bugs.launchpad.net/percona-xtrabackup

innobackupex: got a fatal error with the following stacktrace:

 at /usr/bin/innobackupex line 2633
        main::apply_log() called at /usr/bin/innobackupex line 1561
innobackupex: Error:
innobackupex: ibbackup failed at /usr/bin/innobackupex line 2633.
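For reference, the LD_PRELOAD workaround that gets the backup (not the restore) past the segfault looks roughly like this. This is a sketch only: the exact library path is an assumption and depends on which MySQL-shared / MySQL-shared-compat package provides libmysqlclient.so on your system.

```shell
# Force a specific client library to be loaded before xtrabackup's own
# dependencies are resolved. The path below is an assumption; point it at
# whichever libmysqlclient.so your MySQL-shared packages installed.
export LD_PRELOAD=/usr/lib64/libmysqlclient.so.18
echo "LD_PRELOAD=$LD_PRELOAD"

# The backup itself is then run as usual, e.g.:
#   innobackupex --defaults-file=/etc/my.cnf /data/mysql_data/3306
```

As noted above, the same trick does not help on --apply-log: the crash happens inside xtrabackup's embedded InnoDB purge thread, so swapping the preloaded client library has no effect on the restore.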