Yes, it is a Galera node. The command I'm running is:

innobackupex --galera-info --parallel=4 --no-version-check --no-timestamp --user=backup --password=xxx --stream=xbstream ./ | nice pigz -Rc > backup_path

Looking at bug 1512281, I see that Percona Server has the "log_bin_basename" server variable, which MariaDB does not. That appears to be the source of my original problem. Would it be possible to read the binlog directory from "log-bin" in my.cnf when "log_bin_basename" does not exist, instead of just assuming datadir? I will be submitting an issue for MariaDB to add "log_bin_basename". I can post my my.cnf if you still want to see it, but I think we have this figured out now.

I have three servers in a Galera cluster:
- db3 has binlogs disabled so I can get nightly backups.
- db2 has binlogs in the other directory via "log-bin=/seq/mysql", but I think the problem there is as stated above.
- db1 has binlogs enabled in the datadir via "log-bin=mysql-bin". I changed the binlog path to see if it would back up successfully with binlogs in the datadir. It also fails, but it looks like the same problem as in bug 1512281.

Here is an excerpt from the log when it fails:

151203 01:32:47 Finished backing up non-InnoDB tables and files
151203 01:32:47 [00] Streaming xtrabackup_galera_info
151203 01:32:47 [00] ...done
151203 01:32:47 [00] Streaming /var/lib/mysql//mysql-bin.000090 to
151203 01:32:47 [00] ...done
*** glibc detected *** innobackupex: double free or corruption (fasttop): 0x000000000145a1c0 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x75e66)[0x7f2de154be66]
innobackupex(_Z25write_current_binlog_fileP8st_mysql+0x258)[0x59d678]
innobackupex(_Z12backup_startv+0x177)[0x5a3fb7]
innobackupex(_Z22xtrabackup_backup_funcv+0xc00)[0x58def0]
innobackupex(main+0xaea)[0x591efa]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x7f2de14f4d5d]
innobackupex[0x585e79]
...
07:32:47 UTC - xtrabackup got signal 6 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
stack_bottom = 0 thread_stack 0x10000
innobackupex(my_print_stacktrace+0x2e) [0x8c502e]
innobackupex(handle_fatal_signal+0x273) [0x736dc3]
/lib64/libpthread.so.0(+0xf710) [0x7f2de2eec710]
/lib64/libc.so.6(gsignal+0x35) [0x7f2de1508625]
/lib64/libc.so.6(abort+0x175) [0x7f2de1509e05]
/lib64/libc.so.6(+0x70537) [0x7f2de1546537]
/lib64/libc.so.6(+0x75e66) [0x7f2de154be66]
innobackupex(write_current_binlog_file(st_mysql*)+0x258) [0x59d678]
innobackupex(backup_start()+0x177) [0x5a3fb7]
innobackupex(xtrabackup_backup_func()+0xc00) [0x58def0]
innobackupex(main+0xaea) [0x591efa]
/lib64/libc.so.6(__libc_start_main+0xfd) [0x7f2de14f4d5d]
innobackupex() [0x585e79]

I'm assuming that since this is a double-free issue, it is the same as bug 1512281, not related to my original problem, and will be fixed in the next release.
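For what it's worth, here is a rough Python sketch of the fallback order I'm suggesting for locating the binlog directory. The function name and inputs are illustrative only, not xtrabackup's actual code: prefer "log_bin_basename" when the server exposes it, otherwise fall back to the "log-bin" path from my.cnf, and only assume datadir as a last resort.

```python
import os


def binlog_dir(server_vars, cnf_log_bin, datadir):
    """Illustrative fallback for finding the binlog directory.

    server_vars -- dict of SHOW VARIABLES output
    cnf_log_bin -- the log-bin value from my.cnf, or None if unset
    datadir     -- the server's data directory
    """
    # Percona Server / MySQL 5.6+ expose the resolved path directly.
    basename = server_vars.get("log_bin_basename")
    if basename:
        return os.path.dirname(basename)
    # MariaDB (no log_bin_basename): an absolute log-bin path in my.cnf
    # names the directory, e.g. log-bin=/seq/mysql -> binlogs in /seq.
    if cnf_log_bin and "/" in cnf_log_bin:
        return os.path.dirname(cnf_log_bin)
    # A relative value (log-bin=mysql-bin) or no setting at all:
    # binlogs live in datadir, which is what xtrabackup assumes today.
    return datadir
```

With this, my db2 setup (log-bin=/seq/mysql on MariaDB) would resolve to /seq instead of /var/lib/mysql, and db1 (log-bin=mysql-bin) would still correctly resolve to the datadir.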