Xtrabackup 2.4 fails to backup large databases on 32bit platforms

Bug #1602537 reported by tim
This bug affects 2 people

Affects: Percona XtraBackup (moved to https://jira.percona.com/projects/PXB)
Status tracked in: 2.4
2.4 series: Fix Released, Importance: High, Assigned to: Vasily Nemkov

Bug Description

I use Percona XtraBackup 2.4 on Ubuntu 12.04 32-bit, with MySQL 5.5 and multiple tablespaces for InnoDB.

The MySQL server is still up and running; I'm just not able to take a backup with innobackupex.

A simple command:
sudo innobackupex --user=xxxx --password=xxx /Backup

gives the following error. Please check and help me.

160713 02:48:57 [01] Copying ./shard1068/DB_TEXT_NEW.ibd to /home/tim/DataBk/2016-07-13_02-47-26/shard1068/DB_TEXT_NEW.ibd
160713 02:48:58 >> log scanned up to (128955499158)
160713 02:48:59 >> log scanned up to (128955499158)
160713 02:49:00 >> log scanned up to (128955499158)
160713 02:49:01 >> log scanned up to (128955499158)
160713 02:49:02 >> log scanned up to (128955499158)
160713 02:49:03 >> log scanned up to (128955499158)
160713 02:49:04 >> log scanned up to (128955499158)
160713 02:49:05 >> log scanned up to (128955499158)
160713 02:49:06 >> log scanned up to (128955499158)
160713 02:49:07 >> log scanned up to (128955499158)
160713 02:49:08 >> log scanned up to (128955499158)
160713 02:49:09 >> log scanned up to (128955499158)
160713 02:49:10 >> log scanned up to (128955499158)
160713 02:49:11 >> log scanned up to (128955499158)
160713 02:49:12 >> log scanned up to (128955499158)
160713 02:49:13 >> log scanned up to (128955499158)
160713 02:49:14 >> log scanned up to (128955499158)
160713 02:49:15 >> log scanned up to (128955499158)
160713 02:49:16 >> log scanned up to (128955499158)
160713 02:49:17 >> log scanned up to (128955499158)
160713 02:49:18 >> log scanned up to (128955499158)
160713 02:49:19 >> log scanned up to (128955499158)
160713 02:49:20 >> log scanned up to (128955499158)
160713 02:49:21 >> log scanned up to (128955499158)
160713 02:49:22 >> log scanned up to (128955499158)
160713 02:49:23 >> log scanned up to (128955499158)
160713 02:49:24 >> log scanned up to (128955499158)
160713 02:49:25 >> log scanned up to (128955499158)
160713 02:49:26 >> log scanned up to (128955499158)
160713 02:49:27 >> log scanned up to (128955499158)
160713 02:49:28 >> log scanned up to (128955499158)
160713 02:49:29 >> log scanned up to (128955499158)
160713 02:49:30 >> log scanned up to (128955499158)
160713 02:49:31 >> log scanned up to (128955499158)
160713 02:49:32 >> log scanned up to (128955499158)
160713 02:49:33 >> log scanned up to (128955499158)
160713 02:49:34 >> log scanned up to (128955499158)
160713 02:49:35 >> log scanned up to (128955499158)
160713 02:49:36 >> log scanned up to (128955499158)
160713 02:49:37 >> log scanned up to (128955499158)
160713 02:49:38 >> log scanned up to (128955499158)
160713 02:49:39 >> log scanned up to (128955499158)
160713 02:49:40 >> log scanned up to (128955499158)
160713 02:49:41 >> log scanned up to (128955499158)
160713 02:49:42 >> log scanned up to (128955499158)
160713 02:49:43 >> log scanned up to (128955499158)
160713 02:49:44 >> log scanned up to (128955499158)
160713 02:49:45 >> log scanned up to (128955499158)
160713 02:49:46 >> log scanned up to (128955499158)
160713 02:49:47 >> log scanned up to (128955499158)
160713 02:49:48 >> log scanned up to (128955499158)
160713 02:49:49 >> log scanned up to (128955499158)
160713 02:49:50 >> log scanned up to (128955499158)
160713 02:49:51 >> log scanned up to (128955500336)
160713 02:49:52 >> log scanned up to (128955500336)
160713 02:49:53 >> log scanned up to (128955500336)
160713 02:49:54 >> log scanned up to (128955500626)
160713 02:49:55 >> log scanned up to (128955500626)
160713 02:49:56 >> log scanned up to (128955500626)
160713 02:49:57 >> log scanned up to (128955500626)
160713 02:49:58 >> log scanned up to (128955500626)
160713 02:49:59 >> log scanned up to (128955500626)
160713 02:50:00 >> log scanned up to (128955500626)
160713 02:50:01 >> log scanned up to (128955500626)
160713 02:50:02 >> log scanned up to (128955500626)
160713 02:50:03 >> log scanned up to (128955500626)
160713 02:50:04 >> log scanned up to (128955500626)
160713 02:50:05 >> log scanned up to (128955500626)
160713 02:50:06 >> log scanned up to (128955500626)
160713 02:50:07 >> log scanned up to (128955500626)
160713 02:50:08 >> log scanned up to (128955500626)
160713 02:50:09 >> log scanned up to (128955500626)
160713 02:50:10 >> log scanned up to (128955500626)
160713 02:50:11 >> log scanned up to (128955500626)
160713 02:50:12 >> log scanned up to (128955500626)
160713 02:50:13 >> log scanned up to (128955500626)
160713 02:50:14 >> log scanned up to (128955500626)
160713 02:50:15 >> log scanned up to (128955500626)
160713 02:50:16 >> log scanned up to (128955500626)
160713 02:50:17 >> log scanned up to (128955500626)
160713 02:50:18 >> log scanned up to (128955500626)
160713 02:50:19 >> log scanned up to (128955500626)
160713 02:50:20 >> log scanned up to (128955500626)
160713 02:50:21 >> log scanned up to (128955500626)
160713 02:50:22 >> log scanned up to (128955500626)
160713 02:50:23 >> log scanned up to (128955500626)
160713 02:50:24 >> log scanned up to (128955500626)
160713 02:50:25 >> log scanned up to (128955500626)
160713 02:50:26 >> log scanned up to (128955500626)
160713 02:50:27 >> log scanned up to (128955500626)
160713 02:50:28 >> log scanned up to (128955500626)
160713 02:50:29 >> log scanned up to (128955500626)
160713 02:50:30 >> log scanned up to (128955500626)
160713 02:50:31 >> log scanned up to (128955500626)
160713 02:50:32 >> log scanned up to (128955500626)
160713 02:50:33 >> log scanned up to (128955500626)
160713 02:50:34 >> log scanned up to (128955500626)
160713 02:50:35 >> log scanned up to (128955500626)
160713 02:50:36 >> log scanned up to (128955500626)
160713 02:50:37 >> log scanned up to (128955500626)
160713 02:50:38 >> log scanned up to (128955500626)
160713 02:50:39 >> log scanned up to (128955500626)
160713 02:50:40 >> log scanned up to (128955500626)
160713 02:50:41 >> log scanned up to (128955500626)
160713 02:50:42 >> log scanned up to (128955500626)
160713 02:50:43 >> log scanned up to (128955500626)
2016-07-13 02:50:44 0x908fab40 InnoDB: Assertion failure in thread 2425334592 in file os0file.cc line 1639
InnoDB: Failing assertion: offset > 0
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
06:50:44 UTC - xtrabackup got signal 6 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
innobackupex(my_print_stacktrace+0x33)[0x881efd3]
innobackupex(handle_fatal_signal+0x262)[0x87b4d92]
[0xb76f3400]
[0xb76f3424]
/lib/i386-linux-gnu/libc.so.6(gsignal+0x4f)[0xb73371df]
/lib/i386-linux-gnu/libc.so.6(abort+0x175)[0xb733a825]
innobackupex[0x82f9f0f]
innobackupex[0x85120f8]
innobackupex[0x8512ccc]
innobackupex(_Z15xb_fil_cur_readP12xb_fil_cur_t+0x2bb)[0x833192b]
innobackupex[0x8321978]
innobackupex[0x832574d]
/lib/i386-linux-gnu/libpthread.so.0(+0x6d4c)[0xb76ccd4c]
/lib/i386-linux-gnu/libc.so.6(clone+0x5e)[0xb73f8bae]

Please report a bug at https://bugs.launchpad.net/percona-xtrabackup

Iavor Stoev (e-admin-team) wrote :

Hello,

We experienced the same issue with xtrabackup version 2.4.4 & Percona Server 5.7.13.
The issue occurs when there is a large table (more than 20 GB) and the xtrabackup binaries are 32-bit. Our system setup is based on Debian multiarch, and when we deployed 64-bit xtrabackup binaries the problem was resolved.

With xtrabackup version 2.3.5 & Percona Server 5.6.31 there is no issue
with the same data set, with either 32-bit or 64-bit xtrabackup binaries.

I hope this info is useful for fixing the issue,
and I'm at your disposal if you need any additional debug info.

Regards

Sergei Glushchenko (sergei.glushchenko) wrote :

Hi Iavor,

What exactly does the error look like in your case?

Iavor Stoev (e-admin-team) wrote :

Hello Sergei,

This is the error that we get with the 32bit xtrabackup binary:

Using server version 5.7.13-percona-sure1-log
/usr/bin/xtrabackup version 2.4.4 based on MySQL server 5.7.13 Linux (i686) (revision id: df58cf2)

...
160818 22:04:55 >> log scanned up to (58554133693)
160818 22:04:56 >> log scanned up to (58554133693)
160818 22:04:57 >> log scanned up to (58554133693)
2016-08-18 22:04:57 0xe6d4cb40 InnoDB: Assertion failure in thread 3872705344 in file os0file.cc line 1684
InnoDB: Failing assertion: offset > 0
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
14:04:57 UTC - xtrabackup got signal 6 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
/usr/bin/xtrabackup(my_print_stacktrace+0x28)[0x88693c8]
/usr/bin/xtrabackup(handle_fatal_signal+0x224)[0x86735b4]
linux-gate.so.1(__kernel_sigreturn+0x0)[0xeb557410]
linux-gate.so.1(__kernel_vsyscall+0x10)[0xeb557440]
/lib/i386-linux-gnu/i686/cmov/libc.so.6(gsignal+0x47)[0xeb119367]
/lib/i386-linux-gnu/i686/cmov/libc.so.6(abort+0x143)[0xeb11aa23]
/usr/bin/xtrabackup[0x8311d99]
/usr/bin/xtrabackup[0x8606583]
/usr/bin/xtrabackup[0x8606cdb]
/usr/bin/xtrabackup[0x8606e90]
/usr/bin/xtrabackup[0x8607036]
/usr/bin/xtrabackup(_Z17os_file_read_funcR9IORequestiPvym+0x21)[0x8607531]
/usr/bin/xtrabackup(_Z15xb_fil_cur_readP12xb_fil_cur_t+0x286)[0x83486d6]
/usr/bin/xtrabackup[0x8336fe2]
/usr/bin/xtrabackup[0x833e6c8]
/lib/i386-linux-gnu/i686/cmov/libpthread.so.0(+0x6efb)[0xeb527efb]
/lib/i386-linux-gnu/i686/cmov/libc.so.6(clone+0x5e)[0xeb1d6ede]

Please report a bug at https://bugs.launchpad.net/percona-xtrabackup

Shahriyar Rzayev (rzayev-sehriyar) wrote :

Could not reproduce with Percona-Server-5.5.51-rel38.1-Linux.i686.ssl100.tar.gz and XB 2.4.4 on Ubuntu 12.04 32-bit. XB was compiled from source.

sh@ubuntu-32bit:~$ sudo /usr/local/xtrabackup/bin/innobackupex --version
innobackupex version 2.4.4 Linux (i686) (revision id: df58cf2)

There are 2 tables using innodb_file_per_table and 2 tables in the system tablespace.

No issues when running the command:

sudo /usr/local/xtrabackup/bin/innobackupex --defaults-file=/home/sh/sandboxes/msb_5_5_51/my.sandbox.cnf --user=root --password=msandbox --port=5551 --socket=/tmp/mysql_sandbox5551.sock /tmp/backup_dir/

Iavor Stoev (e-admin-team) wrote :

I suppose the issue is related to Percona Server 5.7.
Could you try to reproduce it with that version?

Shahriyar Rzayev (rzayev-sehriyar) wrote :

Again, no issues with Percona-Server-5.7.14-7-Linux.i686.ssl100.tar.gz and XtraBackup 2.4.4 compiled from source on Ubuntu 12.04 32-bit:

sudo /usr/local/xtrabackup/bin/innobackupex --defaults-file=/home/sh/sandboxes/msb_5_7_14/my.sandbox.cnf --user=root --password=msandbox --port=5714 --socket=/tmp/mysql_sandbox5714.sock /tmp/backup_dir/

Could you please share your configs? Or maybe some specific conditions that you think are important for reproducing the issue?

Sergei Glushchenko (sergei.glushchenko) wrote :

Shako,

I think the condition here is tablespace size > 4G. Did you check that?
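For reference, 4 GiB (2^32 bytes) is the first file offset that no longer fits in a 32-bit unsigned value, which would explain why smaller tablespaces back up fine. A tiny standalone check, illustrative only and not taken from the XtraBackup sources:

// threshold_check.cc -- illustrative only, not XtraBackup code
#include <cstdint>
#include <cstdio>

int main() {
    const uint64_t four_gib = 4ULL * 1024 * 1024 * 1024;           // 2^32 bytes
    std::printf("largest 32-bit offset: %u bytes\n", UINT32_MAX);  // 4294967295
    std::printf("offset %llu truncates to %u\n",
                (unsigned long long) four_gib, (uint32_t) four_gib); // prints 0
    return 0;
}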

Shahriyar Rzayev (rzayev-sehriyar) wrote :

sh@ubuntu-32bit:~/sandboxes/msb_5_7_14/data/sbtest$ du -hs *
4.0K db.opt
12K sbtest1.frm
4.1G sbtest1.ibd

sudo /usr/local/xtrabackup/bin/innobackupex --defaults-file=/home/sh/sandboxes/msb_5_7_14/my.sandbox.cnf --user=root --password=msandbox --port=5714 --socket=/tmp/mysql_sandbox5714.sock /tmp/backup_dir/

No issues.

Iavor Stoev (e-admin-team) wrote :

Hello Shahriyar,

I've downgraded the xtrabackup package to 32-bit and I'm able to reproduce the issue right away.

It fails with the following error:

160831 11:25:58 >> log scanned up to (1467219360314)
2016-08-31 11:25:58 0xea017b40 InnoDB: Assertion failure in thread 3925965632 in file os0file.cc line 1684
InnoDB: Failing assertion: offset > 0
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
15:25:58 UTC - xtrabackup got signal 6 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
/usr/bin/xtrabackup(my_print_stacktrace+0x28)[0x88693c8]
/usr/bin/xtrabackup(handle_fatal_signal+0x224)[0x86735b4]
linux-gate.so.1(__kernel_sigreturn+0x0)[0xeea20410]
linux-gate.so.1(__kernel_vsyscall+0x10)[0xeea20440]
/lib/i386-linux-gnu/i686/cmov/libc.so.6(gsignal+0x47)[0xee5e1367]
/lib/i386-linux-gnu/i686/cmov/libc.so.6(abort+0x143)[0xee5e2a23]
/usr/bin/xtrabackup[0x8311d99]
/usr/bin/xtrabackup[0x8606583]
/usr/bin/xtrabackup[0x8606cdb]
/usr/bin/xtrabackup[0x8606e90]
/usr/bin/xtrabackup[0x8607036]
/usr/bin/xtrabackup(_Z17os_file_read_funcR9IORequestiPvym+0x21)[0x8607531]
/usr/bin/xtrabackup(_Z15xb_fil_cur_readP12xb_fil_cur_t+0x286)[0x83486d6]
/usr/bin/xtrabackup[0x8336fe2]
/usr/bin/xtrabackup[0x833e6c8]
/lib/i386-linux-gnu/i686/cmov/libpthread.so.0(+0x6efb)[0xee9efefb]
/lib/i386-linux-gnu/i686/cmov/libc.so.6(clone+0x5e)[0xee69eede]

Please report a bug at https://bugs.launchpad.net/percona-xtrabackup

The config of the mysqld is:

# The MySQL server
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-external-locking
default-storage-engine = myisam
key_buffer_size = 256M
max_allowed_packet = 32M
table_open_cache = 1024
open_files_limit = 10000
sort_buffer_size = 1M
read_buffer_size = 1M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
thread_stack = 262144
thread_cache_size = 8
query_cache_type = 1
query_cache_size= 256M
query_cache_limit= 2M
table_definition_cache = 2048
temp-pool = 0
max_heap_table_size = 134217728
tmp_table_size = 134217728
max_connections = 300
max_user_connections = 40
max_connect_errors = 100
performance_schema = off
slow_query_log = 1
long-query-time = 0.1
log_error_verbosity=3
secure_file_priv=NULL
sql_mode=NO_ENGINE_SUBSTITUTION
server-id = 1
tmpdir = /usr/local/lib/mysql5/tmp/
innodb_file_per_table = 1
userstat = 1
ft_min_word_len = 3
event_scheduler=OFF

We invoke xtrabackup with the following options...


Iavor Stoev (e-admin-team) wrote :

PS:
The MySQL server in question has 2 big InnoDB tables: one 32 GB and one 24 GB.

Jericho Rivera (jericho-rivera) wrote :

170306 22:37:26 >> log scanned up to (44650360123)
170306 22:37:27 >> log scanned up to (44651559526)
170306 22:37:28 >> log scanned up to (44652487366)
2017-03-06 22:37:29 0xf37fab40 InnoDB: Assertion failure in thread 4085230400 in file os0file.cc line 1684
InnoDB: Failing assertion: offset > 0
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
22:37:29 UTC - xtrabackup got signal 6 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
xtrabackup(my_print_stacktrace+0x33)[0x88e1ac3]
xtrabackup(handle_fatal_signal+0x262)[0x87681c2]
[0xf77a0bc0]
[0xf77a0be9]
/lib/i386-linux-gnu/libc.so.6(gsignal+0x4f)[0xf71e8e1f]
/lib/i386-linux-gnu/libc.so.6(abort+0x175)[0xf71ec465]
xtrabackup[0x82f24bf]
xtrabackup[0x83d8ba6]
xtrabackup[0x83d972c]
xtrabackup(_Z15xb_fil_cur_readP12xb_fil_cur_t+0x2bb)[0x83381db]
xtrabackup[0x83258d8]
xtrabackup[0x832755d]
/lib/i386-linux-gnu/libpthread.so.0(+0x6d4c)[0xf7784d4c]
/lib/i386-linux-gnu/libc.so.6(clone+0x5e)[0xf72a9e8e]

Please report a bug at https://bugs.launchpad.net/percona-xtrabackup

I tested with PXB 2.4.6-2.precise and MySQL server 5.5.54, all 32-bit binaries.

Changed in percona-xtrabackup:
status: New → Confirmed
Sergei Glushchenko (sergei.glushchenko) wrote :

The issue is probably the offset argument of "os_file_io_complete": it has type ulint, which is only 32 bits wide on 32-bit platforms. It should be replaced with "os_offset_t".
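To illustrate the failure mode, here is a minimal standalone sketch; the function names are hypothetical stand-ins, not the actual XtraBackup source. On an i686 build ulint is a 32-bit unsigned long, so a read offset of exactly 4 GiB truncates to 0 on the way into the completion routine, and its "offset > 0" assertion aborts, matching the backtraces above:

// truncation_sketch.cc -- hypothetical stand-in, not XtraBackup source
#include <cassert>
#include <cstdint>
#include <cstdio>

typedef uint32_t ulint32;      // what ulint amounts to on an i686 build
typedef uint64_t os_offset_t;  // InnoDB's 64-bit file-offset type

static void io_complete_narrow(ulint32 offset) {    // narrow parameter, as in the bug
    assert(offset > 0);        // fires: 4 GiB truncated to 0
}

static void io_complete_fixed(os_offset_t offset) { // the suggested fix
    assert(offset > 0);        // holds: 4 GiB is representable
}

int main() {
    // Copying a tablespace larger than 4 GiB eventually reads at this offset.
    os_offset_t offset = 4ULL * 1024 * 1024 * 1024;                 // 2^32 bytes

    std::printf("64-bit offset          : %llu\n", (unsigned long long) offset);
    std::printf("after 32-bit truncation: %u\n", (uint32_t) offset); // prints 0

    io_complete_fixed(offset);            // OK
    io_complete_narrow((ulint32) offset); // assertion failure, like the crash above
    return 0;
}

Widening the parameter to the 64-bit os_offset_t keeps the offset intact regardless of the platform word size.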

summary: - Fail to simple full backup with innobackupex
+ Xtrabackup 2.4 fails to backup large databases on 32bit platforms
Shahriyar Rzayev (rzayev-sehriyar) wrote :

Percona now uses JIRA for bug reports, so this bug report has been migrated to: https://jira.percona.com/browse/PXB-481
