Restored data leading to infinite "signal 6" crash

Bug #776914 reported by Shlomi Noach
This bug affects 3 people
Affects: Percona XtraBackup (moved to https://jira.percona.com/projects/PXB)
Status: Expired
Importance: Undecided
Assigned to: Unassigned

Bug Description

I have a dataset that consistently cannot be backed up and restored with XtraBackup.
Backing it up with XtraBackup on the master server works, and the prepare (--apply-log) step works. But when the meant-to-be slave is started, mysqld immediately crashes with signal 6, over and over.

This has happened to me with this dataset across several MySQL versions and several XtraBackup versions. Most recently:
MySQL: Percona XtraDB 1.0.8-11.2 (MySQL version 5.1.47-log), 64-bit Linux, identical on both machines
OS: Ubuntu Server 10.04 Lucid (64-bit)

Tables use the Barracuda file format with COMPRESSED row format (KEY_BLOCK_SIZE=8 for all).
Error log attached; see the endlessly repeating crash message.
XtraBackup: 1.6-245.lucid, identical on both machines
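
For reference, a table in the layout described above would be created roughly like this; a minimal sketch with a hypothetical table name and columns (only the Barracuda / COMPRESSED / KEY_BLOCK_SIZE=8 settings come from this report):

$ mysql -e "SET GLOBAL innodb_file_format = 'Barracuda'; SET GLOBAL innodb_file_per_table = 1;"
$ mysql test -e "CREATE TABLE example_t (
      id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
      payload VARCHAR(255)
  ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;"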

Revision history for this message
Shlomi Noach (shlomi-noach) wrote :
Changed in percona-xtrabackup:
importance: Undecided → High
assignee: nobody → Valentine Gostev (core-longbow)
Revision history for this message
Valentine Gostev (longbow) wrote :

Hi Shlomi,

Could you please post the steps we can follow to reproduce this issue?
All the xtrabackup commands you ran, and how you restored the backup, would be helpful here.

Changed in percona-xtrabackup:
status: New → Incomplete
Revision history for this message
Shlomi Noach (shlomi-noach) wrote :

Sure.

To back up, I used:

on host00:
$ innobackupex --parallel=3 --throttle=100 --user=temp_backup --password=XXXXX /mnt/host01

I realize --parallel is beta; I've been trying XtraBackup on this dataset for the last 12 months, and the same crash has happened in the past without --parallel as well.

/mnt/host01 is an NFS mount.

To restore, I issued, on host01:
$ innobackupex --apply-log $(pwd)

So the backup and the --apply-log are done on two different machines, but with the exact same XtraBackup version and the same MySQL version.

The --apply-log step completed successfully; there was no sign of a problem there.
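
For reference, the remaining step after --apply-log is the standard innobackupex copy-back into an empty datadir; a minimal sketch, assuming the default /var/lib/mysql datadir (the timestamped directory name below is a placeholder):

$ /etc/init.d/mysql stop
$ mv /var/lib/mysql /var/lib/mysql.old        # keep the old datadir aside so no stale files remain
$ mkdir /var/lib/mysql
$ innobackupex --copy-back /mnt/host01/2011-05-03_19-00-00   # placeholder backup directory
$ chown -R mysql:mysql /var/lib/mysql
$ /etc/init.d/mysql start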

Changed in percona-xtrabackup:
status: Incomplete → Confirmed
status: Confirmed → New
Stewart Smith (stewart)
Changed in percona-xtrabackup:
assignee: Valentine Gostev (longbow) → nobody
Changed in percona-xtrabackup:
importance: High → Undecided
Revision history for this message
Hitsmetric (evgen-u) wrote :

I have the same situation when restarting Percona Server 5.1.67-rel14.3-log (GPL), 14.3, Revision 506.

What I have in my mysqld.log:

130130 9:48:39 InnoDB: Warning: allocated tablespace 21, old maximum was 9
130130 9:48:39 InnoDB: Assertion failure in thread 140639204140800 in file btr/btr0cur.c line 321
InnoDB: Failing assertion: btr_page_get_prev(get_block->frame, mtr) == page_get_page_no(page)
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to .
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to

InnoDB: about forcing recovery.
15:48:39 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at

key_buffer_size=209715200
read_buffer_size=8388608
max_used_connections=0
max_threads=400
thread_count=0
connection_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 16593237 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x80000
/usr/sbin/mysqld(my_print_stacktrace+0x35)[0x8ac755]
/usr/sbin/mysqld(handle_fatal_signal+0x40b)[0x6a862b]
/lib64/libpthread.so.0(+0xf500)[0x7feac550e500]
/lib64/libc.so.6(gsignal+0x35)[0x7feac46be8a5]
/lib64/libc.so.6(abort+0x175)[0x7feac46c0085]
/usr/sbin/mysqld[0x7ed8bd]
/usr/sbin/mysqld[0x7f2ad0]
/usr/sbin/mysqld[0x7a0c68]
/usr/sbin/mysqld[0x79f502]
/usr/sbin/mysqld[0x79fa16]
/usr/sbin/mysqld[0x7a0558]
/usr/sbin/mysqld[0x787680]
/usr/sbin/mysqld[0x7bf3f6]
/usr/sbin/mysqld[0x7b727b]
/lib64/libpthread.so.0(+0x7851)[0x7feac5506851]
/lib64/libc.so.6(clone+0x6d)[0x7feac477411d]
You may download the Percona Server operations manual by visiting
. You may find information
in the manual which will help you identify the cause of the crash.
130130 09:48:39 mysqld_safe Number of processes running now: 0
130130 09:48:39 mysqld_safe mysqld restarted

I have opened a topic at http://www.perconaforum.com/index.php?t=msg&goto=10374
with more information about my environment and the issue.

Revision history for this message
Alexey Kopytov (akopytov) wrote :

I see multiple errors in the attached log that look like the following:
"
110503 19:37:44 InnoDB: Error: page 4 log sequence number 8299215792744
InnoDB: is in the future! Current system log sequence number 8298149359628.
InnoDB: Your database may be corrupt or you may have copied the InnoDB
InnoDB: tablespace but not the InnoDB log files. See
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
InnoDB: for more information.
"

This indicates an inconsistency between the log files and the data files. The actual assertion failure that results in the crash may be of a similar nature.

Is this problem still repeatable? If so, how is the backup restored? Is there any chance that the data directory on the server contains both new files (i.e. restored from the backup) and old ones (i.e. left over from the previously running server instance)?
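
One quick way to check for that kind of mix is to compare modification times of the InnoDB files in the datadir: freshly restored ibdata/.ibd files alongside stale ib_logfile* would produce exactly the "log sequence number ... is in the future" errors quoted above. A minimal sketch, assuming the default /var/lib/mysql datadir:

$ ls -l --time-style=full-iso /var/lib/mysql/ibdata* /var/lib/mysql/ib_logfile*
$ find /var/lib/mysql -name '*.ibd' -newer /var/lib/mysql/ib_logfile0 | head   # .ibd files copied in after the redo logs were last written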

Changed in percona-xtrabackup:
status: New → Incomplete
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for Percona XtraBackup because there has been no activity for 60 days.]

Changed in percona-xtrabackup:
status: Incomplete → Expired
Revision history for this message
Shahriyar Rzayev (rzayev-sehriyar) wrote :

Percona now uses JIRA for bug reports so this bug report is migrated to: https://jira.percona.com/browse/PXB-1121
