Restored data leading to infinite "signal 6" crash

Bug #776914 reported by Shlomi Noach on 2011-05-04

Bug Description

I have a dataset that consistently cannot be backed up and restored with XtraBackup.
Backing up with XtraBackup on the master server works, and the prepare (apply-log) step works. But when the meant-to-be slave is started, it immediately crashes with signal 6, in an infinite restart loop.

This happens on this dataset across several MySQL versions and several XtraBackup versions. Most recently:
MySQL is: Percona XtraDB 1.0.8-11.2 (MySQL version 5.1.47-log) - 64bit linux, identical on both machines
OS is Ubuntu Server 10.04 Lucid (64bit)

Tables use the Barracuda file format with ROW_FORMAT=COMPRESSED (KEY_BLOCK_SIZE=8 for all)
Attached error log; see infinite crash message.
XtraBackup is 1.6-245.lucid, identical on both machines

Shlomi Noach (shlomi-noach) wrote :
Changed in percona-xtrabackup:
importance: Undecided → High
assignee: nobody → Valentine Gostev (core-longbow)
Valentine Gostev (longbow) wrote :

Hi Shlomi,

Could you please post the steps we can follow to reproduce this issue?
All of the xtrabackup commands you ran, and how you restored the backup, would be helpful here.

Changed in percona-xtrabackup:
status: New → Incomplete
Shlomi Noach (shlomi-noach) wrote :


To back up, I used:

on host00:
$ innobackupex --parallel=3 --throttle=100 --user=temp_backup --password=XXXXX /mnt/host01

I realize --parallel is beta; however, I have been trying XtraBackup on this dataset for the last 12 months, and this also happened in the past without --parallel.

/mnt/host01 is an NFS mount.

To restore, I issued, on host01:
$ innobackupex --apply-log $(pwd)

So the backup and --apply-log were run on two different machines, but with the exact same XtraBackup and MySQL versions.

The --apply-log completed successfully; no sign of problem there.
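A quick way to sanity-check that the prepare step really completed is to look at the xtrabackup_checkpoints file that innobackupex leaves in the backup directory. A minimal sketch, assuming the usual key = value layout of that file; the LSN values below are illustrative, not taken from this report:

```python
import os
import tempfile

def parse_checkpoints(path):
    """Parse key = value pairs from an xtrabackup_checkpoints file."""
    info = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                key, _, value = line.partition("=")
                info[key.strip()] = value.strip()
    return info

# Synthetic example of what the file typically looks like after a
# successful --apply-log run (illustrative values only).
sample = """backup_type = full-prepared
from_lsn = 0
to_lsn = 8299215792744
last_lsn = 8299215792744
"""

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "xtrabackup_checkpoints")
    with open(path, "w") as f:
        f.write(sample)
    info = parse_checkpoints(path)
    # "full-prepared" (rather than "full-backuped") indicates the
    # prepare phase finished.
    assert info["backup_type"] == "full-prepared", "prepare did not complete"
    print(info["backup_type"])
```

Note that a clean prepare, as here, does not rule out problems introduced later, e.g. while copying the prepared files into the server's data directory.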

Changed in percona-xtrabackup:
status: Incomplete → Confirmed
status: Confirmed → New
Stewart Smith (stewart) on 2012-09-04
Changed in percona-xtrabackup:
assignee: Valentine Gostev (longbow) → nobody
Changed in percona-xtrabackup:
importance: High → Undecided
Hitsmetric (evgen-u) wrote :

I have the same situation when restarting Percona Server 5.1.67-rel14.3-log (GPL), 14.3, Revision 506.

What I have in my mysqld.log:

130130 9:48:39 InnoDB: Warning: allocated tablespace 21, old maximum was 9
130130 9:48:39 InnoDB: Assertion failure in thread 140639204140800 in file btr/btr0cur.c line 321
InnoDB: Failing assertion: btr_page_get_prev(get_block->frame, mtr) == page_get_page_no(page)
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to .
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to

InnoDB: about forcing recovery.
15:48:39 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at

It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 16593237 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x80000
You may download the Percona Server operations manual by visiting
. You may find information
in the manual which will help you identify the cause of the crash.
130130 09:48:39 mysqld_safe Number of processes running now: 0
130130 09:48:39 mysqld_safe mysqld restarted
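The memory estimate in the crash log comes from the formula key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads. A quick sketch of the arithmetic, using illustrative values rather than this server's actual configuration:

```python
# Illustrative buffer sizes only, not taken from the crashed server's my.cnf.
key_buffer_size  = 512 * 1024 * 1024   # 512 MiB
read_buffer_size = 2 * 1024 * 1024     # 2 MiB (per thread)
sort_buffer_size = 8 * 1024 * 1024     # 8 MiB (per thread)
max_threads      = 500

# Worst-case estimate: every thread allocates its per-thread buffers at once.
total_bytes = key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads
print(total_bytes // 1024, "K bytes")  # same units as the crash log
```

The figure is an upper bound, which is why the log hedges with "could use up to"; an unusually large value usually means a per-thread buffer or max_connections is set higher than intended.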

I have opened a topic with more information about my environment and this issue.

Alexey Kopytov (akopytov) wrote :

I see multiple errors in the attached log that look like the following:
110503 19:37:44 InnoDB: Error: page 4 log sequence number 8299215792744
InnoDB: is in the future! Current system log sequence number 8298149359628.
InnoDB: Your database may be corrupt or you may have copied the InnoDB
InnoDB: tablespace but not the InnoDB log files. See
InnoDB: for more information.

This indicates an inconsistency between the log files and the data files. The actual assertion failure resulting in the crash may have a similar cause.

Is this problem still repeatable? If so, how is the backup restored? Is there any chance that the data directory on the server contains both new files (i.e. restored from the backup) and old ones (i.e. left over from the previously running server instance)?
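The "log sequence number ... is in the future" warnings can be extracted from an error log mechanically. A small sketch that computes how far each page's LSN is ahead of the system LSN; the regexes are assumptions based on the message format quoted above:

```python
import re

# A page LSN ahead of the redo log's system LSN suggests the data files
# are newer than the log files, e.g. a restore that mixed freshly
# restored files with leftovers from a previous server instance.
PAGE_RE = re.compile(r"page (\d+) log sequence number (\d+)")
SYS_RE = re.compile(r"Current system log sequence number (\d+)")

def lsn_gaps(log_text):
    """Return page_lsn - system_lsn for each warning pair in the log."""
    gaps = []
    page_lsn = None
    for line in log_text.splitlines():
        m = PAGE_RE.search(line)
        if m:
            page_lsn = int(m.group(2))
            continue
        m = SYS_RE.search(line)
        if m and page_lsn is not None:
            gaps.append(page_lsn - int(m.group(1)))
            page_lsn = None
    return gaps

# The two lines quoted earlier in this thread:
sample = """110503 19:37:44 InnoDB: Error: page 4 log sequence number 8299215792744
InnoDB: is in the future! Current system log sequence number 8298149359628.
"""
print(lsn_gaps(sample))  # positive gap = page is ahead of the redo log
```

For the log excerpt in this thread, the gap is roughly a billion LSN units, which is far too large to be explained by normal crash recovery and points at stale or mismatched files in the data directory.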

Changed in percona-xtrabackup:
status: New → Incomplete
Launchpad Janitor (janitor) wrote :

[Expired for Percona XtraBackup because there has been no activity for 60 days.]

Changed in percona-xtrabackup:
status: Incomplete → Expired

Percona now uses JIRA for bug reports, so this bug report has been migrated to:
