failing assertion during prepare stage: page0cur.c line 928

Bug #1250643 reported by Fernando Ipar
Percona XtraBackup moved to https://jira.percona.com/projects/PXB

Bug Description

XtraBackup sometimes crashes during the --prepare phase with this assertion failure:

InnoDB: Doing recovery: scanned up to log sequence number 10545208875520 (23 %)
131111 3:08:42 InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 131111 3:08:45 InnoDB: Assertion failure in thread 19 in file page0cur.c line 928
InnoDB: We intentionally generate a memory trap.

Tags: i36850 i36945
Fernando Ipar (fipar)
affects: percona-server → percona-xtrabackup
Fernando Ipar (fipar)
tags: added: i36945
Revision history for this message
Nilnandan Joshi (nilnandan-joshi) wrote :

Hi Fernando,

Can you please give us more information about the XtraBackup and MySQL versions? Also, can you provide a detailed test case showing how we can reproduce it? Thanks.

Changed in percona-xtrabackup:
status: New → Incomplete
Revision history for this message
Fernando Ipar (fipar) wrote :

Nil: I'm sorry I did not include that in the original report, not sure why that happened.
I no longer have access to this system so I can't provide a test case.

Thanks.

Revision history for this message
Nilnandan Joshi (nilnandan-joshi) wrote :

There is no repeatable test case or enough detail to proceed.

Changed in percona-xtrabackup:
status: Incomplete → Invalid
Revision history for this message
Rick Pizzi (pizzi) wrote :

This just happened to me using the latest xtrabackup release:

InnoDB: Progress in percent: 0 1 2 3 4 5 6 7 8 9 10 2014-08-20 20:03:44 2b8d5826d940 InnoDB: Assertion failure in thread 47886069324096 in file page0cur.cc line 931
InnoDB: We intentionally generate a memory trap.

 xtrabackup version 2.2.3 based on MySQL server 5.6.17 Linux (x86_64) (revision id: )
Server version: 5.5.33-31.1-log Percona Server (GPL), Release rel31.1, Revision 566

Revision history for this message
Rick Pizzi (pizzi) wrote :

Here is the assertion code:

        if (UNIV_UNLIKELY(!page_cur_rec_insert(&cursor,
                                               buf + origin_offset,
                                               index, offsets, mtr))) {
                /* The redo log record should only have been written
                after the write was successful. */
                ut_error;
        }

Rick Pizzi (pizzi)
Changed in percona-xtrabackup:
status: Invalid → Confirmed
Changed in percona-xtrabackup:
status: Confirmed → New
Revision history for this message
vincent.seguin (vincent-seguin) wrote :

I just ran into the same issue.

This is percona-xtrabackup-2.2.10-1
Backup was generated using 2.2.6
Mysql version is 5.5.36
This happened restoring a very large differential backup, during the differential apply-log phase (base was fine).

Log info is:
....
InnoDB: Doing recovery: scanned up to log sequence number 47015087138304 (67%)
InnoDB: Doing recovery: scanned up to log sequence number 47015092381184 (67%)
InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percent: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 2015-04-16 01:00:22 7fcbe58e0700 InnoDB: Assertion failure in thread 140513706379008 in file page0cur.cc line 931
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
08:00:22 UTC - xtrabackup got signal 6 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
xtrabackup(my_print_stacktrace+0x35) [0x9f5331]
xtrabackup(handle_fatal_signal+0x2bb) [0x7f801b]
/lib64/libpthread.so.0(+0xf710) [0x7fcbe7f6f710]
/lib64/libc.so.6(gsignal+0x35) [0x7fcbe658b925]
/lib64/libc.so.6(abort+0x175) [0x7fcbe658d105]
xtrabackup(page_cur_parse_insert_rec(unsigned long, unsigned char*, unsigned char*, buf_block_t*, dict_index_t*, mtr_t*)+0x662) [0x6eedc5]
xtrabackup() [0x75000d]
xtrabackup(recv_recover_page_func(unsigned long, buf_block_t*)+0x3c5) [0x751231]
xtrabackup(buf_page_io_complete(buf_page_t*)+0x449) [0x6a4dd9]
xtrabackup(fil_aio_wait(unsigned long)+0x14d) [0x68b480]
xtrabackup(io_handler_thread+0x25) [0x725ee6]
/lib64/libpthread.so.0(+0x79d1) [0x7fcbe7f679d1]
/lib64/libc.so.6(clone+0x6d) [0x7fcbe6641b6d]

Please report a bug at https://bugs.launchpad.net/percona-xtrabackup
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 2641
        main::apply_log() called at /usr/bin/innobackupex line 1570
innobackupex: Error:
innobackupex: ibbackup failed at /usr/bin/innobackupex line 2641.

Revision history for this message
Tomek (lessmian) wrote :

Hi. I have exactly the same problem. Backup was generated with percona-xtrabackup 2.1.9, restored with 2.1.8-1.

Log:
InnoDB: Doing recovery: scanned up to log sequence number 518996400128 (58%)
InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percent: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: Doing recovery: scanned up to log sequence number 519001643008 (66%)
InnoDB: Doing recovery: scanned up to log sequence number 519006885888 (75%)
InnoDB: Doing recovery: scanned up to log sequence number 519012128768 (83%)
InnoDB: Doing recovery: scanned up to log sequence number 519017371648 (91%)
InnoDB: Doing recovery: scanned up to log sequence number 519022509911 (99%)
2015-05-26 11:28:24 7f8027043780 InnoDB: Assertion failure in thread 140188387129216 in file page0cur.cc line 931
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
09:28:24 UTC - xtrabackup got signal 6 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Thread pointer: 0x2638eb0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
xtrabackup(my_print_stacktrace+0x2e) [0x9351ce]
xtrabackup(handle_fatal_signal+0x262) [0x8f5e42]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10340) [0x7f8026c25340]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x39) [0x7f8025389cc9]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x148) [0x7f802538d0d8]
xtrabackup(page_cur_parse_insert_rec(unsigned long, unsigned char*, unsigned char*, buf_block_t*, dict_index_t*, mtr_t*)+0x8d5) [0x9a8d05]
xtrabackup() [0x5c2b3c]
xtrabackup(recv_recover_page_func(unsigned long, buf_block_t*)+0x509) [0x5c46a9]
xtrabackup(buf_page_io_complete(buf_page_t*)+0x410) [0x6bdcf0]
xtrabackup(buf_read_page(unsigned long, unsigned long, unsigned long)+0x188) [0x69e6d8]
xtrabackup(buf_page_get_gen(unsigned long, unsigned long, unsigned long, unsigned long, buf_block_t*, unsigned long, char const*, unsigned long, mtr_t*)+0x230) [0x6badf0]
xtrabackup(ibuf_init_at_db_start()+0x2e5) [0x5eb3f5]
xtrabackup(dict_boot()+0x95f) [0x6b2a3f]
xtrabackup(innobase_start_or_create_for_mysql()+0x1604) [0x657ec4]
xtrabackup() [0x5910e9]
xtrabackup(main+0x1528) [0x57bc98]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) ...


Revision history for this message
vincent.seguin (vincent-seguin) wrote :

New occurrence this morning.
Similar trace/logs.

Differential backup taken with xtrabackup 2.2.12 from mysql 5.5.36-log.
Restore attempt with:

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
xtrabackup version 2.2.12 based on MySQL server 5.6.24 Linux (x86_64) (revision id: 8726828)
Server version: 5.5.36-log MySQL Community Server (GPL)

Revision history for this message
Shahriyar Rzayev (rzayev-sehriyar) wrote :

Could not reproduce with XB 2.3.3 and 2.2.12 with MySQL 5.5.47.
What are your my.cnf settings and the exact XB command used?

Revision history for this message
vincent.seguin (vincent-seguin) wrote :

@Shahriyar I don't think you'll be able to reproduce this easily without the actual database.

Out of >800 recent restores for this particular version combination (different databases, with similar my.cnf), I have 3 confirmed "bad" backups that can't be restored with that problem.

What I know so far:
1. I have seen it on mysql 5.5.36 only
2. It doesn't appear to occur on 5.6 (>1600 restores with no issue)
3. I have seen it with xtrabackup 2.2.10 and 2.2.12
4. It only occurs on differential backups
5. The 3 cases I've seen are on large databases (>2T database size)
6. In the most recent case, two successive differentials of the same DB (same base LSN) exhibit the same problem
7. A new full backup, then a differential on the same DB, can be restored just fine (confirming in the next couple of days)

The command that fails is:

innobackupex --apply-log --incremental-dir /glide/mysqld/aztemptesting_3400_giga/temp/restore_2016-02-01_1336089/azprod_3403_db160098_37-2_2016-01-31_1844124 --use-memory=32G /glide/mysqld/aztemptesting_3400_giga/temp/restore_2016-02-01_1336089/azprod_3403_db160098_37_2016-01-29_1844126 2> /glide/backup/log/restore.2016-02-01_1336089.azprod_3403_db160098_37-2_2016-01-31_1844124.log
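For context, the prepare sequence documented for incremental/differential backups with xtrabackup 2.2 applies the base backup with --redo-only first, then merges the incremental, then does a final prepare pass. The directory names below are placeholders for illustration, not the reporter's actual paths:

```shell
# Prepare the base full backup, replaying the redo log but skipping the
# rollback phase (--redo-only) so an incremental can still be merged on top.
innobackupex --apply-log --redo-only /backups/base

# Merge the incremental (differential) delta onto the prepared base.
# The page0cur.cc assertion reported in this thread fires during this step.
innobackupex --apply-log --incremental-dir=/backups/inc1 /backups/base

# Final prepare pass: roll back uncommitted transactions.
innobackupex --apply-log /backups/base
```

Per the Percona documentation, --redo-only is used for the base and for every incremental except the last one being merged.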

Revision history for this message
Sveta Smirnova (svetasmirnova) wrote :

Which filesystem for backup and datadir do you use?

Changed in percona-xtrabackup:
status: New → Incomplete
Revision history for this message
vincent.seguin (vincent-seguin) wrote :

XFS

Changed in percona-xtrabackup:
status: Incomplete → New
Revision history for this message
Shahriyar Rzayev (rzayev-sehriyar) wrote :

Percona now uses JIRA for bug reports so this bug report is migrated to: https://jira.percona.com/browse/PXB-1262
