Crash in quick_rm_table when optimizing table.

Bug #897258 reported by Dave Juntgen on 2011-11-28
This bug affects 4 people
Affects                                                         Status      Importance  Assigned to  Milestone
Percona Server (moved to https://jira.percona.com/projects/PS)  Incomplete  High        Unassigned
5.5                                                             Incomplete  High        Unassigned

Bug Description

While running:

mysqlcheck -o --user=root -p --all-databases --verbose

MySQL crashed with signal 6.

SERVER:
Linux wc-c1-db2.med-web.com 2.6.35.12.i.core2 #1 SMP Fri Apr 8 10:10:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
mysql Ver 14.14 Distrib 5.1.56, for unknown-linux-gnu (x86_64) using readline 5.1

============================
InnoDB: Error: Waited for 600 secs for hash index ref_count (1) to drop to 0.
index: "eval_datetime" table: "wc_acj/#sql2-1239-12d9"
111128 10:13:01 InnoDB: Assertion failure in thread 140040330925840 in file dict/dict0dict.c line 1821
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
111128 10:13:01 - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=1073741824
read_buffer_size=131072
max_used_connections=4
max_threads=500
threads_connected=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2142013 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x2c9bdaa0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f5dae2dce98 thread_stack 0x40000
/usr/local/mysql/libexec/mysqld(my_print_stacktrace+0x39)[0x87a309]
/usr/local/mysql/libexec/mysqld(handle_segfault+0x34e)[0x5ae1ee]
/lib64/libpthread.so.0(+0xf0f0)[0x7f5dafdf10f0]
/lib64/libc.so.6(gsignal+0x35)[0x7f5daefbb6c5]
/lib64/libc.so.6(abort+0x175)[0x7f5daefbcea5]
/usr/local/mysql/libexec/mysqld[0x7e7d6c]
/usr/local/mysql/libexec/mysqld[0x7ed00a]
/usr/local/mysql/libexec/mysqld[0x774c29]
/usr/local/mysql/libexec/mysqld[0x71af3d]
/usr/local/mysql/libexec/mysqld(_Z15ha_delete_tableP3THDP10handlertonPKcS4_S4_b+0xd4)[0x691994]
/usr/local/mysql/libexec/mysqld(_Z14quick_rm_tableP10handlertonPKcS2_j+0xa9)[0x697719]
/usr/local/mysql/libexec/mysqld(_Z17mysql_alter_tableP3THDPcS1_P24st_ha_create_informationP10TABLE_LISTP10Alter_infojP8st_orderb+0x17b4)[0x69e204]
/usr/local/mysql/libexec/mysqld(_Z20mysql_recreate_tableP3THDP10TABLE_LIST+0x19b)[0x69ff7b]
/usr/local/mysql/libexec/mysqld[0x6a0fec]
/usr/local/mysql/libexec/mysqld(_Z20mysql_optimize_tableP3THDP10TABLE_LISTP15st_ha_check_opt+0x61)[0x6a1ab1]
/usr/local/mysql/libexec/mysqld(_Z21mysql_execute_commandP3THD+0x43fb)[0x5be4ab]
/usr/local/mysql/libexec/mysqld(_Z11mysql_parseP3THDPcjPPKc+0x51b)[0x5c067b]
/usr/local/mysql/libexec/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x9f1)[0x5c1071]
/usr/local/mysql/libexec/mysqld(_Z10do_commandP3THD+0x128)[0x5c1cb8]
/usr/local/mysql/libexec/mysqld(handle_one_connection+0x7b1)[0x5b3ca1]
/lib64/libpthread.so.0(+0x6a3a)[0x7f5dafde8a3a]
/lib64/libc.so.6(clone+0x6d)[0x7f5daf06767d]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x7f5600019a70): is an invalid pointer
Connection ID (thread ID): 4825
Status: NOT_KILLED

Revision history for this message
Lixun Peng (P.Linux) (plx) wrote :

Maybe you can add "err = DB_ERROR;" in the function row_rename_table_for_mysql() of row0mysql.c.

Search for "if (!dict_table_rename_in_cache(table, new_name" in "storage/innodb_plugin/row/row0mysql.c"; you will find this code. Add "err = DB_ERROR;" before "goto funct_exit;":

        if (!dict_table_rename_in_cache(table, new_name,
                        !new_is_tmp)) {
            trx->error_state = DB_SUCCESS;
            trx_general_rollback_for_mysql(trx, NULL);
            trx->error_state = DB_SUCCESS;
            err = DB_ERROR;
            goto funct_exit;
        }

The rollback code (at funct_exit) does not actually execute, because err is not set to DB_ERROR after dict_table_rename_in_cache() fails; adding "err = DB_ERROR;" before "goto funct_exit;" enables the rollback logic at funct_exit.
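
The effect of the missing error code can be shown with a small sketch (a toy model with illustrative names, not the real row_rename_table_for_mysql(); the assumption is only that the funct_exit epilogue rolls back solely when err != DB_SUCCESS):

```c
#include <assert.h>

/* Toy model: the funct_exit epilogue rolls back only when
 * err != DB_SUCCESS, so forgetting to set err after the cache
 * rename fails silently skips the rollback. */
enum { DB_SUCCESS = 0, DB_ERROR = 1 };

static int rollback_ran;

static void trx_rollback(void) { rollback_ran = 1; }

static int rename_table(int cache_rename_ok, int apply_fix) {
    int err = DB_SUCCESS;
    rollback_ran = 0;
    if (!cache_rename_ok) {
        if (apply_fix)
            err = DB_ERROR;   /* the one-line fix suggested above */
        goto funct_exit;
    }
funct_exit:
    if (err != DB_SUCCESS)
        trx_rollback();       /* rollback logic runs only on error */
    return err;
}
```

Without the fix, the failing path returns DB_SUCCESS and the rollback never runs; with it, the rollback executes as intended.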

Revision history for this message
Lixun Peng (P.Linux) (plx) wrote :

Sorry, this bug is not quite the same as the one affecting us.

Revision history for this message
Lixun Peng (P.Linux) (plx) wrote :

I checked "dict_index_remove_from_cache" and "dict_create_index_tree_step". I think this error path is fine; there are no bugs in that code.

If you CREATE INDEX on a table, XtraDB calls "dict_create_index_tree_step" to create the index. If "dict_create_index_tree_step" fails, XtraDB calls dict_index_remove_from_cache() to free the dictionary cache, which includes the built index information. ref_count is the number of blocks in this index tree that have a search (adaptive hash) index built. Only when ref_count = 0 can the index be removed and the dictionary cache freed.

So why did your dict_create_index_tree_step fail? Only this statement can return an error value:
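
The ref_count guard just described can be modeled in a few lines (a toy model only; names and structures are illustrative, not InnoDB's actual dictionary types):

```c
#include <assert.h>

/* Toy model of the adaptive-hash-index reference count: an index can
 * only be removed from the dictionary cache once every page that built
 * hash entries on it has dropped its reference. */
typedef struct {
    unsigned ref_count;   /* pages with AHI entries built on this index */
    int      removed;
} toy_index_t;

/* A page drops its hash entries for this index. */
static void toy_drop_page_hash(toy_index_t *idx) {
    if (idx->ref_count > 0)
        idx->ref_count--;
}

/* Removal succeeds only when no page still references the index;
 * otherwise the caller must keep waiting (and, in the real code,
 * asserts after 600 seconds). */
static int toy_remove_from_cache(toy_index_t *idx) {
    if (idx->ref_count != 0)
        return 0;
    idx->removed = 1;
    return 1;
}
```

If anything keeps ref_count pinned above zero, removal can never succeed, which is exactly the situation the assertion message reports.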

    if (node->page_no == FIL_NULL) {

        return(DB_OUT_OF_FILE_SPACE);
    }

I think your data disk is full.

Revision history for this message
Dave Juntgen (djuntgen) wrote :

Disk is not full.

Revision history for this message
Dave Juntgen (djuntgen) wrote :

I believe the issue revolves around the innodb_lazy_drop_table option. I've turned it off and re-ran the same command; the database is quite large and the optimization is still running without a crash.

As the error points out:

InnoDB: Error: Waited for 600 secs for hash index ref_count (1) to drop to 0.
index: "eval_datetime" table: "wc_acj/#sql2-1239-12d9"
111128 10:13:01 InnoDB: Assertion failure in thread 140040330925840 in file dict/dict0dict.c line 1821

here is the code from dict0dict.c line 1818 - 1822

  /* To avoid a hang here we commit suicide if the
  ref_count doesn't drop to zero in 600 seconds. */
  if (retries >= 60000) {
   ut_error;
  }

So ut_error just generates an assertion; at this point I'm not sure what happens next.
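
The 60000-retry limit lines up with the "600 secs" in the error message if each retry sleeps 10 ms, an assumption consistent with the comment in the quoted dict0dict.c snippet:

```c
#include <assert.h>

/* 60000 retries at an assumed 10 ms sleep per retry gives the
 * 600-second timeout reported in the InnoDB error message. */
enum { SLEEP_MS_PER_RETRY = 10, MAX_RETRIES = 60000 };

static int total_wait_seconds(void) {
    return MAX_RETRIES * SLEEP_MS_PER_RETRY / 1000;
}
```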

Revision history for this message
Lixun Peng (P.Linux) (plx) wrote :

If you set innodb_lazy_drop_table = 1, then on DROP TABLE "fil_delete_tablespace" will call buf_LRU_mark_space_was_deleted() to mark all pages whose tablespace id equals the dropped tablespace's id with "bpage->space_was_being_deleted = TRUE;".

This operation takes "btr_search_latch" in shared mode:

    rw_lock_s_lock(&btr_search_latch);
    chunk = buf_pool->chunks;
    for (i = buf_pool->n_chunks; i--; chunk++) {
        buf_block_t* block = chunk->blocks;
        for (j = chunk->size; j--; block++) {
            if (buf_block_get_state(block)
                != BUF_BLOCK_FILE_PAGE
                || !block->is_hashed
                || buf_page_get_space(&block->page) != id) {
                continue;
            }

            rw_lock_s_unlock(&btr_search_latch);

            rw_lock_x_lock(&block->lock);
            btr_search_drop_page_hash_index(block);
            rw_lock_x_unlock(&block->lock);

            rw_lock_s_lock(&btr_search_latch);
        }
    }
    rw_lock_s_unlock(&btr_search_latch);

And "dict_index_remove_from_cache" in turn calls "btr_search_drop_page_hash_index_on_index()"; that function must take rw_lock_x_lock(&btr_search_latch) to remove the hash index pages so that index->search_info->ref_count can decrement.

So, I think this is the reason.

If this operation blocks at "rw_lock_x_lock(&btr_search_latch);", then ref_count can never drop to zero and the 600-second assertion fires.

Revision history for this message
Patrick Crews (patrick-crews) wrote :

Can you provide a general description of your database:
* Number / composition of tables (anything to get us pointed in the right direction)
* Is the server under any load during the mysqlcheck process? If so, could you describe it (writes, reads, etc)

My initial tests with just random data + mysqlcheck did not indicate any issues, but that just means I'm missing the triggers ; )

Revision history for this message
Dave Juntgen (djuntgen) wrote :

Hi Patrick,

* Lots of data, here is the break down.

- 410 databases.
- 191,460 tables, or about 400 per database. The majority are InnoDB, but there are a handful of MyISAM tables.

* This server is a slave replicating from the master, so activity is low: just the two replication threads.

Stewart Smith (stewart) on 2012-02-07
Changed in percona-server:
importance: Undecided → High
Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

We have a (large) test case for this bug and were able to reproduce it.

Most likely this bug was caused by incomplete removal of buffer pool pages for the tablespace being truncated from the LRU/flush lists, that caused the corresponding adaptive hash index entries to stay too.

MySQL 5.5.23 rewrote the code in question while fixing http://bugs.mysql.com/bug.php?id=64284 (the fix is rev 3768) and we are not able to reproduce it there.

We will re-test with Percona Server 5.5.23 to confirm that it's really fixed.

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

This is supposed to be fixed in upstream 5.5.23, no need to postpone it to .24 now.

Revision history for this message
Stewart Smith (stewart) wrote :

I'm going to set this to Incomplete; it will auto-expire if we don't provide more information, i.e. a reproduction with Percona Server 5.5.23.

Revision history for this message
Roel Van de Paar (roel11) wrote :

Bug resurfaced in Percona-Server-5.5.28-rel29.1-359.Linux.x86_64. New bug report @ bug 1086227.

Revision history for this message
Roel Van de Paar (roel11) wrote :

As this bug was never fully resolved, I will make this a duplicate of the new one.
