innodb main thread locks itself in &dict_operation_lock

Bug #1500176 reported by Rick Pizzi
This bug affects 3 people.

Affects: Percona Server (moved to https://jira.percona.com/projects/PS) | Status: New | Importance: Undecided | Assigned to: Unassigned

Bug Description

Yesterday our master got a severe stall which ended in InnoDB committing suicide with an assertion:

InnoDB: Error: semaphore wait has lasted > 600 seconds
InnoDB: We intentionally crash the server, because it appears to be hung.
2015-09-26 20:54:46 7efda0ca3700 InnoDB: Assertion failure in thread 139627789432576 in file srv0srv.cc line 2128

The InnoDB engine status dump (attached to this bug) shows the entire InnoDB subsystem locked, with the main thread as the blocker, which seems stuck in the state "enforcing dict cache limit".

Looking at the error log, what puzzles me is the following (139627810412288 is the main thread):

--Thread 139627810412288 has waited at srv0srv.cc line 2596 for 241.00 seconds the semaphore:
X-lock (wait_ex) on RW-latch at 0x134c460 '&dict_operation_lock'
a writer (thread id 139627810412288) has reserved it in mode wait exclusive
number of readers 7, waiters flag 1, lock_word: fffffffffffffff9
Last time read locked in file row0ins.cc line 1803
Last time write locked in file /mnt/workspace/percona-server-5.6-redhat-binary/label_exp/centos6-64/rpmbuild/BUILD/percona-server-5.6.25-73.1/storage/innobase/dict/dict0stats.cc line 2385

From the above it seems that the blocker was blocked by itself, which seems quite odd! The thread already owns &dict_operation_lock yet wants to acquire it a second time. That is a self-deadlock.
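
For illustration only (this is not InnoDB code), here is a minimal C++ sketch of the pattern the semaphore output suggests: a thread that already holds the read (S) side of an rw-lock and then asks for the write (X) side of the same lock ends up waiting on itself. POSIX leaves this case undefined, but on Linux/glibc it simply hangs, which matches the picture above.

// Minimal sketch, not InnoDB code: self-deadlock on an rw-lock.
// Build with: g++ -pthread selfdeadlock.cc
#include <pthread.h>
#include <cstdio>

int main() {
    pthread_rwlock_t latch;              // stand-in for dict_operation_lock
    pthread_rwlock_init(&latch, nullptr);

    pthread_rwlock_rdlock(&latch);       // S-lock (the data-dictionary "freeze")
    std::printf("S-lock held; now requesting the X-lock on the same latch\n");

    // POSIX calls this undefined; on glibc the call never returns, i.e. the
    // thread is "blocked by itself", just like the SEMAPHORES output shows.
    pthread_rwlock_wrlock(&latch);

    std::printf("never reached\n");
    pthread_rwlock_unlock(&latch);
    return 0;
}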

Running Percona Server 5.6.25-73.1-log. my.cnf and the error log are attached.

Additional info which may help troubleshooting:
At the time the problem occurred there was an online schema change running and about to complete.
The table being altered was a very large (1 billion rows) partitioned one. It is likely that the OSC tool was about to execute, or was executing, the atomic rename of tables, though this is only speculation, as the operation never completed and is therefore not logged in the binary logs or anywhere else.

Thanks
Rick

Tags: innodb

Rick Pizzi (pizzi) wrote:

Today we retried the online schema change operation, after trimming the partitioned table down to just 3 days' worth of data (approximately 50M rows). The server stalled again in exactly the same way when the OSC was about to complete:

--Thread 140222643808000 has waited at dict0stats.cc line 2385 for 23.000 seconds the semaphore:
X-lock (wait_ex) on RW-latch at 0x134c460 '&dict_operation_lock'
a writer (thread id 140222643808000) has reserved it in mode wait exclusive
number of readers 34, waiters flag 1, lock_word: ffffffffffffffde
Last time read locked in file row0undo.cc line 298
Last time write locked in file /mnt/workspace/percona-server-5.6-redhat-binary/label_exp/centos6-64/rpmbuild/BUILD/percona-server-5.6.25-73.1/storage/innobase/dict/dict0stats.cc line 2385

We suspect that the problem is tied to a conflict between the rows inserted into the current partition of the _new table by the OSC process and those inserted by the INSERT trigger. Everything is smooth until the end, when the row ID copied from the original table approaches the current auto-increment row ID inserted by the trigger.

This time we were able to run pt-pmp:

    647 pthread_cond_wait,os_cond_wait(os0sync.cc:214),os_event_wait_low(os0sync.cc:214),sync_array_wait_event(sync0arr.cc:433),rw_lock_s_lock_spin(sync0rw.cc:550),rw_lock_s_lock_func(sync0rw.ic:426),pfs_rw_lock_s_lock_func(sync0rw.ic:426),row_mysql_freeze_data_dictionary_func(sync0rw.ic:426),row_undo(row0undo.cc:298),row_undo_step(row0undo.cc:298),que_thr_step(que0que.cc:1087),que_run_threads_low(que0que.cc:1087),que_run_threads(que0que.cc:1087),trx_rollback_to_savepoint_low(trx0roll.cc:112),trx_rollback_to_savepoint(trx0roll.cc:154),row_mysql_handle_errors(row0mysql.cc:651),row_lock_table_autoinc_for_mysql(row0mysql.cc:651),ha_innobase::innobase_lock_autoinc(ha_innodb.cc:7072),innobase_set_max_autoinc(ha_innodb.cc:7123),ha_innobase::write_row(ha_innodb.cc:7123),handler::ha_write_row(handler.cc:7679),ha_partition::write_row(ha_partition.cc:4010),handler::ha_write_row(handler.cc:7679),write_record(sql_insert.cc:1680),mysql_insert(sql_insert.cc:1085),mysql_execute_command(sql_parse.cc:3892),audit_mysql_execute_command,sp_instr_stmt::exec_core(sp_instr.cc:937),sp_lex_instr::reset_lex_and_exec_core(sp_instr.cc:391),sp_lex_instr::validate_lex_and_execute_core(sp_instr.cc:642),sp_instr_stmt::execute(sp_instr.cc:825),sp_head::execute(sp_head.cc:648),sp_head::execute_trigger(sp_head.cc:895),Table_triggers_list::process_triggers(sql_trigger.cc:2184),write_record(sql_insert.cc:1962),mysql_insert(sql_insert.cc:1085),mysql_execute_command(sql_parse.cc:3892),audit_mysql_execute_command,mysql_parse(sql_parse.cc:6950),dispatch_command(sql_parse.cc:1442),do_handle_one_connection(sql_connect.cc:1532),handle_one_connection(sql_connect.cc:1443),start_thread(libpthread.so.0),clone(libc.so.6)
    151 poll(libc.so.6),vio_io_wait(viosocket.c:1136),vio_socket_io_wait(viosocket.c:104),vio_read(viosocket.c:159),net_read_raw_loop(net_serv.cc:669),net_read_packet_header(net_serv.cc:751),net_read_packet(net_serv.cc:751),my_net_read(net_serv.cc:751),do_command(sql_parse.cc:987),do_handle_one_connection(sql_connect.cc:1532),handle_one_connection(sql_conn...

Rick Pizzi (pizzi) wrote:

The CREATE TABLE statement of the table we were trying to alter:

Create Table: CREATE TABLE `FLIGHT_REQUEST_NEW` (
  `ID_REQUEST` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Id ricerca',
  `REQUEST_DATE` datetime NOT NULL DEFAULT '0000-00-00 00:00:00' COMMENT 'Search Date',
  `DEPARTURE_AIRPORT` char(3) NOT NULL COMMENT 'Departure Airport',
  `ARRIVAL_AIRPORT` char(3) NOT NULL COMMENT 'Destination Airport',
  `OUTBOUND_DATE` date NOT NULL DEFAULT '0000-00-00' COMMENT 'Departure Date',
  `RETURN_DATE` date DEFAULT NULL COMMENT 'Data volo di ritorno',
  `CLASS_FLIGHT` char(1) DEFAULT NULL COMMENT 'Flight Class',
  `FARE_TYPES` varchar(40) DEFAULT NULL,
  `EXTERNAL_ID` bigint(20) DEFAULT NULL COMMENT 'external search id',
  `EXTERNAL_SOURCE` char(3) DEFAULT NULL COMMENT 'external search type',
  `RESTRICTION` char(1) DEFAULT NULL COMMENT 'Null=Normale; W=No risultati perche fuori da whitelist; B=No risultati perché rotta in blacklist',
  `PAYMENT_METHOD` varchar(5) DEFAULT NULL,
  `ADULTS` int(11) unsigned NOT NULL DEFAULT '0' COMMENT 'Adults',
  `CHILDS` int(11) unsigned NOT NULL DEFAULT '0' COMMENT 'Children',
  `INFANTS` int(11) unsigned NOT NULL DEFAULT '0' COMMENT 'Babys',
  `ID_BUSINESS_PROFILE` varchar(30) NOT NULL COMMENT 'Profile',
  `EMAIL` varchar(100) DEFAULT NULL COMMENT 'Email',
  `UID` varchar(40) NOT NULL COMMENT 'Unique ID',
  `LANGUAGE` char(5) NOT NULL COMMENT 'Language',
  `IP_ADDRESS` varchar(20) NOT NULL COMMENT 'User IP',
  `SOURCE` varchar(5) DEFAULT NULL COMMENT 'Flight request Origin',
  `SERVER` varchar(128) DEFAULT NULL COMMENT 'Identification server',
  `BF_SOURCE` varchar(100) DEFAULT NULL,
  `BF_SUBSOURCE` varchar(100) DEFAULT NULL,
  `PROFILE` char(1) DEFAULT NULL,
  `APP_SERVER_SESSION_ID` varchar(50) DEFAULT NULL COMMENT 'JSession Id',
  `COUNTRY_ID` char(3) DEFAULT NULL COMMENT 'Country code obtained by Ip',
  `CURRENCY` char(3) DEFAULT NULL COMMENT 'Currency',
  PRIMARY KEY (`ID_REQUEST`,`REQUEST_DATE`),
  KEY `EMAIL` (`EMAIL`),
  KEY `UID` (`UID`),
  KEY `REQUEST_DATE` (`REQUEST_DATE`),
  KEY `IP_ADDRESS` (`IP_ADDRESS`),
  KEY `LANGUAGE` (`LANGUAGE`)
) ENGINE=InnoDB AUTO_INCREMENT=15167586660 DEFAULT CHARSET=latin1 ROW_FORMAT=COMPRESSED COMMENT='User Searches'
/*!50100 PARTITION BY RANGE (TO_DAYS(REQUEST_DATE))
(PARTITION p237 VALUES LESS THAN (736232) ENGINE = InnoDB,
 PARTITION p238 VALUES LESS THAN (736233) ENGINE = InnoDB,
 PARTITION p239 VALUES LESS THAN (736234) ENGINE = InnoDB,
 PARTITION p240 VALUES LESS THAN (736235) ENGINE = InnoDB,
 PARTITION p241 VALUES LESS THAN (736236) ENGINE = InnoDB,
 PARTITION p242 VALUES LESS THAN (736237) ENGINE = InnoDB,
 PARTITION p243 VALUES LESS THAN (736238) ENGINE = InnoDB,
 PARTITION p244 VALUES LESS THAN (736239) ENGINE = InnoDB,
 PARTITION p245 VALUES LESS THAN (736240) ENGINE = InnoDB,
 PARTITION p246 VALUES LESS THAN (736241) ENGINE = InnoDB,
 PARTITION p247 VALUES LESS THAN (736242) ENGINE = InnoDB,
 PARTITION p248 VALUES LESS THAN (736243) ENGINE = InnoDB,
 PARTITION p249 VALUES LESS THAN (736244) ENGINE = InnoDB,
 PARTITION p250 VALUES LESS THAN (736245) ENGINE = InnoDB,
 PARTITION p251 VALUES LESS THAN (736246) ENGINE = InnoDB,
 PARTITION p252 VALUES LESS THAN (...

Rick Pizzi (pizzi) wrote:

Further analysis:

From the above stack trace, looking at function row_lock_table_autoinc_for_mysql (row0mysql.cc:651):

      case DB_LOCK_TABLE_FULL:
                /* Roll back the whole transaction; this resolution was added
                to version 3.23.43 */

                trx_rollback_to_savepoint(trx, NULL);

So, it looks like we hit some limit with locks?

       DB_LOCK_TABLE_FULL, /*!< lock structs have exhausted the
                                        buffer pool (for big transactions,
                                        InnoDB stores the lock structs in the
                                        buffer pool) */
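
To make the chain in the pt-pmp stack easier to follow, here is a hedged, self-contained C++ model; the names are simplified stand-ins, not the real InnoDB functions. The point it illustrates: a DB_LOCK_TABLE_FULL error from the auto-increment table lock causes the whole transaction to be rolled back, and the undo pass in turn freezes the data dictionary, i.e. takes an S-lock on dict_operation_lock (row_mysql_freeze_data_dictionary at row0undo.cc:298 in the stack above), which is exactly where the thread queues up behind the main thread's pending X-lock.

// Illustrative model only, not InnoDB source. Build with: g++ -std=c++17 model.cc
#include <shared_mutex>
#include <iostream>

std::shared_mutex dict_operation_lock;   // stand-in for the real rw-latch

enum dberr_t { DB_SUCCESS, DB_LOCK_TABLE_FULL };

// Stand-in for the rollback/undo path: it "freezes" the data dictionary by
// taking dict_operation_lock in S mode, as row_undo() does in the stack
// above. If the main thread is already queued for the X-lock, this S-lock
// request waits behind it.
void row_undo_all() {
    std::shared_lock<std::shared_mutex> s(dict_operation_lock);
    std::cout << "undoing rows under an S-lock on dict_operation_lock\n";
}

// Stand-in for row_lock_table_autoinc_for_mysql(): pretend the lock structs
// have exhausted their buffer-pool budget.
dberr_t lock_table_autoinc() { return DB_LOCK_TABLE_FULL; }

int main() {
    if (lock_table_autoinc() == DB_LOCK_TABLE_FULL) {
        // Per the row0mysql.cc comment quoted above: roll back the whole
        // transaction, not just the failing statement.
        row_undo_all();
    }
    return 0;
}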

Rick Pizzi (pizzi) wrote:

Following the code, it seems that InnoDB returns DB_LOCK_TABLE_FULL when less than 1/4 of any buffer pool instance is free.
We have 8 x 20 GB buffer pool instances. 25% may have been a good value maybe 10 years ago... unless I'm missing something.
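
To put numbers on that threshold, here is a back-of-the-envelope sketch of the limit as described above (this is not the actual buf_LRU code, and it assumes the default 16 KB page size). With 20 GB per instance, the 25% line works out to roughly 5 GB, about 327k pages, that would have to stay free in every single instance.

// Back-of-the-envelope sketch of the threshold described above, not the
// actual InnoDB heuristic. Assumes the default 16 KB page size.
#include <cstdio>
#include <cstdint>

int main() {
    const std::uint64_t page_size      = 16 * 1024;                    // bytes
    const std::uint64_t instance_bytes = 20ULL * 1024 * 1024 * 1024;   // 20 GB
    const std::uint64_t pages_per_inst = instance_bytes / page_size;
    const std::uint64_t threshold      = pages_per_inst / 4;           // "1/4 free"

    std::printf("pages per instance: %llu\n",
                (unsigned long long) pages_per_inst);
    std::printf("25%% threshold:      %llu pages (~%.1f GB that must stay free)\n",
                (unsigned long long) threshold,
                threshold * page_size / (1024.0 * 1024 * 1024));
    return 0;
}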

markus_albe (markus-albe) wrote:

I see this happening more or less often, with similar patterns:

# Main thread always seen in "enforcing dict cache limit"
Main thread process no. 51183, id 139748284823296, state: enforcing dict cache limit
Number of rows inserted 353327340, updated 1235530511, deleted 8112534, read 319100849486
0.00 inserts/s, 0.33 updates/s, 0.00 deletes/s, 54.41 reads/s

Main thread in SEMAPHORES section always waiting on itself on dict_operation_lock
--Thread 139748284823296 has waited at srv0srv.cc line 2654 for 214.00 seconds the semaphore:
X-lock (wait_ex) on RW-latch at 0x13890a0 '&dict_operation_lock'
a writer (thread id 139748284823296) has reserved it in mode wait exclusive
number of readers 101, waiters flag 1, lock_word: ffffffffffffff9b
Last time read locked in file row0upd.cc line 149
Last time write locked in file /var/builds/workspace/155687-v3-component/BUILD_CONTAINER/rhel6/label/DOCKER-LOW/app_root/x86_64-linux-gcc/percona-server-5.6.34-79.1/storage/innobase/srv/srv0srv.cc line 2654

For reference, one of many threads waiting for main thread...
--Thread 139705061721856 has waited at row0upd.cc line 149 for 178.00 seconds the semaphore:
S-lock on RW-latch at 0x13890a0 '&dict_operation_lock'
a writer (thread id 139748284823296) has reserved it in mode wait exclusive
number of readers 101, waiters flag 1, lock_word: ffffffffffffff9b
Last time read locked in file row0upd.cc line 149
Last time write locked in file /var/builds/workspace/155687-v3-component/BUILD_CONTAINER/rhel6/label/DOCKER-LOW/app_root/x86_64-linux-gcc/percona-server-5.6.34-79.1/storage/innobase/srv/srv0srv.cc line 2654

Notice how in the main thread semaphore wait details we can read:
> Last time read locked in file row0upd.cc line 149

And when I look at line 149 of row0upd.cc (5.6.34) I see the following, which was also seen in previous cases where I observed this same pattern:
> row_mysql_freeze_data_dictionary(trx);

It seems as if the write lock can be requested while the data dictionary is still frozen?
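
Reading the output literally, the write lock has not been acquired yet: the writer is still in wait_ex, waiting for the existing S-lockers to drain. A hedged guess is that, once a writer is queued, every new S-lock request (such as the freeze at row0upd.cc:149) also queues behind it, which would produce exactly this convoy. The small Linux/glibc-only demo below shows that pattern with a plain pthread rw-lock configured to prefer writers; it is an analogy, not InnoDB's own latch implementation.

// Hedged demo of the convoy, not InnoDB code. Linux/glibc only
// (pthread_rwlockattr_setkind_np is a GNU extension).
// Build with: g++ -pthread convoy.cc
#ifndef _GNU_SOURCE
#define _GNU_SOURCE 1
#endif
#include <pthread.h>
#include <unistd.h>
#include <cstdio>

static pthread_rwlock_t latch;               // stand-in for dict_operation_lock

static void* writer_thread(void*) {          // label only: the X-lock requester
    std::puts("main thread: requesting X-lock (wait_ex)...");
    pthread_rwlock_wrlock(&latch);           // blocks: a reader still holds S
    std::puts("main thread: got X-lock");
    pthread_rwlock_unlock(&latch);
    return nullptr;
}

static void* late_reader(void*) {            // label only: a later S-lock request
    sleep(1);                                // arrives after the writer is queued
    std::puts("user thread: requesting S-lock...");
    pthread_rwlock_rdlock(&latch);           // blocks behind the waiting writer
    std::puts("user thread: got S-lock");
    pthread_rwlock_unlock(&latch);
    return nullptr;
}

int main() {
    pthread_rwlockattr_t attr;
    pthread_rwlockattr_init(&attr);
    pthread_rwlockattr_setkind_np(
        &attr, PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
    pthread_rwlock_init(&latch, &attr);

    pthread_rwlock_rdlock(&latch);           // a long-running reader holds S

    pthread_t w, r;
    pthread_create(&w, nullptr, writer_thread, nullptr);
    pthread_create(&r, nullptr, late_reader, nullptr);

    sleep(3);
    std::puts("original reader: releasing S-lock; the stall clears");
    pthread_rwlock_unlock(&latch);

    pthread_join(w, nullptr);
    pthread_join(r, nullptr);
    return 0;
}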

InnoDB Monitor Output (without transactions section)
=====================================
2017-05-23 16:43:47 7f19acffc700 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 37 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 1263697 srv_active, 0 srv_shutdown, 512 srv_idle
srv_master_thread log flush and writes: 1264209
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 1102504906
--Thread 139705061721856 has waited at row0upd.cc line 149 for 213.00 seconds the semaphore:
S-lock on RW-latch at 0x13890a0 '&dict_operation_lock'
a writer (thread id 139748284823296) has reserved it in mode wait exclusive
number of readers 100, waiters flag 1, lock_word: ffffffffffffff9c
Last time read locked in file row0upd.cc line 149
Last time write locked in file /var/builds/workspace/155687-v3-component/BUILD_CONTAINER/rhel6/label/DOCKER-LOW/app_root/x86_64-linux-gcc/percona-server-5.6.34-79.1/storage/innobase/srv/srv0srv.cc line 2654
--Thread 139704501491456 has waited at btr0cur.cc line 2195 for 6.0000 seconds the semaphore:
X-lock on RW-latch at 0x7f22bec14968 '&btr_search...

jianwei.zhao (xpchild07) wrote:

We also face this problem when using the pt-online-schema-change tool; the phenomenon is exactly the same as pasted above.
Is there anything in progress?

Shahriyar Rzayev (rzayev-sehriyar) wrote:

Percona now uses JIRA for bug reports so this bug report is migrated to: https://jira.percona.com/browse/PS-3312
