innodb_row_lock_current_waits spikes to 18446744073709551615
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Percona Server (moved to https://jira.percona.com/projects/PS) | Status tracked in 5.7 | | | |
| 5.1 | Won't Fix | Undecided | Unassigned | |
| 5.5 | New | Undecided | Unassigned | |
| 5.6 | New | Medium | Unassigned | |
| 5.7 | New | Medium | Unassigned | |
Bug Description
From the error log (using fprintf on innodb_row_lock_current_waits; see the patch below):

[repeated "innodb_row_lock_current_waits = ..." error-log lines followed here; the values did not survive in this copy of the report]
Observed first by PZ during a sysbench run with PCT monitoring. The sysbench run may have been interrupted at that point; this can be reproduced regularly by killing sysbench with kill -9.
The testcase is exactly as described here: https:/
$ cat sysbenchnew_
while true; do
  clear
  grep "Innodb_row_lock_current_waits" 2.log | sed 's|[A-Za-z_ \t]\+||' | sort -n | tail -n10
  echo "-------------------"
  grep "Questions" 2.log | sed 's|[A-Za-z_ \t]\+||' | sort -n | tail -n30
  sleep 1
done
This will refresh the screen every second and, at the top (above the line), show the top 10 Innodb_row_lock_current_waits values seen so far, with the top 30 Questions values below the line.
Interestingly, PZ's value jumped to 18446744073709551615 (2^64 - 1).
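For reference, 18446744073709551615 is exactly the value an unsigned 64-bit counter takes after being decremented below zero. A minimal standalone C sketch of the suspected underflow (illustrative only, not Percona Server code):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
	/* Stand-in for a gauge like srv_stats.n_lock_wait_current_count:
	   incremented when a row-lock wait begins, decremented when it
	   ends. */
	uint64_t current_waits = 0;

	/* If an end-of-wait path runs without a matching begin (say, the
	   waiting session was killed mid-wait), the gauge underflows: */
	current_waits--;

	printf("%" PRIu64 "\n", current_waits); /* 18446744073709551615 */
	return 0;
}

This would be consistent with the kill -9 reproduction above: an interrupted wait decrementing the counter one time too many.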
Changed in percona-server:
assignee: Laurynas Biveinis (laurynas-biveinis) → nobody
I will try to add an assert on an out-of-range innodb_row_lock_current_waits value and have it produce a core dump. It may not be the best location in the code to catch the info, but thd info will be available there, and it will fire at a point where the variable holds such a high value.
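A minimal sketch of what that assert could look like, assuming it is placed next to the export in srv_export_innodb_status() in storage/innobase/srv/srv0srv.cc; the 10000000 threshold is an arbitrary illustrative bound, and ut_a() is InnoDB's always-enabled assertion macro, which aborts the server and so yields a core dump when core dumps are enabled:

/* Hypothetical placement, mirroring the fprintf patch below. */
ulint	row_lock_current_waits
	= (ulint) srv_stats.n_lock_wait_current_count;

/* A value this large can realistically only come from underflow;
   abort here so the core dump captures server state at a moment when
   the counter is already wrapped. */
ut_a(row_lock_current_waits < 10000000);

export_vars.innodb_row_lock_current_waits = row_lock_current_waits;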
$ bzr diff  # Patch used
=== modified file 'storage/innobase/srv/srv0srv.cc'
--- storage/innobase/srv/srv0srv.cc	2014-09-25 14:16:07 +0000
+++ storage/innobase/srv/srv0srv.cc	2014-11-12 09:40:50 +0000
@@ -1808,6 +1808,8 @@
 	export_vars.innodb_row_lock_current_waits =
 		srv_stats.n_lock_wait_current_count;
 
+	fprintf(stderr, "innodb_row_lock_current_waits = %lu\n", (ulong) srv_stats.n_lock_wait_current_count);
+
 	export_vars.innodb_row_lock_time = srv_stats.n_lock_wait_time / 1000;
 
 	if (srv_stats.n_lock_wait_count > 0) {
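Note that srv_export_innodb_status() is the function that fills export_vars for SHOW STATUS, so this fprintf writes the raw counter to stderr (and hence to the error log) on every status export; that is how the error-log output referenced at the top of this report was produced.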