Percona Server with XtraDB

innodb_buffer_pool_pages_index performance

Reported by Peter Zaitsev on 2010-05-05
This bug affects 1 person
Affects: Percona Server | Importance: High | Assigned to: Unassigned

Bug Description

Accessing innodb_buffer_pool_pages_index can be very slow under load. It can also cause certain mutexes to be held for a long time, causing a server stall and potentially a server crash (if locks are held longer than the InnoDB watchdog allows).

----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 4112, signal count 4016
--Thread 1235454272 has waited at buf/buf0lru.c line 2046 for 127.00 seconds the semaphore:
Mutex at 0xde2020 '&buf_pool_mutex', lock var 1
waiters flag 1
--Thread 1247275328 has waited at buf/buf0lru.c line 698 for 127.00 seconds the semaphore:
Mutex at 0xde2020 '&buf_pool_mutex', lock var 1
waiters flag 1
--Thread 1214474560 has waited at buf/buf0flu.c line 1077 for 127.00 seconds the semaphore:
Mutex at 0xde2020 '&buf_pool_mutex', lock var 1
waiters flag 1
--Thread 1245944128 has waited at row/row0purge.c line 543 for 122.00 seconds the semaphore:
Mutex at 0x6812220 '&dict_sys->mutex', lock var 1
waiters flag 1
Mutex spin waits 24624, rounds 144847, OS waits 3454
RW-shared spins 813, OS waits 483; RW-excl spins 48, OS waits 154
Spin rounds per wait: 5.88 mutex, 22.41 RW-shared, 104.08 RW-excl

Changed in percona-server:
status: New → Triaged
importance: Undecided → High
assignee: nobody → Yasufumi Kinoshita (yasufumi-kinoshita)
milestone: none → 5.1.46-rel10.1

Yasufumi Kinoshita (yasufumi-kinoshita) wrote :

Peter,
Which version do you use?

Also, this information only shows who is waiting.
What I'd like to know is who is making them wait.

So I need a stack trace from that time.
I cannot analyze this because the information is insufficient.

Vadim Tkachenko (vadim-tk) wrote :

Peter,

Please provide stacktraces using instructions
http://www.percona.com/docs/wiki/howto:debug_lock

Peter Zaitsev (pz-percona) wrote :

Vadim,

I sent the oprofile report to Yasufumi separately.


--
Peter Zaitsev, CEO, Percona Inc.
Tel: +1 888 401 3401 ext 501 Skype: peter_zaitsev
24/7 Emergency Line +1 888 401 3401 ext 911

Percona Training Workshops
http://www.percona.com/training/

Peter Zaitsev (pz-percona) wrote :

Yasufumi,

This is MySQL 5.1.45-rel10

Please run the test case I suggested, altering the tables to InnoDB. I trust
you can just do it with one table in the loop. If you can't repeat it, I
will get more information. I did not dig deeper because it breaks in a couple of
different ways, and I'd like you to stress-test this functionality to ensure
it works well; there may be multiple issues with it.



Vadim Tkachenko (vadim-tk) wrote :

Peter,

oprofile does not include call stack information; that is what we need here.


Yasufumi Kinoshita (yasufumi-kinoshita) wrote :

Peter, as I mailed:

There seem to be two types of problems.

1.
I cannot improve innodb_buffer_pool_pages_index while keeping the columns as they are.
We need to remove the name information (db, table, index). To pick up a name, dict_sys is needed, and
that lookup is too slow to be done for every block in the buffer pool.

2.
i_s_innodb_buffer_pool_pages.patch
uses mutexes in the wrong order and, in the worst case, may cause a deadlock.

I will fix 2, but I cannot fix 1 without changing the function's contract.

Vadim Tkachenko (vadim-tk) wrote :

Yasufumi,

Can't we remove locking of dict_sys in innodb_buffer_pool_pages_index ?

The information in innodb_buffer_pool_pages_index is not critical,
and it is OK if it is inconsistent.


--
Vadim Tkachenko, CTO, Percona Inc.
Phone +1-888-401-3403, Skype: vadimtk153
Schedule meeting: http://tungle.me/VadimTkachenko

Peter Zaitsev (pz-percona) wrote :

Yasufumi,

There is no point in removing the functionality, as there is another table which has
similar columns but without the table name and index name.
What is important is to ensure it is not intrusive, i.e. it does not block
everything for a long time. Perhaps it has to acquire the mutex for each row?

I'm also surprised it takes so long: 3 minutes to look up the names of the
table and index by ID for 40,000 pool pages sounds a bit slow to me.

Finally, note that there are typically many pages which belong to the same
index of the same table. Can't we cache the table lookups so we look up
each table only once?


Yasufumi Kinoshita (yasufumi-kinoshita) wrote :

Vadim,

Simply removing dict_sys->mutex may cause a crash as a result of the inconsistency.
Only removing the name columns allows us to remove the lock.

Peter,

innodb_buffer_pool_pages_index really makes no sense to me from the viewpoint of performance.
There is no index on the table list or the index list in the in-memory dictionary.
(It was removed at some point in InnoDB's history because maintaining those indexes caused another performance problem.
 Basically, InnoDB doesn't use index_id to identify index information internally,
 so maintaining such indexes only wastes CPU and causes useless mutex contention.)

innodb_buffer_pool_pages_index is effectively a Cartesian join of non-indexed tables.
Three minutes is natural if there are many tables and indexes in the in-memory dictionary.

innodb_buffer_pool_pages_index should not have name columns.
If you want names, you should join on index_id (after aggregating) to another dictionary view.
(InnoDB must not provide an index_id index on the dictionary just for innodb_buffer_pool_pages_index
 at the cost of all other ordinary performance.)

So I recommend removing the name columns from innodb_buffer_pool_pages_index.

If I had implemented it, I would never have implemented name columns.

Peter Zaitsev (pz-percona) wrote :

Yasufumi,

OK, I see; that makes sense. See my other email: I proposed joining these 3
innodb_buffer_pool pages tables we have now into one.


Yasufumi Kinoshita (yasufumi-kinoshita) wrote :

You should join with INNODB_SYS_TABLES or INNODB_SYS_INDEXES after aggregating INNODB_BUFFER_POOL_PAGES_INDEX, if you need the names.

Anyway, do you ever look at the raw data of INNODB_BUFFER_POOL_PAGES_INDEX?
We should normalize the views.
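The aggregate-then-join pattern described above could look roughly like this (a sketch only: the exact column names such as `index_id`, `table_id` and `name` in these I_S views are assumptions and may differ between server versions, so check the actual schema on your build):

```sql
-- Aggregate the buffer-pool pages first (no dictionary access needed),
-- then resolve names once per index via the dictionary views.
SELECT t.name AS table_name, i.name AS index_name, p.n_pages
FROM (
    SELECT index_id, COUNT(*) AS n_pages
    FROM information_schema.INNODB_BUFFER_POOL_PAGES_INDEX
    GROUP BY index_id
) p
JOIN information_schema.INNODB_SYS_INDEXES i ON i.index_id = p.index_id
JOIN information_schema.INNODB_SYS_TABLES  t ON t.table_id = i.table_id
ORDER BY p.n_pages DESC;
```

The point is that the dictionary is consulted once per distinct index after aggregation, not once per buffer-pool page during the scan.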

Peter Zaitsev (pz-percona) wrote :

Yes, this is the best option. It is a good question whether we want to expose SYS_TABLES
and SYS_INDEXES in a way that gives them an index, so the join can run at a reasonable speed.


Yasufumi Kinoshita (yasufumi-kinoshita) wrote :

Peter, I think what you say is the next step.
Currently I don't know how to create an index on an I_S table (i.e. make the internal index usable).

Currently, "Using join buffer" seems to be used when joining I_S views, judging from the EXPLAIN output.
I think "aggregate INNODB_BUFFER_POOL_PAGES_INDEX, then join INNODB_SYS_TABLES or INNODB_SYS_INDEXES" is better optimized than the current INNODB_BUFFER_POOL_PAGES_INDEX implementation.

So, what I should do for this bug is:

(1) Remove the need to use the dictionary, and stop using dict_sys->mutex.
        (Remove the columns which need to access the dictionary, and add the raw IDs for joining later.)

(2) Fix/optimize the mutex/lock usage.

Is that OK for now?

Changed in percona-server:
milestone: 5.1.46-rel11 → 11.0
Changed in percona-server:
milestone: 11.0-old → 11.0
Changed in percona-server:
milestone: 5.1-11.0 → 5.1-11.1
Changed in percona-server:
milestone: 5.1-11.1 → 5.1-12.0
Changed in percona-server:
milestone: 5.1.47-12.0 → 5.1-12.0
Peter Zaitsev (pz-percona) wrote :

Yasufumi,

Can we lock the dict mutex many times (once for each table name lookup) instead of once for the whole query duration?

In the end you're right: if the same data can be obtained through a view, we should NOT provide it as a base table.

But let's at least make the feature safe to use.

I think in general we need to include a "percona" schema in the Percona Server distribution, which should provide all kinds of cool views.

Yasufumi Kinoshita (yasufumi-kinoshita) wrote :

Peter,

> Can we lock the dict mutex many times (once for each table name lookup) instead of once for the whole query duration?

Then the blocking point will simply move to buf_pool->mutex, and the blocking time will be even longer...

I will remove the columns 'schema_name', 'table_name' and 'index_name' from INNODB_BUFFER_POOL_PAGES_INDEX,
add 'table_id' and 'index_id' to the view instead, and get rid of the use of dict_sys->mutex in the views.

Anyway, such a raw feature should not be used by anyone who doesn't know what 'table_id' and 'index_id' are.
And the view is used to aggregate page counts anyway, isn't it?
You should look up the names after the aggregation by using INNODB_SYS_TABLES and INNODB_SYS_INDEXES.

Peter Zaitsev (pz-percona) wrote :

> Then the blocking point will simply move to buf_pool->mutex, and the
> blocking time will be even longer...

Do we have to hold it for the duration of the full query? That can be too long for
very large buffer pools.

> I will remove the columns 'schema_name', 'table_name' and 'index_name' from
> INNODB_BUFFER_POOL_PAGES_INDEX, add 'table_id' and 'index_id' to the view
> instead, and get rid of the use of dict_sys->mutex in the views.

OK, fine.

Yasufumi Kinoshita (yasufumi-kinoshita) wrote :

> Do we have to hold it for the duration of the full query? That can be too long
> for very large buffer pools.

OK. I will try to use the mutex more optimistically. I think the results of the views do not need to be that consistent.

Peter Zaitsev (pz-percona) wrote :

Yasufumi,

Yes, indeed. Low overhead/impact matters much more here than
absolute consistency.


Changed in percona-server:
status: Triaged → Fix Committed
Changed in percona-server:
status: Fix Committed → In Progress
status: In Progress → Fix Released

Justin Gronfur wrote :

Does this bug affect "SHOW TABLES" lookups as well? We have a system with roughly 45,000 tables, and the "SHOW TABLES" command can block for between 30 seconds and 1 minute. Would this fix that problem as well, or does it apply only to queries run directly against the innodb_buffer_pool_pages_index table?

Vadim Tkachenko (vadim-tk) wrote :

Justin,

"SHOW TABLES" is a different story.
By its internal design, 45,000 tables will never be fast: SHOW TABLES
has to read the .frm files, and with that many files it is
always going to be slow.

