Web client: patron billing history results in long running query
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Evergreen | Fix Released | High | Unassigned |
2.12 | Won't Fix | High | Unassigned |
3.0 | Fix Released | High | Unassigned |
Bug Description
We are seeing long-running queries like the following:
"SELECT "mp".amount, "mp".id, "mp".note, "mp".payment_ts, "mp".payment_type, "mp".xact, "mp".voided FROM money.payment_view AS "mp"
LEFT JOIN money.billable_xact AS "__title_mbt" ON ( "__title_mbt".id = "mp".xact )
LEFT JOIN action.circulation AS "__title_circ" ON ( "__title_circ".id = "__title_mbt".id )
LEFT JOIN asset.copy AS "__title_acp" ON ( "__title_acp".id = "__title_
LEFT JOIN asset.call_number AS "__title_acn" ON ( "__title_acn".id = "__title_
LEFT JOIN biblio.record_entry AS "__title_bre" ON ( "__title_bre".id = "__title_
LEFT JOIN reporter.
LEFT JOIN money.billable_xact AS "__record_id_mbt" ON ( "__record_
LEFT JOIN action.circulation AS "__record_id_circ" ON ( "__record_
LEFT JOIN asset.copy AS "__record_id_acp" ON ( "__record_
LEFT JOIN asset.call_number AS "__record_id_acn" ON ( "__record_
LEFT JOIN biblio.record_entry AS "__record_id_bre" ON ( "__record_
LEFT JOIN money.billable_xact AS "__copy_
LEFT JOIN action.circulation AS "__copy_
LEFT JOIN asset.copy AS "__copy_
LEFT JOIN money.billable_xact AS "__xact.
LEFT JOIN money.materiali
LEFT JOIN money.billable_xact AS "__xact.usr_mbt" ON ( "__xact.usr_mbt".id = "mp".xact )
WHERE "mp".payment_ts BETWEEN '2016-12-15' AND 'now' AND ( "__xact.
This query is generated as a result of opening this page:
/eg/staff/
The EXPLAIN ANALYZE output is at the bottom.
From berick:
"so, the query is generated by open-ils.
IRC Discussion:
http://
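The plan that follows is EXPLAIN ANALYZE output. On a test database, a plan like it can be captured by prefixing the generated query with EXPLAIN. A sketch only (the SELECT list is abbreviated here and this is not a verbatim command from the report):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT mp.amount, mp.id, mp.note, mp.payment_ts,
       mp.payment_type, mp.xact, mp.voided
FROM money.payment_view AS mp
-- ... the LEFT JOINs and WHERE clause from the generated query above ...
;
```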
"Nested Loop Left Join (cost=5.
" -> Nested Loop (cost=5.
" -> Nested Loop Left Join (cost=5.
" -> Nested Loop Left Join (cost=5.
" -> Nested Loop Left Join (cost=4.
" -> Nested Loop Left Join (cost=3.
" -> Nested Loop (cost=2.
" -> Merge Append (cost=2.
" Sort Key: p.payment_ts"
" -> Index Scan using m_p_time_idx on payment p (cost=0.12..4.14 rows=1 width=75) (actual time=0.005..0.005 rows=0 loops=1)"
" -> Sort (cost=0.01..0.02 rows=1 width=75) (actual time=0.010..0.010 rows=0 loops=1)"
" Sort Key: p_1.payment_ts"
" Sort Method: quicksort Memory: 25kB"
" -> Seq Scan on bnm_payment p_1 (cost=0.00..0.00 rows=1 width=75) (actual time=0.001..0.001 rows=0 loops=1)"
" -> Index Scan using money_account_
" -> Sort (cost=0.01..0.02 rows=1 width=75) (actual time=0.007..0.007 rows=0 loops=1)"
" Sort Key: p_3.payment_ts"
" Sort Method: quicksort Memory: 25kB"
" -> Seq Scan on bnm_desk_payment p_3 (cost=0.00..0.00 rows=1 width=75) (actual time=0.000..0.000 rows=0 loops=1)"
" -> Index Scan using money_credit_
" -> Index Scan using money_forgive_
" -> Index Scan using money_goods_
" -> Index Scan using money_work_
" -> Index Scan using money_cash_
" -> Index Scan using money_check_
" -> Index Scan using money_credit_
" -> Index Scan using pg_class_oid_index on pg_class c (cost=0.28..0.30 rows=1 width=68) (actual time=0.001..0.002 rows=1 loops=4568817)"
" Index Cond: (oid = p.tableoid)"
" -> Nested Loop Left Join (cost=0.86..5.85 rows=4 width=8) (actual time=0.009..0.014 rows=1 loops=4568817)"
" -> Nested Loop Left Join (cost=0.43..3.26 rows=4 width=16) (actual time=0.006..0.009 rows=1 loops=4568817)"
" -> Append (cost=0.00..1.11 rows=4 width=8) (actual time=0.003..0.006 rows=1 loops=4568817)"
" -> Seq Scan on billable_xact __title_mbt (cost=0.00..0.00 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=4568817)"
" Filter: (id = p.xact)"
" -> Index Only Scan using circulation_pkey on circulation __title_mbt_1 (cost=0.43..0.51 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=4568817)"
" Index Cond: (id = p.xact)"
" Heap Fetches: 3168243"
" -> Index Only Scan using reservation_pkey on reservation __title_mbt_2 (cost=0.14..0.16 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=4568817)"
" Index Cond: (id = p.xact)"
" Heap Fetches: 0"
" -> Index Only Scan using grocery_pkey on grocery __title_mbt_3 (cost=0.42..0.44 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=4568817)"
" Index Cond: (id = p.xact)"
" Heap Fetches: 134630"
" -> Index Scan using circulation_pkey on circulation __title_circ (cost=0.43..0.53 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=4545456)"
" Index Cond: (id = __title_mbt.id)"
" -> Append (cost=0.43..0.63 rows=2 width=16) (actual time=0.003..0.004 rows=1 loops=4545456)"
" -> Index Scan using copy_pkey on copy __title_acp (cost=0.43..0.46 rows=1 width=16) (actual time=0.003..0.003 rows=1 loops=4545456)"
" Index Cond: (id = __title_
" -> Index Scan using unit_pkey on unit __title_acp_1 (cost=0.14..0.16 rows=1 width=16) (actual time=0.000..0.000 rows=0 loops=4545456)"
" Index Cond: (id = __title_
" -> Nested Loop Left Join (cost=0.86..5.85 rows=4 width=8) (actual time=0.007..0.012 rows=1 loops=4568817)"
" -> Nested Loop Left Join (cost=0.43..3.26 rows=4 width=16) (actual time=0.005..0.008 rows=1 loops=4568817)"
" -> Append (cost=0.00..1.11 rows=4 width=8) (actual time=0.003..0.005 rows=1 loops=4568817)"
" -> Seq Scan on billable_xact __record_id_mbt (cost=0.00..0.00 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=4568817)"
" Filter: (id = p.xact)"
" -> Index Only Scan using circulation_pkey on circulation __record_id_mbt_1 (cost=0.43..0.51 rows=1 width=8) (actual time=0.001..0.002 rows=1 loops=4568817)"
" Index Cond: (id = p.xact)"
" Heap Fetches: 3168242"
" -> Index Only Scan using reservation_pkey on reservation __record_id_mbt_2 (cost=0.14..0.16 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=4568817)"
" Index Cond: (id = p.xact)"
" Heap Fetches: 0"
" -> Index Only Scan using grocery_pkey on grocery __record_id_mbt_3 (cost=0.42..0.44 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=4568817)"
" Index Cond: (id = p.xact)"
" Heap Fetches: 134630"
" -> Index Scan using circulation_pkey on circulation __record_id_circ (cost=0.43..0.53 rows=1 width=16) (actual time=0.001..0.002 rows=1 loops=4545456)"
" Index Cond: (id = __record_
" -> Append (cost=0.43..0.63 rows=2 width=16) (actual time=0.002..0.003 rows=1 loops=4545456)"
" -> Index Scan using copy_pkey on copy __record_id_acp (cost=0.43..0.46 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=4545456)"
" Index Cond: (id = __record_
" -> Index Scan using unit_pkey on unit __record_id_acp_1 (cost=0.14..0.16 rows=1 width=16) (actual time=0.000..0.000 rows=0 loops=4545456)"
" Index Cond: (id = __record_
" -> Nested Loop Left Join (cost=0.43..3.26 rows=4 width=16) (actual time=0.005..0.008 rows=1 loops=4568817)"
" -> Append (cost=0.00..1.11 rows=4 width=8) (actual time=0.003..0.005 rows=1 loops=4568817)"
" -> Seq Scan on billable_xact __copy_barcode_mbt (cost=0.00..0.00 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=4568817)"
" Filter: (id = p.xact)"
" -> Index Only Scan using circulation_pkey on circulation __copy_
" Index Cond: (id = p.xact)"
" Heap Fetches: 3168242"
" -> Index Only Scan using reservation_pkey on reservation __copy_
" Index Cond: (id = p.xact)"
" Heap Fetches: 0"
" -> Index Only Scan using grocery_pkey on grocery __copy_
" Index Cond: (id = p.xact)"
" Heap Fetches: 134630"
" -> Index Scan using circulation_pkey on circulation __copy_barcode_circ (cost=0.43..0.53 rows=1 width=16) (actual time=0.001..0.002 rows=1 loops=4545456)"
" Index Cond: (id = __copy_
" -> Append (cost=0.43..0.62 rows=2 width=8) (actual time=0.002..0.003 rows=1 loops=4568817)"
" -> Index Only Scan using copy_pkey on copy __copy_barcode_acp (cost=0.43..0.46 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=4568817)"
" Index Cond: (id = __copy_
" Heap Fetches: 3974892"
" -> Index Only Scan using unit_pkey on unit __copy_
" Index Cond: (id = __copy_
" Heap Fetches: 0"
" -> Append (cost=0.00..1.22 rows=4 width=8) (actual time=0.005..0.005 rows=0 loops=4568817)"
" -> Seq Scan on billable_xact "__xact.usr_mbt" (cost=0.00..0.00 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=4568817)"
" Filter: ((usr = 425135) AND (p.xact = id))"
" -> Index Scan using circulation_pkey on circulation "__xact.usr_mbt_1" (cost=0.43..0.62 rows=1 width=8) (actual time=0.002..0.002 rows=0 loops=4568817)"
" Index Cond: (id = p.xact)"
" Filter: (usr = 425135)"
" Rows Removed by Filter: 1"
" -> Index Scan using reservation_pkey on reservation "__xact.usr_mbt_2" (cost=0.14..0.16 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=4568817)"
" Index Cond: (id = p.xact)"
" Filter: (usr = 425135)"
" -> Index Scan using grocery_pkey on grocery "__xact.usr_mbt_3" (cost=0.42..0.45 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=4568817)"
" Index Cond: (id = p.xact)"
" Filter: (usr = 425135)"
" Rows Removed by Filter: 0"
" -> Append (cost=0.00..1.11 rows=4 width=8) (actual time=0.011..0.029 rows=1 loops=2)"
" -> Seq Scan on billable_xact "__xact.
" Filter: (id = p.xact)"
" -> Index Only Scan using circulation_pkey on circulation "__xact.
" Index Cond: (id = p.xact)"
" Heap Fetches: 2"
" -> Index Only Scan using reservation_pkey on reservation "__xact.
" Index Cond: (id = p.xact)"
" Heap Fetches: 0"
" -> Index Only Scan using grocery_pkey on grocery "__xact.
" Index Cond: (id = p.xact)"
" Heap Fetches: 0"
"Planning time: 4.570 ms"
"Execution time: 225992.136 ms"
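Reading the plan: the date-range index scan on payment_ts drives roughly 4.5 million payment rows through the nested-loop left joins (loops=4568817), and the `usr = 425135` filter is only applied in the last Append. A sketch of a formulation that applies the patron filter first, so the planner can start from the handful of transactions owned by that user (simplified and hypothetical; this is not the fix that was committed):

```sql
-- Sketch: filter by patron before the date range, using the
-- inheritance parent money.billable_xact (circulation, reservation,
-- and grocery are its children, as seen in the plan's Append nodes).
SELECT mp.amount, mp.id, mp.note, mp.payment_ts,
       mp.payment_type, mp.xact, mp.voided
FROM money.billable_xact AS mbt
JOIN money.payment_view AS mp ON mp.xact = mbt.id
WHERE mbt.usr = 425135
  AND mp.payment_ts BETWEEN '2016-12-15' AND now();
```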
Changed in evergreen:
status: New → Confirmed
Changed in evergreen:
importance: Medium → High
importance: High → Medium
Changed in evergreen:
importance: Medium → High
Changed in evergreen:
assignee: nobody → Dan Wells (dbw2)
tags: added: pullrequest
Changed in evergreen:
assignee: Dan Wells (dbw2) → nobody
milestone: none → 3.1-rc
Changed in evergreen:
assignee: nobody → Galen Charlton (gmc)
Changed in evergreen:
status: Fix Committed → Fix Released
At C/W MARS we're seeing this in both 2.12 and 3.0.2 in the web staff client. I've seen the above query running for 6 days on our training database (2.12) and testing database (3.0.2). The testing database is stock Evergreen 3.0.2 with anonymized local data. Training has customizations applied to the OPAC. We have not made any direct customization to the web staff client.
In production, on 2.12.8, we do see this query run long enough that no results get back to the web staff client. However, it does not run for days. I've not managed to actually time it, but it seems to run for between 2 and 4 minutes. (Our production db server is a lot faster than the others.)
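For spotting runs like the 6-day one described above, PostgreSQL's pg_stat_activity view can list active statements by elapsed time. A monitoring sketch (the 5-minute threshold is an arbitrary example, not from the report):

```sql
-- List active queries that have been running longer than 5 minutes.
SELECT pid,
       now() - query_start AS runtime,
       state,
       query
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;
```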