web client: Sorting for view record holds and holds shelf interfaces

Bug #1712854 reported by Kathy Lussier on 2017-08-24
This bug affects 23 people

Bug Description

I'm breaking this bug off from bug 1437104 since sorting in these interfaces is critical.

There are two issues with sorting in the view record holds interface.

- It isn't clear what the default sort is in this interface. In the XUL client, it appears to sort reverse chronologically by request date.
- Staff frequently need to sort this list. Common sort fields are by request date, queue position, and last name. Although sorting was recently added for many patron grids, this grid is a critical one that does not yet have sorting available.

Beth Willis (willis-a) wrote :

Holds appear to sort by hold ID, highest to lowest (or reverse chronological order by request date).

Changed in evergreen:
status: New → Confirmed
Cesar V (cesardv) on 2017-10-25
Changed in evergreen:
assignee: nobody → Cesar V (cesardv)
Cesar V (cesardv) wrote :

There was actually no sorting at all on that record holds grid; IIRC the data just comes back in the order it was retrieved from the database.

I've added code to enable sorting on the record holds grid and the holds shelf; branch below.
Enabling sorting on the record holds grid was a bit involved, as caching was not there, and there was an issue with password hashes being returned for hold.usr and hold.requestor; that was addressed as well (in the last commit).


tags: added: pullrequest
Cesar V (cesardv) on 2017-11-01
Changed in evergreen:
assignee: Cesar V (cesardv) → nobody
Kathy Lussier (klussier) on 2017-11-27
Changed in evergreen:
assignee: nobody → Kathy Lussier (klussier)
Kathy Lussier (klussier) wrote :

Thanks Cesar! I tested this today and discovered a problem that may also be an issue with the other grids using clientsort. The sorting works as expected on a small set of data, but once the dataset is large enough to require paging, it looks like we are only sorting one page at a time.

I tested the above code on a record with 177 holds. My display defaults to showing 25 holds at a time. If I sort by last name on the first page of holds, it appears to sort only the 25 holds that appeared on the first page. If I page to the 2nd page of results, it no longer sorts by last name. If I click back to the first page, it is sorting by last name again, but it now appears to be sorting the 50 holds that displayed on the first two pages.

I have a screencast available at https://drive.google.com/file/d/1e57ve2MdcPrgHpDfQ-LVqMyoCH_zhBjG/view that shows this behavior.

I haven't checked the other interfaces to see if the same thing happens there.

Changed in evergreen:
assignee: Kathy Lussier (klussier) → nobody
Mike Rylander (mrylander) wrote :


The sort issue you found is specific to this grid. The reason is that client-sort requires that all data be available on the client side in order to work, but this grid does not supply data that way yet. The other client-sort grids adjusted in bug 1697954 do, though, and won't have this issue.
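Mike's point can be illustrated with a minimal sketch (the function and field names here are illustrative, not the actual Evergreen grid code): a client-side sort can only reorder the rows the browser has already fetched, so with paged data each page appears sorted in isolation.

```javascript
// Sketch: why client-side sorting misbehaves on partially fetched data.
// Names are illustrative; this is not the actual grid implementation.

function clientSort(rows, field) {
  // Sorts only the rows present on the client.
  return [...rows].sort((a, b) => (a[field] < b[field] ? -1 : 1));
}

const allHolds = [
  { id: 1, lastName: 'Zimmer' },
  { id: 2, lastName: 'Adams' },
  { id: 3, lastName: 'Baker' },
  { id: 4, lastName: 'Young' },
];

// Page size 2: the client has only fetched the first page.
const fetched = allHolds.slice(0, 2);

// Only Zimmer and Adams get reordered; Baker and Young are
// invisible to the sort because they were never fetched.
console.log(clientSort(fetched, 'lastName').map(h => h.lastName));

// Once every row is on the client, the same sort is globally correct.
console.log(clientSort(allHolds, 'lastName').map(h => h.lastName));
```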

Kathy Lussier (klussier) wrote :

Thank you for the further explanation Mike! That's good to know.

In the case of this particular grid, then, can we supply the data in the same way as other grids so that sorting can work as expected?

Mike Rylander (mrylander) wrote :

We should be able to, but the risk is slowness for large hold lists. It's worth a try, though, I think.

Kathy Lussier (klussier) wrote :

Adding a note that sorting in Browse Holds Shelf works the same way. I also checked negative balance users (another grid touched by this branch), but was unable to determine if it experienced the same issue due to bug 1735791.

tags: added: needsrepatch
removed: pullrequest
Mike Rylander (mrylander) wrote :

I've pushed a branch that causes the Hold Shelf and Record > View Holds UIs to load all hold data up front. It works fine with my small dataset, of course, but will need testing in large environments to confirm that it's usable.


Kathy Lussier (klussier) wrote :

I've been looking at this on a test VM and would like to get some of our people to look at it on a system with production data. In the meantime, however, I noticed a couple of issues.

- With this branch loaded (both with and without Mike's commit), I notice that if I change the pickup library for this grid, it doesn't automatically update to show holds for the new library. See the screencast of a test that was performed on a test VM that just had Cesar's portion of the work - https://drive.google.com/file/d/17oXj9ZX4GnMfCHDHzvlc9nyHj_CWBIe4/view

- On the test VM that has the latest branch with Mike's commit, I noticed that when I first retrieve the record holds interface, it lists all of the holds on the record, no matter what my row count is set at. I knew the most recent change would retrieve the data for all holds on the record, but I didn't expect it to display all at one time.

Terran McCanna (tmccanna) wrote :

We put this on our test server that has a copy of production data, and it took a full five minutes to load a branch that had a large hold shelf - not sure how many items were on the shelf, but more than 1,200. When I tried to go further, it got confused and started re-downloading the whole list again.

Cesar V (cesardv) wrote :

Hi Terran, thanks for testing this!
Are holds shelves with 1000+ holds a common occurrence?
I was under the impression those were the exception/outliers.

When I tested this, ~5 min for ~1,000 holds was about what it took to pre-load all holds in order to enable client-side sorting. Unfortunately, I'm not sure there's a way around that; that's quite a bit of data, even with the fetch batch size increased from 15 to 25, IIRC.

Terran McCanna (tmccanna) wrote :

That was at one of our larger branches, but they're not the only one, and we expect to add additional large locations in the future.

It would work better to keep the old method but change the CSV download to include all items instead of just the current page's results.

Kathy Lussier (klussier) wrote :

Adding a note that I saw similar issues with performance. I've been adding 178 holds to a record in the Concerto dataset for this testing. When I first tested the code back in December, it took about 20 seconds to load the holds on this record. When I tested with the new branch this morning, it took more than 40 seconds to load. Our top bestsellers are likely to have hundreds of holds on them, so the 178 holds on one title seems like a reasonable threshold to test.

Mike Rylander (mrylander) wrote :


When you mention your testing in December, do you mean before my branch, with my branch but without Cesar's fix, or Cesar's branch?


Kathy Lussier (klussier) wrote :

Details, details! ;)

Sorry for the confusion. The December testing was done with the branch you posted in comment #8. I've been taking screencasts while loading the interface to demonstrate load times - https://drive.google.com/file/d/1wJoBsocAHBe1YXDCJpxaCtN8WKW36XfK/view

The testing I did this morning with Cesar's branch, posted in comment #10, can be seen here:

Cesar V (cesardv) wrote :

I've rebased the latest branch for the fixes relating to this LP here:

Additionally, the last commit has an improvement that depends on a fix that prevents the grid paging from messing up and duplicating the "fetch-all-the-holds" request due to grid.totalCount not being set -- that fix to the grid is on bug 1739648, which has yet to be signed off or merged.

Andrea Neiman (aneiman) on 2018-01-24
tags: added: pullrequest
removed: needsrepatch
Kathy Lussier (klussier) wrote :

I just wanted to add a note that I don't support merging this branch at this time. Although we want to see sorting in these interfaces, the slow retrieval speeds would be problematic for our staff. I would prefer to pursue an alternate method that would allow sorting with faster page loading times.

Terran McCanna (tmccanna) wrote :

I agree with Kathy on not merging this branch.

If the download CSV option downloaded all the results rather than just the results on the current page, that would be a viable alternative. (This would be helpful on all interfaces.)

Scott Thomas (scott-thomas-9) wrote :

Sorting of columns in View Record Holds is critical for PaILS.

Kathy Lussier (klussier) wrote :

Is it critical enough that libraries would be willing to wait this long for the interface to load so that they could have sorting? https://drive.google.com/file/d/1JArB98mpMfWlUYOrzNIY-bSvflCOUiBi/view

MassLNC has contracted with Equinox to find a way to sort these two interfaces without the huge hit on performance, but it may take a while until that work is done (certainly not in time for 3.1).

The question for now, then, is whether we want to push this code in the meantime to provide sorting. If we do, we're going to see a lot of wait time for loading these interfaces, which is equally frustrating for staff. I would prefer to wait until we have the improved sorting, and Terran has indicated a preference for waiting too. I would like to hear if others feel differently.

FWIW, I think the CSV workaround is more workable for the Browse Holds Shelf interface than for the Record Holds interface.

Joan Kranich (jkranich) wrote :

I agree that we do not want to implement anything that increases response time or wait time to load the screen.

In looking in our system at bib records with 600 to 700 Holds, viewing Holds for a specific pickup location is a manageable size list in most cases. I can see where staff will miss sorting when viewing all pickup locations. I still think response time is more important.

Kathy Lussier (klussier) on 2018-05-08
summary: - web client: Sorting for view record holds interface
+ web client: Sorting for view record holds and holds shelf interfaces
description: updated
Diane Disbro (ddisbro) wrote :

Using Missouri Evergreen web client 3.0.9 - Yes, we must be able to sort these lists, especially the Hold Shelf. Waiting for long loading is not a good option. I'm glad you are already working on this issue.

Andrea Neiman (aneiman) wrote :

MassLNC has contracted with Equinox to address the speed issues with holds shelf & record view holds sorting. Specs here: https://yeti.esilibrary.com/dev/public/techspecs/hold_list_speed.pdf

A community branch will be forthcoming.

Jane Sandberg (sandbej) on 2018-07-13
tags: added: sorting
Bill Erickson (berick) on 2018-07-30
tags: added: needsrepatch
removed: pullrequest
Mike Rylander (mrylander) wrote :

Hi all, please find a branch addressing the above at http://git.evergreen-ils.org/?p=working/Evergreen.git;a=shortlog;h=refs/heads/user/miker/lp-1712854-hold_ui_speed_improvements

From the commit message:

The Hold Shelf and Record -> View Holds interfaces are painfully slow when
faced with a large set of holds. The main reason is the 2-stage process used
to gather the data: 1) find hold IDs for the context; 2) for each hold, fetch
all the requisite data using a relatively slow API.

Here we create a new API that provides a streaming response of pre-inflated
hold data. The full result set is loaded immediately, and once loaded, the
grid makes use of the "clientsort" feature to provide fast sorting without
the need to go back to the server when only changing the sort columns and
direction, and when paging through results. The Hold Shelf UI now has a
progress indicator that appears whenever the hold list is retrieved from the
server, and the progress indicator for the Record -> View Holds UI is enhanced
to provide more granular updates.

It is expected that other hold interfaces will be able to make use of this new
API for performance and functionality improvements.

NOTE: This includes an additional change required to protect localStorage from
overlarge values when attempting to save the "last printed value" for future
reprinting. Previously, when attempting to print a 2000+ hold list,
localStorage throws an error and the print fails. Now the print will succeed,
but the value is not stored for reprinting and a message is logged to the JS console.

NOTE: The original development plan was to base this new API on a native
Postgres materialized view, and provide a LISTEN/NOTIFY daemon to monitor for
events that should trigger a refresh of that view. As it happens, the use of
other new Postgres features (primarily the LATERAL join type) allows us to
avoid that maintenance and run-time complication.
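The localStorage protection described in the first NOTE above can be approximated with a simple guard around the save. This is a hedged sketch with an illustrative key name and function signature, not the actual branch code; the storage object is passed in so the logic is testable.

```javascript
// Sketch: guard the "last printed value" save against localStorage
// quota errors so an overlarge hold list still prints, it just isn't
// cached for reprinting. Key name and signature are illustrative.

function saveLastPrinted(storage, html) {
  try {
    storage.setItem('eg.print.last_printed', html);
    return true;
  } catch (err) {
    // Very large values (e.g. a 2000+ hold list) can exceed the
    // storage quota and throw. Log and carry on with the print.
    console.warn('Could not store last printed value for reprint:', err);
    return false;
  }
}
```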

tags: added: pullrequest
removed: needsrepatch
Bill Erickson (berick) wrote :

Mike, the API looks very useful. I'm curious about the decision to pre-load all of the holds and use client sort instead of using the shiny new server-side paging and sorting in these UIs, though. Is the assumption that staff will invariably do a lot of sorting and/or page through every hold in these UIs, making the initial extra data load worth it?

Mike Rylander (mrylander) wrote :

Thanks, Bill. There are a few reasons, some of them subjective, but one is behavioral.

First, subjectively, it provides a stable data set for paging and reduces paging-induced server round-trip time, which in some workflows is significant. Sorting on the client side (the full sort-and-render process, that is) is much faster for large data sets -- especially at smaller page sizes, where page render time far outstrips in-browser sorting time -- and that is exactly where the main pain point is, AIUI. It also reduces server load to a single call per filter change.

Behaviorally, the queue position calculation (which is always a lie, but we know that) is not reliable on paged sets. It depends on the full hold set to get right, numerically (and even then, still, it's a lie). However, we now have enough data (assuming appropriate configuration of the various hold wait estimation intervals) to calculate the estimated hold wait time on the client side, which would be a good enhancement to replace queue position.

Also of note is the fact that printing the full list becomes an O(n) operation, rather than refetching and rerendering. We have specific complaints about that in the hold shelf UI, and if we can move the hold pull list to this API then we'll all be tired of so much (printing) winning.
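Mike's behavioral point about queue position can be sketched as follows (field names are illustrative, not the Evergreen hold schema): the position is a rank over the entire hold set, so it can only be computed once every hold is on the client.

```javascript
// Sketch: queue position is a rank across ALL holds on a record,
// ordered by request time, which is why a paged subset cannot
// compute it reliably. Field names are illustrative.

function withQueuePositions(holds) {
  return [...holds]
    .sort((a, b) => a.requestTime - b.requestTime)
    .map((hold, i) => ({ ...hold, queuePosition: i + 1 }));
}

const holds = [
  { id: 10, requestTime: 300 },
  { id: 11, requestTime: 100 },
  { id: 12, requestTime: 200 },
];

// With the full set loaded, every hold gets a stable position;
// drop any one hold from the input and the others' positions shift.
console.log(withQueuePositions(holds));
```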

Bill Erickson (berick) wrote :

Agreed those are nice to have, but they come at a cost. We have a chance to improve initial page load time and we're choosing not to, to speed up sorting and printing.

The new API will certainly make it faster, regardless. And maybe the trade-off is worth it overall. In many (most?) cases it won't matter, but a large holds shelf could be painful to preload. 500 holds on the shelf is not uncommon for us; we have one beast with >2k. If those are outliers we shouldn't be distracted by, that's fine, because we need to do what works best for most sites. I just want to be sure this perspective is considered.

Mike Rylander (mrylander) wrote :


For reference, here's the performance I'm seeing on a set of ~60k holds, where each record has between 200 and 2k holds. The hold shelf has 126 holds on it (I can only click "Don't Print" so many times...). This is all on an under-powered, everything-including-the-db-on-one-vm 2-vcpu test server:

 * 126 holds (on the shelf) load in 1.5 seconds, click to render
 * 200 holds load in under 3 seconds
 * 2k holds load in about 15 seconds

I don't think we're giving up much there, TBH. But have you tested with your data set? It'll be larger in aggregate, and per record/shelf I'm sure. If it's significantly slower than what I'm seeing, I'd be very interested in seeing an explain-analyze of the new core hold listing query!

Setting my numbers on initial load speed aside, though, and particularly for the hold shelf where queue position is really not important -- it's on the shelf, their position is "it's ready, come get it" -- I can see removing the queue position column and directly paging there. The drawbacks are that we can't sort on calculated columns, and printing the hold shelf is /precisely/ a pain point for actual users, even at small libraries, and this puts printing at a significant disadvantage. That said, it's still going to be enormously faster to print, even if we do have to re-fetch the data.

I don't think users are ready to give up queue position on the Record -> View Holds UI, though, do you? That's the one interface where it is sorta related to the context (holds for this record) and might be useful as a sort column.

For the pull list, where the primary use case is actually printing and queue position isn't important (it's de facto "1"), front-loading the data to make sorting and printing the whole set very fast seems ideal, so I'd argue for the way the API is used now be how we make use of it in the pull list.


Bill Erickson (berick) wrote :

Thanks, Mike. Unfortunately, I don't have anything newer than 2.12 on a production-sized database. I thought about back-porting, but it would mean backporting/reindexing metabib display fields as well, so I can't offer any data at the moment. (Soon, though, soon).

Your timing tests are encouraging. Much better than I anticipated. Given that, if one of the other bug commenters could test on their data and sign off (or at least thumbs up), I'll help take it over the finish line.

Kathy Lussier (klussier) wrote :

Hi Bill,

I asked the MassLNC Development Initiative partners to test this branch with production data. So far, I've heard back from Terran, who was on a test system with GPLS data:

"I tested against a branch with 861 items on the hold shelf. It took 19 seconds to load, but once it was loaded, the sorting worked perfectly, printing was fast, downloading was fast. There was a few second lag page to page, but it still worked far better than the current functionality."

I'm hoping to hear back from one or two others before adding a signoff.

Jason Boyer (jboyer) wrote :

We've been testing this in IN briefly and have come across a fun thing, at least for the holds shelf. The API call currently used to show the holds shelf, open-ils.circ.captured_holds.id_list.on_shelf.retrieve.authoritative.atomic, uses the alhr source, while the new code's call to open-ils.circ.hold.wide_hash.stream doesn't perform the same tricks to only look at the latest hold for each copy. So when you pull up the holds shelf and there's a popular item on it, you can end up with situations like bug 1787590 (which probably should have been filed here as a comment, but we had both forgotten that this branch was installed when that request came in). Anyway, you can see from that screenshot that the same copy is listed twice in the canceled holds at the top, in addition to the active hold that it is currently on the shelf for (though the active hold is off the screen).
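The fix Jason describes amounts to keeping only the most recently captured hold for each copy. A minimal sketch of that filter, with illustrative field names (the real option lives in the wide-hold API, not in client code like this):

```javascript
// Sketch: collapse a hold list so only the latest hold per copy
// (by capture time) survives, hiding earlier canceled holds that
// would otherwise show the same item multiple times on the shelf.

function onlyLastHoldPerCopy(holds) {
  const latest = new Map();
  for (const hold of holds) {
    const prev = latest.get(hold.copyId);
    if (!prev || hold.captureTime > prev.captureTime) {
      latest.set(hold.copyId, hold);
    }
  }
  return [...latest.values()];
}

const shelf = [
  { id: 1, copyId: 'c1', captureTime: 100, canceled: true },
  { id: 2, copyId: 'c1', captureTime: 200, canceled: false }, // active hold
  { id: 3, copyId: 'c2', captureTime: 150, canceled: false },
];

// Copy c1 appears once (the active hold), not twice.
console.log(onlyLastHoldPerCopy(shelf));
```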

Andrea Neiman (aneiman) on 2018-08-20
Changed in evergreen:
milestone: none → 3.2-beta
Michele Morgan (mmorgan) wrote :

I tested this on a test server with production data and can confirm Jason Boyer's observation where the same item can show up multiple times on the holds shelf due to previously cancelled holds.

On the plus side, the load times did not seem unreasonable, and the sorting capabilities worked as expected - making it easy to sort by "Current Copy" to observe the duplicate problem.

On our test system, we are seeing instances where the holds shelf fails to load, but we were also seeing this prior to loading this patch. Just mentioning it here in case others may be seeing the same thing. I will open a separate bug on that behavior.

tags: added: holds
Mike Rylander (mrylander) wrote :

I think I know where that's coming from. I'll look at that ASAP, it should be a simple bug fix. FWIW, I'm not in favor of that holding up merge. :)

Kathy Lussier (klussier) wrote :

Adding a note that eg.grid.circ.wide_holds.shelf and eg.grid.cat.catalog.wide_holds should be added to config.workstation_setting_type

Bill Erickson (berick) wrote :

Or just leave the persist keys alone so no new settings are required? (May want to delete existing values in this case). Otherwise, we'll have to delete the abandoned WS setting types while adding new ones.

Mike Rylander (mrylander) wrote :

I'm fine with leaving the persist keys alone. I mainly didn't want things to break with existing values, and it wasn't clear how we'd delete those on the browser side, but just once, without adding even more cruft like a "I did the delete" setting or cookie or something.

Bill Erickson (berick) wrote :

I know what you mean.. I guess a DB upgrade to delete existing workstation setting values would work, but since server-stored settings are new in 3.2, only those bravely testing master will have any values to clear. Maybe just a comment in the release notes?

Mike Rylander (mrylander) wrote :

Regarding the canceled holds, am I correct that we only want to see canceled holds if there is no succeeding hold (defined by having a later capture date) on the copy?

Mike Rylander (mrylander) wrote :

I've updated the branch with three commits:

1) Fix a thinko in grid action handler names for cancel/uncancel
2) Provide a "only last hold for copy" option for the wide hold api to address the "duplicate copy" issue
3) Make use of the in-browser cache of holds when switching between All and Clearable lists, as the act of clearing holds can take care of updating the in-browser data.

Kathy Lussier (klussier) wrote :

I'm taking another look at this branch now, but I didn't see any changes made regarding eg.grid.circ.wide_holds.shelf. Do we have a consensus on this?

My concern with keeping the name as eg.grid.circ.holds.shelf with a comment made to the release notes is that the people who need to know this info aren't likely to be the same people who read the release notes. Release note comments are great when it's related to something that needs to be configured globally, because you expect the sys admins performing the upgrade to read those notes. But getting the word out to all circ desk staff that they need to clear some storage settings is a bit trickier.

Mike Rylander (mrylander) wrote :

Kathy, that's correct, I intentionally did not address that.

I'm not sure I followed Bill in his last comment. If a local (in-browser) setting exists for these grids, won't it cause problems regardless of whether it gets pushed to the server side storage or not? I guess I'm not clear on how we can cause the removal of a browser-local setting at upgrade time to avoid breakage.

Bill Erickson (berick) wrote :

You didn't follow it, Mike, because it didn't go anywhere. I was only thinking about the server-stored data (and people running master). I now understand the need to change the persist key.

Kathy Lussier (klussier) wrote :

OK, thanks for the clarification! I can add the sql bits that move eg.grid.circ.wide_holds.shelf and eg.grid.cat.catalog.wide_holds to config.workstation_setting_type and removes the old persist keys.

Changed in evergreen:
assignee: nobody → Kathy Lussier (klussier)
Kathy Lussier (klussier) wrote :

I have a new branch available at http://git.evergreen-ils.org/?p=working/Evergreen.git;a=shortlog;h=refs/heads/user/kmlussier/lp-1712854-hold_ui_speed_improvements that resolves a conflict in hatch.js, rebases the code against master, and adds the new persist settings to server storage.

I have not yet signed off on the code because I'm still encountering problems with the Show Clearable Holds button. I was testing the code on a server with 972 holds on the holds shelf. 872 of the holds are past their shelf-expire dates. A handful of those are also canceled. 100 holds are still not considered clearable.

1. When I click the Clearable holds button, the interface only retrieves the holds that are canceled. The shelf-expired holds should also be displaying.
2. When I click the Clear These Holds button, it proceeds to clear all the clearable holds, even the ones that do not display. However, the progress indicator stops at 771 holds. I don't know if this is a pre-existing issue. Because there is no visible end to the clearing process, the button to return to all holds stays disabled.
3. As far as I can tell, all 872 holds were cleared, even though the visible progress indicator stopped at 771.

I have a screencast showing these steps at https://drive.google.com/file/d/1V-C6Wu3ueszbuPFDPI9ne1lEEDNS18Ff/view

Changed in evergreen:
assignee: Kathy Lussier (klussier) → nobody
Kathy Lussier (klussier) wrote :

I forgot to mention that the three fixes Mike noted in comment #40 appear to work as expected.

Mike Rylander (mrylander) wrote :

Kathy, that looks like a thinko on my part. Testing a fix now...

Mike Rylander (mrylander) wrote :

I've updated the branch with a commit to address that issue.

Mike Rylander (mrylander) wrote :

"That issue" being the clear list not including shelf-expired holds.

Mike Rylander (mrylander) wrote :

I've also added a requested default sort order to each UI: shelf expire time for the shelf interface, and request time for the record holds interface.

Mike Rylander (mrylander) wrote :

Kathy, I was not working against your branch. If you want me to pick those commits to a copy of yours, I can, or you can grab them. Either works for me.


Kathy Lussier (klussier) wrote :

I can work off your branch. It's not a problem. I'll take a look at this later this morning.

Thanks Mike!

Kathy Lussier (klussier) wrote :

I have resolved the hatch.js conflict, signed off on Mike's commits, and picked my previous commit that adds the persist settings to server storage. This last commit will need a signoff before it can be merged.

New working branch at http://git.evergreen-ils.org/?p=working/Evergreen.git;a=shortlog;h=refs/heads/user/kmlussier/lp-1712854-hold_ui_speed_improvements_signoff

To answer some of the earlier questions about how long it takes the interface to load with the new code, I was testing the code with a holds shelf that had 972 holds. When I loaded the interface with paging set to 25 holds per page, it takes nearly 19 seconds to load, which is indeed slower than a comparable server that was loading the same shelf list at 25 holds per page.

However, if I upped the holds per page to 100, it takes the same 19 seconds to load, which is considerably faster than loading the same holds shelf at 100 holds per page on the comparable server.

In addition to the improvements in speed when paging and sorting, where I really saw a difference was when printing out the holds shelf, both through the Print Full List button and the print full grid option. We have a lot of libraries that print out the clearable holds from this grid when pulling items, and this speed improvement will be much appreciated!

Bill Erickson (berick) on 2018-09-04
Changed in evergreen:
assignee: nobody → Bill Erickson (berick)
Bill Erickson (berick) wrote :

Thanks everyone! Did some more basic tests and confirmed the new workstation setting types are accepting values. Merged to master.

Changed in evergreen:
status: Confirmed → Fix Committed
assignee: Bill Erickson (berick) → nobody
Changed in evergreen:
status: Fix Committed → Fix Released
Jennifer Pringle (jpringle-u) wrote :

Is this a fix that can be included in a 3.1 point release or do the changes prevent it from being backported?

Jeff Davis (jdavis-sitka) wrote :

This fix needs to be included in a 3.1 point release, and I understand that it should backport cleanly, so I have targeted it at 3.1.7. Since it is already marked as Fix Released for the 3.2 series I am not sure whether this bug will show up when the 3.1 RM looks for fixes to include.

Dan Wells (dbw2) on 2018-11-20
Changed in evergreen:
assignee: nobody → Dan Wells (dbw2)
Galen Charlton (gmc) on 2019-01-14
Changed in evergreen:
assignee: Dan Wells (dbw2) → nobody
Galen Charlton (gmc) wrote :

Noting that when this is backported to 3.1.x, the fixes for bug 1792621 and bug 1792429 should be backported as well.
