More accurate copy counts on metarecord search results page
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Evergreen | New | Medium | Unassigned | |
Bug Description
This bug was discovered while working on bug 1629108, which reroutes metarecord searches through the standard search code path. From comment #4 on the bug (with typos cleaned up):
The one area where the numbers don't add up correctly is the copy counts on the metarecord result screen, and the list of copies shown when you click Show More Details on the metarecord result page. The displayed copy counts cover copies attached to all records in the metarecord group, not just the records that match the limiter.
For example, the screenshot at https:/
From what I understand, there is currently nothing in unapi.holdings_xml that filters, probably because records that don't match the filter typically wouldn't appear on the search results page.
Also adding Mike Rylander's suggestion from bug 1629108 for addressing this issue:
Today, the search code returns a field containing the constituent bib if exactly one matches the requirements of the search. If more than one match, that field is null. It's used to facilitate jumping straight from the result screen through to the bib.
I think that to address this we'll probably need to add a field to the result that contains the list of relevant constituent bibs, and then pass that list back somehow to the unapi.mmr_holdings_xml code when getting the XML of the holdings. While that stored procedure is already "special" and there's no real worry about making its arguments unique, the unapi.mmr stored procedure is meant to match the other real-object unapi calls, and its parameters really shouldn't change.
To get that information back to the unapi code, we could probably use a specially formatted string passed via the includes parameter. That would let us avoid changing the function signature, and it should be fairly straightforward to handle on the Perl side. While "includes" is usually used to specify the types of objects we should embed, I think it could be useful to invent a general-purpose encoding for specifying both the type and the specific IDs to include. I'm pretty sure that could be built in a backward-compatible way.
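As a rough illustration of the backward-compatible encoding suggested above, one option is to let a plain entry like `holdings_xml` keep its current meaning, while an entry like `holdings_xml(12,34)` additionally carries the constituent bib IDs to include. The token syntax and the parser below are hypothetical, a minimal sketch in Python rather than the Perl/PL/pgSQL the actual unapi code uses:

```python
import re

def parse_include(token):
    """Parse one entry from the 'includes' list.

    A plain entry such as 'holdings_xml' keeps its legacy meaning:
    include that object type with no ID restriction. An entry such
    as 'holdings_xml(12,34)' also names the specific record IDs to
    include, so existing callers remain unaffected.
    """
    m = re.fullmatch(r'(\w+)\((\d+(?:,\d+)*)\)', token)
    if m:
        obj_type = m.group(1)
        ids = [int(i) for i in m.group(2).split(',')]
        return obj_type, ids
    # No ID list: fall back to the current, unrestricted behavior.
    return token, None
```

With this scheme, `parse_include("holdings_xml(12,34)")` yields `("holdings_xml", [12, 34])`, while `parse_include("holdings_xml")` yields `("holdings_xml", None)`, preserving the old semantics.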