Z39.50 will split the total cap between locations when multiple locations are selected.
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Evergreen | Fix Released | Low | Unassigned |
Bug Description
When multiple locations are selected in Z39.50, the cap is divided among the locations, which can yield fewer results than the cap when some locations have no items.
For example, with the default cap of 10: Location A has 15 items, Location B has 0 items. Results display 5 of 15 (rather than 10 of 15).
If Location A has 15 items and Location B has 1 item, results display 6 of 16 items.
Add a third location, and the cap is divided further:
Location A has 15 items, Location B has 1 item, Location C has 0 items. Results display 5 of 16 items (4 slots were allocated per location).
And so on.
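The behavior in the examples above is consistent with splitting the cap evenly across locations (rounding up) and capping each location's contribution separately. A minimal sketch of that allocation, with illustrative names rather than the actual z3950.js code:

```javascript
// Hypothetical sketch of the per-location cap split described above.
// splitCap is an illustrative name, not a real Evergreen function.
function splitCap(cap, locationCounts) {
    // Each location gets an equal share of the cap, rounded up.
    const perLocation = Math.ceil(cap / locationCounts.length);
    // A location can't contribute more results than it actually has,
    // and unused slots are not redistributed -- hence the lost results.
    return locationCounts.map(n => Math.min(n, perLocation));
}

console.log(splitCap(10, [15, 0]));    // [5, 0] -> 5 of 15 shown
console.log(splitCap(10, [15, 1, 0])); // [4, 1, 0] -> 5 of 16 shown
```

This reproduces the totals in the examples: unused slots from empty locations are simply wasted instead of being given back to locations that still have results.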
I think Z39.50 should be a bit smarter and at least fill the cap, so that if there are 10 or more combined items, it shows 10.
tags: | added: bitesize |
Changed in evergreen: | |
assignee: | Dan Pearl (dpearl) → nobody |
milestone: | none → 2.next |
Changed in evergreen: | |
status: | Fix Committed → Fix Released |
no longer affects: | evergreen/2.5 |
no longer affects: | evergreen/2.6 |
> I think Z39.50 should be a bit smarter and be able to at least fill the
> cap so if there are more than 10 combined items, it should show 10.
For reference, this limit is hard-coded in z3950.js, and the division
between multiple services also happens client-side. To do what Steve
wants efficiently, more of this would probably need to be moved to the
middle layer. I could then imagine the middle layer oversearching
(with an increased limit) each service, and then filling up to the
overall limit by including the excess results from the services that
have some (maybe round-robin style so that one particular service
isn't favored).
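The oversearch-then-fill idea above could be sketched as follows. This is purely illustrative of the round-robin proposal, under the assumption that each service has already been searched with an increased limit; the function name and shapes are made up, not Evergreen middle-layer API:

```javascript
// Hypothetical sketch: fill the overall limit round-robin from each
// service's (oversearched) result list, so no single service is favored.
function fillRoundRobin(limit, resultsPerService) {
    const picked = [];
    // Work on copies so the caller's arrays are untouched.
    const queues = resultsPerService.map(r => r.slice());
    while (picked.length < limit && queues.some(q => q.length > 0)) {
        // Take one result from each service in turn.
        for (const q of queues) {
            if (picked.length >= limit) break;
            if (q.length > 0) picked.push(q.shift());
        }
    }
    return picked;
}
```

With a cap of 10 and services holding 15, 1, and 0 results, this would return the full 10 (alternating between the first two services until the second runs dry), instead of the 5 the current per-location split yields.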
Perhaps in the short term a site should just increase its overall
limit from 10 to something higher, if more results are what they're
looking for. These results include full MARC data, however,
which is part of the reason the number is set low. There may be a
performance impact from going too high, which is why Steve's idea may
be a good one (let the Evergreen server bear the initial net I/O
burden, with the client just getting X results, however distributed).