Z39.50 splits the total result cap between locations when multiple locations are selected.

Bug #827442 reported by Steve Callender on 2011-08-16
This bug affects 2 people

Bug Description

When multiple locations are selected in Z39.50, each location receives an equal allocation of the cap, which can cause fewer results to display when some locations have 0 items.

For example, with the default cap of 10: Location A has 15 items, Location B has 0 items. Results display 5 of 15 (rather than 10 of 15).

If Location A has 15 items and Location B has 1 item, results display 6 of 16 items.

Add in a third location, and the cap is divided up further.

Location A has 15 items, Location B has 1 item, Location C has 0 items. Results display 5 of 16 items. (It allocated 4 slots per location.)

And so on.
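The per-location split the examples above describe can be sketched roughly as follows (`splitCap` is a hypothetical helper for illustration, not the actual z3950.js code):

```javascript
// Hypothetical sketch of the per-location cap split described in the
// examples above; this is NOT the actual z3950.js implementation.
function splitCap(cap, locationCounts) {
  // Each selected location gets an equal share of the cap, regardless
  // of how many items it actually holds.
  const share = Math.ceil(cap / locationCounts.length);
  return locationCounts.map(count => Math.min(share, count));
}

// Cap of 10; Location A has 15 items, Location B has 0:
console.log(splitCap(10, [15, 0]));    // → [5, 0]  (5 of 15 display)
// Add an empty Location C and the shares shrink further:
console.log(splitCap(10, [15, 1, 0])); // → [4, 1, 0]  (5 of 16 display)
```

Because empty locations still consume their share, the combined total can fall well below the cap even when one location alone could fill it.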

I think Z39.50 should be a bit smarter and be able to at least fill the cap so if there are more than 10 combined items, it should show 10.

> I think Z39.50 should be a bit smarter and be able to at least fill the
> cap so if there are more than 10 combined items, it should show 10.

For reference, this limit is hard-coded in z3950.js, and the division
between multiple services also happens client-side. To do what Steve
wants efficiently, more of this would probably need to be moved to the
middle layer. I could then imagine the middle layer oversearching
(with an increased limit) each service, and then filling up to the
overall limit by including the excess results from the services that
have some (maybe round-robin style so that one particular service
isn't favored).
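A minimal sketch of that oversearch-and-fill idea, assuming each service has already been queried with an inflated limit (all names here are hypothetical, not Evergreen middle-layer API):

```javascript
// Hypothetical round-robin fill: given per-service result arrays
// (each fetched with an oversized limit), take one result from each
// service in turn until the overall cap is met or every service is
// exhausted, so no single service is favored.
function fillRoundRobin(resultsPerService, overallLimit) {
  const picked = [];
  let round = 0;
  while (picked.length < overallLimit) {
    let tookAny = false;
    for (const results of resultsPerService) {
      if (round < results.length && picked.length < overallLimit) {
        picked.push(results[round]);
        tookAny = true;
      }
    }
    if (!tookAny) break; // every service exhausted
    round++;
  }
  return picked;
}
```

With a cap of 10 and services holding 15, 1, and 0 hits, the cap is still filled: empty services simply contribute nothing and the surplus comes from the service that has results.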

Perhaps in the short term a site should just increase its overall
limit from 10 to something higher, if it's more results they're
looking for. These results include full MARC data, however,
which is part of the reason why the number is set low. There may be a
performance impact from going too high, which is why Steve's idea may
be a good one (let the Evergreen server bear the initial net I/O
burden, with the client just getting X results however distributed).

Michael Peters (mrpeters) wrote :

Tested and confirmed in master (2506f44).

Changed in evergreen:
status: New → Confirmed
importance: Undecided → Low
Galen Charlton (gmc) on 2013-03-29
tags: added: bitesize
Dan Pearl (dpearl) wrote :

I have a fix for this which is being tested in my consortium before submission. Work on this also revealed some other anomalies with respect to proper display of the records; I have addressed these as well.

Changed in evergreen:
assignee: nobody → Dan Pearl (dpearl)
Dan Pearl (dpearl) wrote :

The attached branch addresses this problem. I used the oversampling method that Jason suggested as a possibility.

tags: added: pullrequest
Tim Spindler (tspindler-cwmars) wrote :

We have tested Dan Pearl's code on our training server with production data and it has been performing as designed.

Kathy Lussier (klussier) on 2014-04-08
Changed in evergreen:
assignee: Dan Pearl (dpearl) → nobody
milestone: none → 2.next
Jason Stephenson (jstephenson) wrote :

I had a look at Dan's code yesterday. It doesn't seem to do anything wrong to me, but it does change things.

Without his code, I actually got 12 results on the first go for Harry Potter when searching three targets. When I hit load more results, the software said it was displaying 23, but I really only had 22.

With Dan's code, I was getting 10 and then 10 more with the above. What Z39.50 said was displaying and what actually displayed were now in sync.

It also split the ten in a reasonable manner: 3-3-4 across my three targets. It always loaded 4 from the same target in that scenario.

Trying with more targets, results were similar with an even split, when possible.
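The 3-3-4 spread described above is what you get from splitting the limit as evenly as possible and giving the remainder to a fixed subset of targets; a hypothetical sketch (not the actual patch code):

```javascript
// Hypothetical even split of a limit across N targets; the remainder
// always lands on the same leading targets, which would explain why
// one target consistently loaded 4 in the 3-3-4 scenario.
function evenSplit(limit, numTargets) {
  const base = Math.floor(limit / numTargets);
  const extra = limit % numTargets;
  return Array.from({length: numTargets},
                    (_, i) => base + (i < extra ? 1 : 0));
}

console.log(evenSplit(10, 3)); // → [4, 3, 3]
```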

I'm not sure if this code actually fixes the reported problem, however. I could not find any searches displaying the characteristics as reported; I always got either hundreds of results or fewer than 10.

Jason Stephenson (jstephenson) wrote :

After talking it over with Ben Shum in IRC, we decided to push this branch to master. It does look like it makes the spread of copies from different targets more reasonable and it also fixes issues with the number of records retrieved and displayed.

I leave the question of backporting to 2.6 and/or 2.5 to the respective branch maintainers.

Changed in evergreen:
status: Confirmed → Fix Committed
milestone: 2.next → 2.7.0-alpha1
Changed in evergreen:
status: Fix Committed → Fix Released
no longer affects: evergreen/2.5
no longer affects: evergreen/2.6