Improved method for adding new bib records to test dataset
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Evergreen | New | Undecided | Unassigned |
Bug Description
As was discovered during the 2.12 beta release, adding new bibliographic records to Evergreen's sample dataset can shift the assignment of copy IDs in a way that breaks our live tests.
Although we found a short-term solution for the records that were added during the 2.12 release, we need a more robust, long-term solution that allows us to easily add records to the dataset without affecting the live tests that have already been written.
We started a discussion here - http://
Below are some potential solutions that were raised:
- Move away from randomly generating volumes/copies and create a fully hard-coded test set.
- Move away from relying on database IDs in the live tests.
- Skip the marcxml_import staging table (insert directly, as the auth_concerto.sql file does) and place the various "assets" lines after the appropriate "bibs" line in the load_all script.
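To make the third option concrete, here is a minimal sketch of what a direct insert might look like. The table names follow Evergreen's schema, but the column lists are abbreviated and the specific IDs, labels, and barcode are illustrative assumptions, not values from the actual dataset; the point is that hand-assigned IDs above the existing range leave previously assigned copy IDs untouched.

```sql
-- Hypothetical sketch: insert a bib directly (bypassing marcxml_import),
-- then attach its call number and copy with fixed, out-of-range IDs so
-- existing live tests keep their ID assumptions. Column lists are
-- abbreviated; real inserts need the full required columns.
INSERT INTO biblio.record_entry (id, marc, last_xact_id)
VALUES (9001, '<record>...</record>', 'CONCERTO');

INSERT INTO asset.call_number (id, record, owning_lib, creator, editor, label)
VALUES (9001, 9001, 4, 1, 1, 'TEST 9001');

INSERT INTO asset.copy (id, call_number, circ_lib, creator, editor, barcode,
                        loan_duration, fine_level)
VALUES (9001, 9001, 4, 1, 1, 'TESTBARCODE9001', 2, 2);

-- Keep the sequences ahead of the hand-assigned IDs so later
-- sequence-generated rows do not collide:
SELECT setval('biblio.record_entry_id_seq',
              (SELECT MAX(id) FROM biblio.record_entry));
SELECT setval('asset.copy_id_seq', (SELECT MAX(id) FROM asset.copy));
```

A side benefit of this approach is that the asset rows live next to the bib they describe in the load script, so a future contributor adding a record can see exactly which IDs are already claimed.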
I'm opening up this bug so that we can discuss the above options and any other ideas.
Adding a note that sample actor.usr records also need to be part of this discussion, as the same problem occurs when they are added to the test dataset. At this time, only one PgTAP live test appears to depend on this data, but that may change over time.
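For the second option (not relying on database IDs), a PgTAP test could reference a sample patron through a stable natural key instead of a hard-coded actor.usr.id. The sketch below assumes a card barcode as that key; the barcode value and the join through actor.card are illustrative assumptions, not taken from an existing test.

```sql
-- Hypothetical sketch: look the sample user up by card barcode rather
-- than by a hard-coded actor.usr.id, so newly loaded rows cannot shift
-- the expectation. The barcode value is an assumption.
BEGIN;
SELECT plan(1);

SELECT isnt_empty(
    $$ SELECT 1
         FROM actor.usr au
         JOIN actor.card ac ON ac.usr = au.id
        WHERE ac.barcode = '99999303411' $$,
    'sample patron can be found by barcode, not by database id'
);

SELECT * FROM finish();
ROLLBACK;
```

Tests written this way would survive any re-numbering of actor.usr rows caused by future additions to the dataset.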