Comment 0 for bug 1271661

Galen Charlton (gmc) wrote:

Attempting to load a large number of bib records into a Vandelay import queue can stall and fail, with no records getting staged into the import queue. PostgreSQL errors of the following sort are associated with this failure:

[4272-1] WARNING: out of shared memory
[4272-2] CONTEXT: SQL statement "DROP TABLE _vandelay_tmp_jrows"
[4272-3] #011PL/pgSQL function vandelay.match_set_test_marcxml(integer,text,integer) line 55 at SQL statement
[4272-4] #011PL/pgSQL function vandelay.match_bib_record() line 46 at FOR over SELECT rows
[4273-1] ERROR: out of shared memory
[4273-2] HINT: You might need to increase max_locks_per_transaction.
[4273-3] CONTEXT: SQL statement "DROP TABLE _vandelay_tmp_jrows"
[4273-4] #011PL/pgSQL function vandelay.match_set_test_marcxml(integer,text,integer) line 55 at SQL statement
[4273-5] #011PL/pgSQL function vandelay.match_bib_record() line 46 at FOR over SELECT rows
[4273-6] STATEMENT: INSERT INTO vandelay.queued_bib_record (id,create_time,import_time,marc,queue,bib_source,imported_as,import_error,error_detail,purpose,quality) VALUES (DEFAULT,DEFAULT,DEFAULT,'<record xmlns:xsi="http://www.w3.o

My diagnosis is that one or more locks are consumed each time the _vandelay_tmp_jrows and _vandelay_tmp_qrows temporary tables are created or dropped in the course of record matching by the vandelay.match_bib_record() trigger function. With a large enough batch of records loaded in one transaction, Pg eventually runs out of space for tracking table locks.
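
If that's right, the lock growth should be observable from a second session while a batch is being staged, just by watching the importing backend's entries in the standard pg_locks view (the pid below is a placeholder; the real one can be looked up in pg_stat_activity):

  -- count the lock table entries held by the backend doing the import;
  -- replace 12345 with that backend's pid from pg_stat_activity
  SELECT locktype, mode, count(*)
    FROM pg_locks
   WHERE pid = 12345
   GROUP BY locktype, mode
   ORDER BY count(*) DESC;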

Here is the part of the Pg documentation that describes the settings governing how many locks can be tracked:

"The shared lock table tracks locks on max_locks_per_transaction * (max_connections + max_prepared_transactions) objects(e.g., tables); hence, no more than this many distinct objects can be locked at any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions can lock more objects as long as the locks of all transactions fit in the lock table..."

Possible solutions:

- use a mechanism other than temporary tables to pass the jrow and qrow data around (e.g., rewrite the affected functions in PL/Perl and use the %_SHARED hash; see the rough sketch after this list)
- decrease the number of tables touched in each transaction by committing after processing each record in the spool, and write application-side logic to handle failures of individual records
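
To make the first option a bit more concrete, here is a rough sketch of the %_SHARED approach (the helper names and the json payload are invented for illustration, not a proposed patch): one helper pushes match rows into the per-session hash, another hands them back, and no temporary table ever gets created or dropped.

  -- hypothetical helpers, not part of the current Vandelay schema
  CREATE OR REPLACE FUNCTION vandelay.stash_jrow (key TEXT, jrow JSON) RETURNS VOID AS $$
      # accumulate rows in the per-session %_SHARED hash instead of _vandelay_tmp_jrows
      push @{ $_SHARED{jrows}{ $_[0] } }, $_[1];
      return;
  $$ LANGUAGE plperl;

  CREATE OR REPLACE FUNCTION vandelay.fetch_jrows (key TEXT) RETURNS SETOF JSON AS $$
      # hand back whatever was accumulated for this key, then clear it out
      my $rows = delete $_SHARED{jrows}{ $_[0] } || [];
      return_next($_) for @$rows;
      return undef;
  $$ LANGUAGE plperl;

Since %_SHARED lives for the life of the database connection, the key would need to identify the queue (or record) being processed so that pooled connections don't trip over each other's leftovers.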

Evergreen: 2.5.1
PostgreSQL: 9.3