The timestamp updates cause considerable CPU load. By watching top and cross-referencing pg_stat_activity, I was able to catch PID 51544 in the act (see the query sketch after the log excerpt). This is what that PID was doing:

root@infr002:/var/log/postgresql# grep 51544 postgresql-9.5-ha.log
2019-01-28 01:20:13 UTC [51544-1] maas@maasdb ERROR: could not serialize access due to concurrent update
2019-01-28 01:20:13 UTC [51544-2] maas@maasdb STATEMENT: UPDATE "maasserver_node" SET "updated" = '2019-01-28T01:19:59.158232'::timestamp, "status_expires" = NULL WHERE "maasserver_node"."id" = 1
2019-01-28 01:20:23 UTC [51544-3] maas@maasdb ERROR: could not serialize access due to concurrent update
2019-01-28 01:20:23 UTC [51544-4] maas@maasdb STATEMENT: UPDATE "maasserver_node" SET "updated" = '2019-01-28T01:20:13.954158'::timestamp, "status_expires" = NULL WHERE "maasserver_node"."id" = 1
2019-01-28 01:20:33 UTC [51544-5] maas@maasdb ERROR: could not serialize access due to concurrent update
2019-01-28 01:20:33 UTC [51544-6] maas@maasdb STATEMENT: UPDATE "maasserver_node" SET "updated" = '2019-01-28T01:20:23.316807'::timestamp, "status_expires" = NULL WHERE "maasserver_node"."id" = 1
2019-01-28 01:20:47 UTC [51544-7] maas@maasdb ERROR: could not serialize access due to concurrent update
2019-01-28 01:20:47 UTC [51544-8] maas@maasdb STATEMENT: UPDATE "maasserver_service" SET "updated" = '2019-01-28T01:20:33.513096'::timestamp, "status_info" = '50% connected to region controllers.' WHERE "maasserver_service"."id" = 24
2019-01-28 01:20:55 UTC [51544-9] maas@maasdb ERROR: could not serialize access due to concurrent update
2019-01-28 01:20:55 UTC [51544-10] maas@maasdb STATEMENT: UPDATE "maasserver_node" SET "updated" = '2019-01-28T01:20:47.634756'::timestamp, "status_expires" = NULL WHERE "maasserver_node"."id" = 1
2019-01-28 01:21:43 UTC [51544-11] maas@maasdb ERROR: could not serialize access due to concurrent update
2019-01-28 01:21:43 UTC [51544-12] maas@maasdb STATEMENT: UPDATE "maasserver_service" SET "updated" = '2019-01-28T01:20:56.272271'::timestamp, "status_info" = '33% connected to region controllers.' WHERE "maasserver_service"."id" = 29
2019-01-28 01:23:45 UTC [52492-15] maas@maasdb DETAIL: Process 52492 waits for ShareLock on transaction 107337590; blocked by process 51544. Process 51544 waits for ExclusiveLock on tuple (74745,15) of relation 16829 of database 16385; blocked by process 52486. Process 51544: UPDATE "maasserver_node" SET "managing_process_id" = NULL WHERE "maasserver_node"."id" IN (3)
2019-01-28 01:24:04 UTC [52486-15] maas@maasdb DETAIL: Process 52486 waits for ShareLock on transaction 107337590; blocked by process 51544. Process 51544 waits for ExclusiveLock on tuple (74745,15) of relation 16829 of database 16385; blocked by process 54211. Process 51544: UPDATE "maasserver_node" SET "managing_process_id" = NULL WHERE "maasserver_node"."id" IN (3)
2019-01-28 01:24:30 UTC [52489-18] maas@maasdb DETAIL: Process 52489 waits for ShareLock on transaction 107337590; blocked by process 51544. Process 51544 waits for ShareLock on transaction 107337670; blocked by process 52489. Process 51544: UPDATE "maasserver_node" SET "managing_process_id" = NULL WHERE "maasserver_node"."id" IN (3)
2019-01-28 03:29:53 UTC [39286-2] maas@maasdb STATEMENT: UPDATE "maasserver_notification" SET "created" = '2019-01-28T03:27:32.295353'::timestamp, "updated" = '2019-01-28T03:29:52.515445'::timestamp, "ident" = 'clusters', "user_id" = NULL, "users" = false, "admins" = true, "message" = '3 rack controllers are not yet connected to the region.
Visit the rack controllers page for more information.', "context" = '{}', "category" = 'error' WHERE "maasserver_notification"."id" = 36470
2019-01-28 11:32:40 UTC [32348-6] maas@maasdb STATEMENT: UPDATE "maasserver_node" SET "updated" = '2019-01-28T11:32:32.951544'::timestamp, "status_expires" = NULL WHERE "maasserver_node"."id" = 3
2019-01-29 00:59:25 UTC [51544-1] postgres@[unknown] ERROR: requested WAL segment 000000010000002100000042 has already been removed

There are a number of such workers that beat on the CPU constantly and starve other processes.
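For anyone wanting to reproduce the first step, a minimal way to see what a PID from top is executing is to query pg_stat_activity directly. This is only a sketch using the stock 9.5 columns ("waiting" rather than the 9.6+ "wait_event"); 51544 stands in for whatever PID top reports:

-- Sketch: show what the backend seen in top is currently running (PostgreSQL 9.5 columns).
-- Replace 51544 with the PID reported by top.
SELECT pid, usename, state, waiting, query_start, query
FROM pg_stat_activity
WHERE pid = 51544;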
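The DETAIL lines above show chains of lock waits between these workers. Since 9.5 has no pg_blocking_pids(), the usual way to see who is waiting on whom is a pg_locks self-join; the following is a simplified sketch of that generic pattern, not anything MAAS-specific:

-- Sketch: list ungranted lock waits together with the sessions holding the conflicting locks.
SELECT waiting.pid        AS waiting_pid,
       waiting_act.query  AS waiting_query,
       holder.pid         AS holding_pid,
       holder_act.query   AS holding_query
FROM pg_locks waiting
JOIN pg_stat_activity waiting_act ON waiting_act.pid = waiting.pid
JOIN pg_locks holder
  ON holder.locktype = waiting.locktype
 AND holder.database      IS NOT DISTINCT FROM waiting.database
 AND holder.relation      IS NOT DISTINCT FROM waiting.relation
 AND holder.page          IS NOT DISTINCT FROM waiting.page
 AND holder.tuple         IS NOT DISTINCT FROM waiting.tuple
 AND holder.transactionid IS NOT DISTINCT FROM waiting.transactionid
 AND holder.pid <> waiting.pid
JOIN pg_stat_activity holder_act ON holder_act.pid = holder.pid
WHERE NOT waiting.granted
  AND holder.granted;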