Another potential solution was to use the PostgresTimeoutTracer to prevent runaway requests, since this blocking situation might fall under that category as well. Enabling a 60-second timeout and reproducing the problem left five blocked processes:
postgres 21427 0.0 0.6 42280 5064 ? Ss 19:59 0:00 postgres: cr3 certify_trunk_main [local] UPDATE waiting
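Besides `ps`, the blocked backends can be inspected from within PostgreSQL itself. As a sketch (assuming PostgreSQL 9.6 or later, where the `pg_blocking_pids()` function is available; on older servers the same information requires a manual join against `pg_locks`):

```sql
-- List each backend stuck waiting on a lock, together with the
-- backend(s) holding the lock it is waiting for.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```

Running this while the problem is reproduced should show the five `UPDATE` backends, all pointing at the same blocking pid.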
After the specified timeout elapsed, only one process remained. However, that particular process was the one causing the appserver to stop responding. Note that this solution is only a workaround, because a request blocked on an UPDATE is not necessarily a runaway request: the former is merely waiting on a lock, whereas a runaway request is assumed to be actively doing work for much longer than expected.
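The same distinction can be drawn on the server side with PostgreSQL's own timeout settings, which offer a sketch of a more precise alternative to an application-level tracer (the 60-second value here just mirrors the timeout used above):

```sql
-- statement_timeout aborts any statement that runs longer than the
-- limit, including time spent waiting on a lock -- roughly what the
-- application-level timeout does:
SET statement_timeout = '60s';

-- lock_timeout (PostgreSQL 9.3+) is narrower: it aborts only
-- statements that are stuck waiting for a lock, leaving genuinely
-- long-running (runaway) work untouched:
SET lock_timeout = '60s';
```

With `lock_timeout`, a blocked UPDATE would fail fast with a clear lock-timeout error instead of being lumped together with runaway requests.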