This was probably caused by other actions running and waiting for the cluster or node locks to be released. Previously, if many actions were started for the same cluster or node, they would all queue up waiting for the locks to become available, which slowed the engine down considerably.
We recently changed the API implementation to reject requests when a cluster or node is already locked or has another conflicting action in progress, so this problem should no longer occur.