keystone.token.backends.sql uses a single delete command to flush expired tokens, causing replication lag and potential deadlocks
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Identity (keystone) | Fix Released | Medium | Clayton O'Neill | 2014.2
Bug Description
    def flush_expired_tokens(self):
        session = self.get_session()
        query = session.query(TokenModel)
        query = query.filter(TokenModel.expires < timeutils.utcnow())
        query.delete(synchronize_session=False)
This will basically result in this command being run (for MySQL anyway):
DELETE FROM token WHERE expires < '2013-06-05 21:02:00';
If there are millions of rows to delete, this single statement will take a long time to run, which in turn translates into replication lag on any active replication slave.
Also, because keystone tokens are effectively random 64-byte strings, deleting them leaves gaps in the InnoDB key storage. To preserve transactional integrity while deleting these rows, InnoDB has to lock those gaps, so inserts of new tokens that fall into the gaps must wait for the entire delete to finish.
A much more healthy approach is to walk through the table deleting in small batches:
    q = "DELETE FROM token WHERE expires < :expires LIMIT 1000"
    while session.execute(q, {'expires': timeutils.utcnow()}).rowcount:
        pass
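The batching idea above can be demonstrated end to end with a minimal, self-contained sketch. This is not keystone's actual SQLAlchemy backend; it uses Python's stdlib sqlite3, an illustrative `token` table, and an assumed batch size of 1000, and it batches by selecting primary keys rather than `DELETE ... LIMIT` (which sqlite builds often lack). The point it shows is the same: many short delete transactions instead of one huge one.

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical stand-in for keystone's token table (schema is illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE token (id TEXT PRIMARY KEY, expires TEXT)")

now = datetime(2013, 6, 5, 21, 2, 0)
expired = [(f"tok{i:05d}", (now - timedelta(hours=1)).isoformat())
           for i in range(2500)]
live = [(f"live{i:05d}", (now + timedelta(hours=1)).isoformat())
        for i in range(10)]
conn.executemany("INSERT INTO token VALUES (?, ?)", expired + live)
conn.commit()

def flush_expired_tokens(conn, now, batch=1000):
    """Delete expired tokens in small batches instead of one big DELETE.

    Each iteration deletes at most `batch` rows and commits, so every
    transaction stays short: less replication lag, smaller lock footprint.
    """
    total = 0
    while True:
        ids = [row[0] for row in conn.execute(
            "SELECT id FROM token WHERE expires < ? LIMIT ?",
            (now.isoformat(), batch)).fetchall()]
        if not ids:
            break
        conn.executemany("DELETE FROM token WHERE id = ?",
                         [(i,) for i in ids])
        conn.commit()  # commit per batch, not once at the very end
        total += len(ids)
    return total

deleted = flush_expired_tokens(conn, now)
remaining = conn.execute("SELECT COUNT(*) FROM token").fetchone()[0]
print(deleted, remaining)  # 2500 10
```

Committing after each batch is the design choice that matters here: a replication slave replays each small delete quickly instead of stalling on one multi-minute statement, and InnoDB gap locks are held only briefly.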
Changed in keystone:
  importance: Undecided → Medium
  status: New → Triaged

Changed in keystone:
  assignee: nobody → Clayton O'Neill (clayton-oneill)
  status: Triaged → In Progress

Changed in keystone:
  milestone: none → juno-3
  status: Fix Committed → Fix Released

Changed in keystone:
  milestone: juno-3 → 2014.2
Fix proposed to branch: master
Review: https://review.openstack.org/32044