token_flush can hang if lots of tokens
Bug #1387401 reported by Brant Knudson
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Identity (keystone) | Fix Released | Medium | Brant Knudson |
Juno | Fix Released | Medium | Brant Knudson |
Bug Description
If a system generates tokens quickly enough, token_flush can hang. For DB2 this happens when more than 100 tokens are created within the same second (for MySQL, more than 1000 tokens in a second). The query that picks the cutoff time for each batch returns the 100th timestamp, which is the same as the minimum timestamp; the subsequent delete of rows strictly earlier than that cutoff matches nothing, so no rows are removed, and the function loops forever because it keeps returning the same minimum timestamp.
This could be fixed easily by using <= rather than < for the deletion comparison.
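The looping behavior described above can be illustrated with a minimal sketch of the batched-flush logic (hypothetical function and parameter names; keystone's real implementation issues SQL deletes against the token table rather than filtering a list):

```python
def flush_expired(timestamps, batch_size=100):
    """Delete expired-token timestamps in batches of ``batch_size``.

    ``timestamps`` is a sorted list of token expiry times.
    Returns the number of delete passes performed.
    """
    passes = 0
    while timestamps:
        # Cutoff for this batch: the timestamp of the batch_size-th
        # oldest token (or the newest one, if fewer remain).
        cutoff = timestamps[min(batch_size, len(timestamps)) - 1]
        # The buggy version used a strict comparison (t < cutoff).
        # When batch_size or more tokens share the same timestamp,
        # cutoff equals the minimum timestamp, nothing matches, and
        # the loop never makes progress. Using <= deletes the whole
        # batch and guarantees termination.
        timestamps = [t for t in timestamps if not (t <= cutoff)]
        passes += 1
    return passes
```

With 150 tokens all sharing one timestamp and a batch size of 100, the strict `<` version would spin forever, while the `<=` version clears them in a single pass.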
Changed in keystone:
assignee: nobody → Brant Knudson (blk-u)
tags: added: juno-backport-potential

Changed in keystone:
importance: Undecided → Medium
tags: removed: juno-backport-potential

Changed in keystone:
milestone: none → kilo-1

Changed in keystone:
status: Fix Committed → Fix Released

Changed in keystone:
milestone: kilo-1 → 2015.1.0
Fix proposed to branch: master
Review: https://review.openstack.org/131899