Keystone Token Flush job does not complete in HA deployed environment
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Identity (keystone) | | Medium | Peter Sabaini |
Newton | | Medium | Raildo Mascena de Sousa Filho |
Ocata | | Medium | Raildo Mascena de Sousa Filho |
Ubuntu Cloud Archive | | Undecided | Jorge Niedbalski |
Mitaka | | Medium | Jorge Niedbalski |
Newton | | Medium | Jorge Niedbalski |
Ocata | | Medium | Jorge Niedbalski |
puppet-keystone | | Medium | Juan Antonio Osorio Robles |
tripleo | | Medium | Juan Antonio Osorio Robles |
keystone (Ubuntu) | | High | Jorge Niedbalski |
Xenial | | High | Jorge Niedbalski |
Yakkety | | High | Jorge Niedbalski |
Zesty | | High | Jorge Niedbalski |
Bug Description
[Impact]
* The Keystone token flush job can get into a state where it will never complete, because the transaction size exceeds the MySQL Galera transaction size limit, wsrep_max_ws_size (1073741824).
[Test Case]
1. Authenticate many times
2. Observe that the keystone token flush job runs for a very long time, depending on disk speed (>20 hours in my environment)
3. Observe errors in mysql.log indicating a transaction that is too large
Actual results:
Expired tokens are not actually flushed from the database, and there are no errors in keystone.log; errors appear only in mysql.log.
Expected results:
Expired tokens are removed from the database.
[Additional info]
It is likely that you can demonstrate this with fewer than 1 million tokens: the >1 million-row token table is larger than 13 GiB while the maximum transaction size is 1 GiB, so my token benchmarking Browbeat job creates more tokens than needed.
Once the token flush job cannot complete, the token table will never decrease in size and eventually the cloud will run out of disk space.
Furthermore, the flush job itself consumes significant disk I/O. This was demonstrated on slow disks (a single 7.2K SATA disk); on faster disks you have more capacity to generate tokens, so you can generate enough tokens to exceed the transaction size limit even faster.
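For context, one way to judge how close a deployment is to this failure mode is to compare the configured Galera write-set limit with the expired-token backlog. Below is a minimal sketch (not part of keystone), assuming direct MySQL access via pymysql; the host and credentials are placeholders:

```python
import datetime

import pymysql

# Placeholders: adjust host/credentials for your deployment.
conn = pymysql.connect(host="127.0.0.1", user="keystone",
                       password="secret", database="keystone")
try:
    with conn.cursor() as cur:
        # Galera write-set (transaction) size limit.
        cur.execute("SHOW GLOBAL VARIABLES LIKE 'wsrep_max_ws_size'")
        row = cur.fetchone()
        max_ws = int(row[1]) if row else None

        # Backlog of expired UUID tokens awaiting flush.
        cur.execute("SELECT COUNT(*) FROM token WHERE expires < %s",
                    (datetime.datetime.utcnow(),))
        expired = cur.fetchone()[0]

    print("wsrep_max_ws_size:", max_ws)
    print("expired tokens awaiting flush:", expired)
finally:
    conn.close()
```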
Log evidence:
[root@overcloud
2016-12-08 01:33:40.530 21614 INFO keystone.
2016-12-09 09:31:25.301 14120 INFO keystone.
2016-12-11 01:35:39.082 4223 INFO keystone.
2016-12-12 01:08:16.170 32575 INFO keystone.
2016-12-13 01:22:18.121 28669 INFO keystone.
[root@overcloud
161208 1:33:41 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161208 1:33:41 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161209 9:31:26 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161209 9:31:26 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161211 1:35:39 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161211 1:35:40 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161212 1:08:16 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161212 1:08:17 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161213 1:22:18 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161213 1:22:19 [ERROR] WSREP: rbr write fail, data_len: 0, 2
A graph of the disk utilization issue is attached. In that graph, the entire job runs from the first spike in disk utilization (~05:18 UTC) and culminates in roughly 90 minutes of pegging the disk (between 01:09 UTC and 02:43 UTC).
[Regression Potential]
* Not identified
Related branches
- Drew Freiberger: Approve on 2017-04-26
- Jill Rouleau (community): Needs Information on 2017-04-26
Diff: 52 lines (+46/-0), 1 file modified: bootstack-ops/lp1649616-keystone-manage (+46/-0)
Alex Krzos (akrzos) wrote : | #1 |
Steve Martinelli (stevemar) wrote : | #2 |
Alex Krzos (akrzos) wrote : | #3 |
Hi Steve,
Actually this was not on a production cloud but a performance testing cloud. This bug is likely a duplicate of the one you linked me to. One major difference is that there are no errors in the keystone log file, unlike what the previous bug noted (though there is still an error in the mysql log file).
To answer your questions (in the context of performance testing):
- Is it possible to run the cron job that flushes tokens more frequently?
I suppose we can do this; to benchmark more "deterministically" we should probably clean out the token table prior to each iteration (357 iterations during this test)
- What is preventing you from moving to fernet tokens? They are not persisted.
I am actually running the same benchmarks with Fernet tokens as well.
As you already pointed out, there are several options to address this:
* Increase the frequency of token flushing (at the expense of the resource consumption demanded by token flushes)
* Switch to Fernet tokens (no need for a flush, but different performance characteristics)
* Increase the allowed MySQL transaction size (allow a larger flush to commit)
Depending upon the workload in the cloud, some options might be better than others, I suppose; a rough sketch of the third option follows below.
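As an illustration of the third option only: the Galera write-set limit can usually be raised at runtime, assuming wsrep_max_ws_size is dynamic on the Galera build in use (it caps out around 2 GiB and does not persist across a mysqld restart, so treat this as a recovery stop-gap, not a fix). Hosts and credentials below are placeholders:

```python
import pymysql

# Placeholders: run against each Galera node in turn.
for host in ("controller-0", "controller-1", "controller-2"):
    conn = pymysql.connect(host=host, user="root", password="secret")
    try:
        with conn.cursor() as cur:
            # Raise the write-set limit towards its ~2 GiB ceiling.
            cur.execute("SET GLOBAL wsrep_max_ws_size = 2147483647")
    finally:
        conn.close()
```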
Lance Bragstad (lbragstad) wrote : | #4 |
Alex - is the only difference between this report and https:/
Changed in keystone: | |
importance: | Undecided → Medium |
What should someone do if this is hit in production?
Fernet could be used, but the migration can take some time. What should be done in the meantime?
On the other hand, nobody has hit this in production yet (at least not reported publicly).
Should we mark this as won't fix for now rather than working on a fix that is not useful to anyone right now?
Alex Krzos (akrzos) wrote : | #6 |
Lance,
I reviewed the log from the other bug (1609511). Another question here is why, on that cloud, the error was logged in keystone.log, whereas on my cloud the error showed up in mysql.log but keystone.log was silent. I will re-run on another setup I have to see whether this is still the case, whether I simply missed that keystone does log it, or whether something is configured to silence that error.
Alex Krzos (akrzos) wrote : | #7 |
I reran to confirm on a similar setup, but with a slightly better disk layout (a RAID 1 mirror rather than a single SATA disk), and the results are the same. I reviewed the keystone log files and did find the same stack trace as the other bug, so this is logged correctly with the deployment I am using.
Aside from the tokens never clearing and the job running for almost 24 hours, when this job runs and does not complete, the cloud still pays the same disk-utilization penalty every subsequent day while attempting to remove the expired tokens.
Here is the log output related to the unsuccessful token flush; I will also attach another graph of disk I/O %util on the controller whose database process is handling the requests from the token flush.
2017-01-06 03:43:05.810 1013422 INFO keystone.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
2017-01-06 04:06:23.540 1013422 ERROR oslo_db.
Alex Krzos (akrzos) wrote : | #8 |
Do note that this is at the "end" of the token flush job. The actual job ran for much longer, but this final part pegs the disk while attempting to commit.
tags: | added: canonical-bootstack |
Peter Sabaini (peter-sabaini) wrote : | #9 |
We seem to be hitting this in production.
Mysql log has:
170309 9:06:23 [Warning] WSREP: transaction size limit (1073774592) exceeded: 1073807360
170309 9:06:23 [ERROR] WSREP: rbr write fail, data_len: 0, 2
The leftover GRA log files contain only DELETE statements for keystone.token afaict.
Keystone log contains:
(oslo_db.
Traceback (most recent call last):
File "/usr/lib/
self.
File "/usr/lib/
dbapi_
File "/usr/lib/
self.
File "/usr/lib/
pkt = self._read_packet()
File "/usr/lib/
packet.
File "/usr/lib/
err.
File "/usr/lib/
_check_
File "/usr/lib/
raise InternalError(
InternalError: (1180, u'Got error 5 during COMMIT')
The token table has ~500k entries.
Lance Bragstad (lbragstad) wrote : | #10 |
What if it were possible to give `keystone-manage token_flush` a batch size as an argument? This would give deployments with massive numbers of expired tokens the ability to clean them out in smaller batches.
It's certainly more work for the operator and really only makes sense for recovery situations like this. Thoughts?
Peter Sabaini (peter-sabaini) wrote : | #11 |
Lance, token_flush already batches fine; batching was introduced with the fix for bug #1188378.
The problem, as far as I can tell, is just that the whole flush is done in a single transaction, with the result that it can again break replication.
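Conceptually, the fix that follows from this observation is to commit after every delete batch so that no single Galera write-set has to carry the whole backlog. A minimal sketch of that idea (not the actual keystone patch; `session_factory`, the raw SQL, and the batch size are stand-ins):

```python
import datetime

import sqlalchemy as sa

def flush_expired_tokens_in_batches(session_factory, batch_size=1000):
    """Delete expired rows in small, individually committed transactions."""
    while True:
        session = session_factory()          # e.g. sessionmaker(bind=engine)
        try:
            result = session.execute(
                sa.text("DELETE FROM token WHERE expires < :now LIMIT :n"),
                {"now": datetime.datetime.utcnow(), "n": batch_size},
            )
            deleted = result.rowcount
            session.commit()                 # one small write-set per batch
        finally:
            session.close()
        if deleted < batch_size:
            break                            # backlog drained
```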
Fix proposed to branch: master
Review: https:/
Changed in keystone: | |
assignee: | nobody → Peter Sabaini (peter-sabaini) |
status: | New → In Progress |
Related fix proposed to branch: master
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to tripleo-heat-templates (master) | #14 |
Related fix proposed to branch: master
Review: https:/
Changed in puppet-keystone: | |
assignee: | nobody → Juan Antonio Osorio Robles (juan-osorio-robles) |
Changed in tripleo: | |
assignee: | nobody → Juan Antonio Osorio Robles (juan-osorio-robles) |
Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: master
commit bce06b24ac94393
Author: Juan Antonio Osorio Robles <email address hidden>
Date: Wed Apr 12 10:09:28 2017 +0300
Run token flush cron job twice a day
Running the token flush once a day is seldom for larger deployments.
This changes the frequency to be twice a day instead of once.
Change-Id: Ia0b0fb42231871
Related-Bug: #1649616
Changed in tripleo: | |
milestone: | none → pike-2 |
Changed in puppet-keystone: | |
status: | New → Triaged |
Changed in tripleo: | |
status: | New → Triaged |
Changed in puppet-keystone: | |
importance: | Undecided → Medium |
Changed in tripleo: | |
importance: | Undecided → Medium |
Related fix proposed to branch: master
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Related fix merged to tripleo-heat-templates (master) | #17 |
Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: master
commit 65e643aca2202f0
Author: Juan Antonio Osorio Robles <email address hidden>
Date: Wed Apr 12 14:31:53 2017 +0300
Run token flush cron job hourly by default
Running this job once a day has proven problematic for large
deployments as seen in the bug report. Setting it to run hourly
would be an improvement to the current situation, as the flushes
wouldn't need to process as much data.
Note that this only affects people using UUID as the token provider.
Change-Id: I462e4da2bfdbcb
Related-Bug: #1649616
Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: master
commit f694b5551f89604
Author: Juan Antonio Osorio Robles <email address hidden>
Date: Tue Apr 18 13:13:27 2017 +0300
Change keystone token flush to run hourly
In a recent commit [1] the keystone token flush cron job was changed to
run twice a day. However, this change was not enough for big
deployments.
After getting some customer feedback and looking at what other projects
are doing [2] [3] [4]. It seems that running this job hourly is the way
to go.
[1] Ia0b0fb42231871
[2] https:/
[3] https:/
[4] https:/
Change-Id: I6ec7ec8111bd93
Related-Bug: #1649616
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to puppet-keystone (stable/ocata) | #19 |
Related fix proposed to branch: stable/ocata
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to tripleo-heat-templates (stable/ocata) | #20 |
Related fix proposed to branch: stable/ocata
Review: https:/
Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: master
commit dc7f81083180eeb
Author: Peter Sabaini <email address hidden>
Date: Thu Apr 6 23:06:29 2017 +0200
Make flushing tokens more robust
Commit token flushes between batches in order to lower resource
consumption and make flushing more robust for replication
Change-Id: I9be37e420353a3
Closes-Bug: #1649616
Changed in keystone: | |
status: | In Progress → Fix Released |
Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: stable/ocata
commit 90ffc7f6008370e
Author: Juan Antonio Osorio Robles <email address hidden>
Date: Tue Apr 18 13:13:27 2017 +0300
Change keystone token flush to run hourly
In a recent commit [1] the keystone token flush cron job was changed to
run twice a day. However, this change was not enough for big
deployments.
After getting some customer feedback and looking at what other projects
are doing [2] [3] [4]. It seems that running this job hourly is the way
to go.
[1] Ia0b0fb42231871
[2] https:/
[3] https:/
[4] https:/
Conflicts:
manifests/
spec/
spec/
spec/
spec/
(cherry picked from commit f694b5551f89604
Change-Id: I6ec7ec8111bd93
Related-Bug: #1649616
tags: | added: in-stable-ocata |
OpenStack Infra (hudson-openstack) wrote : Related fix merged to tripleo-heat-templates (stable/ocata) | #23 |
Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: stable/ocata
commit c1fc74c0f3a8ba3
Author: Juan Antonio Osorio Robles <email address hidden>
Date: Wed Apr 12 14:31:53 2017 +0300
Run token flush cron job hourly by default
Running this job once a day has proven problematic for large
deployments as seen in the bug report. Setting it to run hourly
would be an improvement to the current situation, as the flushes
wouldn't need to process as much data.
Note that this only affects people using UUID as the token provider.
Change-Id: I462e4da2bfdbcb
Related-Bug: #1649616
(cherry picked from commit 65e643aca2202f0
tags: | added: sts |
Fix proposed to branch: stable/ocata
Review: https:/
Changed in keystone (Ubuntu Xenial): | |
importance: | Undecided → Medium |
Changed in keystone (Ubuntu Yakkety): | |
importance: | Undecided → Medium |
Changed in keystone (Ubuntu Zesty): | |
importance: | Undecided → Medium |
Changed in keystone (Ubuntu Xenial): | |
status: | New → Triaged |
Changed in keystone (Ubuntu Yakkety): | |
status: | New → Triaged |
Changed in keystone (Ubuntu Zesty): | |
status: | New → Triaged |
Changed in cloud-archive: | |
assignee: | nobody → Jorge Niedbalski (niedbalski) |
status: | New → In Progress |
Changed in keystone (Ubuntu): | |
assignee: | nobody → Jorge Niedbalski (niedbalski) |
importance: | Undecided → High |
status: | New → In Progress |
Changed in keystone (Ubuntu Xenial): | |
assignee: | nobody → Jorge Niedbalski (niedbalski) |
status: | Triaged → In Progress |
Changed in keystone (Ubuntu Yakkety): | |
assignee: | nobody → Jorge Niedbalski (niedbalski) |
importance: | Medium → High |
status: | Triaged → In Progress |
Changed in keystone (Ubuntu Zesty): | |
assignee: | nobody → Jorge Niedbalski (niedbalski) |
importance: | Medium → High |
status: | Triaged → In Progress |
Jorge Niedbalski (niedbalski) wrote : | #25 |
Jorge Niedbalski (niedbalski) wrote : | #26 |
Jorge Niedbalski (niedbalski) wrote : | #27 |
Changed in keystone (Ubuntu Xenial): | |
importance: | Medium → High |
Jorge Niedbalski (niedbalski) wrote : | #28 |
tags: | added: sts-sru-needed |
description: | updated |
James Page (james-page) wrote : | #29 |
Uploaded to xenial-proposed for SRU team review.
James Page (james-page) wrote : | #30 |
Uploaded to yakkety and zesty proposed for SRU team review.
tags: |
added: sts-sru-done removed: sts-sru-needed |
Edward Hope-Morley (hopem) wrote : | #31 |
@slashd this SRU is not finished yet, so I'll reinstate the -needed tag (which we use for tracking purposes)
tags: |
added: sts-sru-needed removed: sts-sru-done |
tags: |
added: sts-sru-needd removed: sts-sru-needed |
tags: |
added: sts-sru-needed removed: sts-sru-needd |
Changed in tripleo: | |
milestone: | pike-2 → pike-3 |
This issue was fixed in the openstack/keystone 12.0.0.0b2 development milestone.
Hello Alex, or anyone else affected,
Accepted keystone into zesty-proposed. The package will build now and be available at https:/
Please help us by testing this new package. See https:/
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-
Further information regarding the verification process can be found at https:/
Changed in keystone (Ubuntu Zesty): | |
status: | In Progress → Fix Committed |
tags: | added: verification-needed |
James Page (james-page) wrote : | #34 |
Hello Alex, or anyone else affected,
Accepted keystone into ocata-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.
Please help us by testing this new package. To enable the -proposed repository:
sudo add-apt-repository cloud-archive:
sudo apt-get update
Your feedback will aid us getting this update out to other Ubuntu users.
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-
Further information regarding the verification process can be found at https:/
tags: | added: verification-ocata-needed |
Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: stable/ocata
commit 6074166b29a0546
Author: Peter Sabaini <email address hidden>
Date: Thu Apr 6 23:06:29 2017 +0200
Make flushing tokens more robust
Commit token flushes between batches in order to lower resource
consumption and make flushing more robust for replication
Change-Id: I9be37e420353a3
Closes-Bug: #1649616
(cherry picked from commit dc7f81083180eeb
As part of a recent change in the Stable Release Update verification policy we would like to inform that for a bug to be considered verified for a given release a verification-
Thank you!
Hello Alex, or anyone else affected,
Accepted keystone into yakkety-proposed. The package will build now and be available at https:/
Please help us by testing this new package. See https:/
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-
Further information regarding the verification process can be found at https:/
Changed in keystone (Ubuntu Yakkety): | |
status: | In Progress → Fix Committed |
tags: | added: verification-needed-yakkety |
Changed in keystone (Ubuntu Xenial): | |
status: | In Progress → Fix Committed |
tags: | added: verification-needed-xenial |
Andy Whitcroft (apw) wrote : | #38 |
Hello Alex, or anyone else affected,
Accepted keystone into xenial-proposed. The package will build now and be available at https:/
Please help us by testing this new package. See https:/
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-
Further information regarding the verification process can be found at https:/
James Page (james-page) wrote : | #39 |
Hello Alex, or anyone else affected,
Accepted keystone into mitaka-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.
Please help us by testing this new package. To enable the -proposed repository:
sudo add-apt-repository cloud-archive:
sudo apt-get update
Your feedback will aid us getting this update out to other Ubuntu users.
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-
Further information regarding the verification process can be found at https:/
tags: | added: verification-mitaka-needed |
James Page (james-page) wrote : | #40 |
Hello Alex, or anyone else affected,
Accepted keystone into newton-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.
Please help us by testing this new package. To enable the -proposed repository:
sudo add-apt-repository cloud-archive:
sudo apt-get update
Your feedback will aid us getting this update out to other Ubuntu users.
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-
Further information regarding the verification process can be found at https:/
tags: | added: verification-newton-needed |
Fix proposed to branch: master
Review: https:/
Chris Halse Rogers (raof) wrote : | #42 |
Is testing for this bug part of the standard test-suite run for the point-release stable update (https:/
If so, please mention it so that this can be released.
If not, please test this so that the point-releases can be released :).
Changed in tripleo: | |
status: | Triaged → In Progress |
Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: master
commit 0b5c5c03ecb6cd2
Author: Raildo Mascena <email address hidden>
Date: Tue Jul 4 14:10:16 2017 -0300
Fixing flushing tokens workflow
During a backport patch [0] for this fix
it was found some problems in the previous
approach like, It didn't enabled back the
session.
create a new session and commit on it instead of
disable/enable autocommit.
After this, we should backport this change in order
to fix the previous releases, instead of the other
one.
[0] https:/
Change-Id: Ifc024ba0e86bb7
Closes-Bug: #1649616
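A rough sketch of the revised approach this commit describes, i.e. giving the flush its own session and committing on it between batches rather than toggling autocommit on the caller's session; `delete_one_batch` is a hypothetical helper and this is not the merged keystone code:

```python
from sqlalchemy.orm import sessionmaker

def flush_with_private_session(engine, delete_one_batch, batch_size=1000):
    """delete_one_batch(session, n) is a hypothetical helper that deletes up
    to n expired rows and returns how many it removed."""
    Session = sessionmaker(bind=engine)
    session = Session()                      # private session for the flush
    try:
        while True:
            deleted = delete_one_batch(session, batch_size)
            session.commit()                 # commit between batches
            if deleted < batch_size:
                break
    finally:
        session.close()                      # caller's session is never touched
```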
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to puppet-keystone (stable/newton) | #44 |
Related fix proposed to branch: stable/newton
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to tripleo-heat-templates (stable/newton) | #45 |
Related fix proposed to branch: stable/newton
Review: https:/
Fix proposed to branch: stable/ocata
Review: https:/
Fix proposed to branch: stable/newton
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Related fix merged to puppet-keystone (stable/newton) | #48 |
Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: stable/newton
commit c1bda5f81e35ec5
Author: Juan Antonio Osorio Robles <email address hidden>
Date: Tue Apr 18 13:13:27 2017 +0300
Change keystone token flush to run hourly
In a recent commit [1] the keystone token flush cron job was changed to
run twice a day. However, this change was not enough for big
deployments.
After getting some customer feedback and looking at what other projects
are doing [2] [3] [4]. It seems that running this job hourly is the way
to go.
[1] Ia0b0fb42231871
[2] https:/
[3] https:/
[4] https:/
Conflicts:
manifests/
spec/
spec/
spec/
spec/
(cherry picked from commit f694b5551f89604
Change-Id: I6ec7ec8111bd93
Related-Bug: #1649616
(cherry picked from commit 90ffc7f6008370e
tags: | added: in-stable-newton |
Jorge Niedbalski (niedbalski) wrote : | #49 |
This is the verification that I ran through:
1) juju run --application keystone "sudo test -f /etc/cron.
2) Commented /etc/cron.
3) Killed all the mysql threads
*) SELECT trx_mysql_thread_id as ID, trx_id, trx_query, trx_started, trx_state FROM information_
*) KILL thread_id;
*) KILL CONNECTION thread_id;
4) Manually run the token_flush command
*) sudo -H -u keystone bash -c "/usr/bin/
After running this, provide me the following details as attachments on this case (please connect to the database using the previously mentioned method).
5) Re-enable the cronjob, and perform the following queries on the database to check
if there were any locks or stuck transactions.
SHOW open tables WHERE In_use > 0;
SELECT * FROM information_
SELECT * FROM information_
SHOW ENGINE innodb status;
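For reference, the lock/stuck-transaction checks in step 5 could be scripted roughly as follows, assuming direct MySQL access via pymysql; the connection details are placeholders and the queries are the standard information_schema/SHOW statements, not a transcript of the exact ones used:

```python
import pymysql

# Placeholders: adjust host/credentials; 'keystone' is the schema holding
# the token table.
conn = pymysql.connect(host="127.0.0.1", user="root", password="secret")
try:
    with conn.cursor() as cur:
        cur.execute("SHOW OPEN TABLES FROM keystone WHERE In_use > 0")
        print("tables in use:", cur.fetchall())

        cur.execute(
            "SELECT trx_mysql_thread_id, trx_id, trx_query, trx_started, trx_state "
            "FROM information_schema.innodb_trx"
        )
        print("open transactions:", cur.fetchall())

        cur.execute("SHOW ENGINE INNODB STATUS")
        status = cur.fetchone()
        print(status[2][:2000])              # third column holds the status text
finally:
    conn.close()
```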
Jorge Niedbalski (niedbalski) wrote : | #50 |
For the case of xenial/mitaka the results for 5) http://
which indeed shows the problem as solved
Marking the verification as done on X/M.
tags: |
added: verification-done-xenial verification-mitaka-done removed: verification-mitaka-needed verification-needed-xenial |
tags: | added: verification-needed-zesty |
Jorge Niedbalski (niedbalski) wrote : | #51 |
Same results for yakkety/newton for 5) http://
Marking the verification as done for Y/N
tags: |
added: verification-done-yakkety verification-newton-done removed: verification-needed-yakkety verification-newton-needed |
Jorge Niedbalski (niedbalski) wrote : | #52 |
Same results for ocata (http://
tags: |
added: verification-done verification-done-zesty verification-ocata-done removed: verification-needed verification-needed-zesty verification-ocata-needed |
Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: stable/ocata
commit 83fccfaf8dc2ae2
Author: Raildo Mascena <email address hidden>
Date: Tue Jul 4 14:10:16 2017 -0300
Fixing flushing tokens workflow
During a backport patch [0] for this fix
it was found some problems in the previous
approach like, It didn't enabled back the
session.
create a new session and commit on it instead of
disable/enable autocommit.
After this, we should backport this change in order
to fix the previous releases, instead of the other
one.
[0] https:/
Change-Id: Ifc024ba0e86bb7
Closes-Bug: #1649616
(cherry picked from commit 0b5c5c03ecb6cd2
Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: stable/newton
commit 058ea4262459be9
Author: Raildo Mascena <email address hidden>
Date: Tue Jul 4 14:10:16 2017 -0300
Fixing flushing tokens workflow
During a backport patch [0] for this fix
it was found some problems in the previous
approach like, It didn't enabled back the
session.
create a new session and commit on it instead of
disable/enable autocommit.
After this, we should backport this change in order
to fix the previous releases, instead of the other
one.
[0] https:/
Change-Id: Ifc024ba0e86bb7
Closes-Bug: #1649616
(cherry picked from commit 0b5c5c03ecb6cd2
Change abandoned by Steve Martinelli (<email address hidden>) on branch: stable/newton
Review: https:/
Reason: see referred patch in previous comment
The verification of the Stable Release Update for keystone has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.
Launchpad Janitor (janitor) wrote : | #57 |
This bug was fixed in the package keystone - 2:11.0.2-0ubuntu1
---------------
keystone (2:11.0.2-0ubuntu1) zesty; urgency=medium
[ Jorge Niedbalski ]
* d/p/0001-
between batches in order to lower resource consumption and
make flushing more robust for replication (LP: #1649616).
[ James Page ]
* New upstream stable release for OpenStack Ocata (LP: #1696139).
-- James Page <email address hidden> Wed, 07 Jun 2017 16:01:45 +0100
Changed in keystone (Ubuntu Zesty): | |
status: | Fix Committed → Fix Released |
Launchpad Janitor (janitor) wrote : | #58 |
This bug was fixed in the package keystone - 2:10.0.1-0ubuntu2
---------------
keystone (2:10.0.1-0ubuntu2) yakkety; urgency=high
* d/p/0001-
between batches in order to lower resource consumption and
make flushing more robust for replication (LP: #1649616).
-- Jorge Niedbalski <email address hidden> Wed, 07 Jun 2017 13:07:50 +0100
Changed in keystone (Ubuntu Yakkety): | |
status: | Fix Committed → Fix Released |
Launchpad Janitor (janitor) wrote : | #59 |
This bug was fixed in the package keystone - 2:9.3.0-0ubuntu2
---------------
keystone (2:9.3.0-0ubuntu2) xenial; urgency=high
* d/p/0001-
token flushes between batches in order to lower resource
consumption and make flushing more robust for replication
(LP: #1649616).
-- Jorge Niedbalski <email address hidden> Wed, 07 Jun 2017 10:33:50 +0100
Changed in keystone (Ubuntu Xenial): | |
status: | Fix Committed → Fix Released |
James Page (james-page) wrote : | #60 |
The verification of the Stable Release Update for keystone has completed successfully and the package has now been released to -updates. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.
James Page (james-page) wrote : | #61 |
This bug was fixed in the package keystone - 2:11.0.
---------------
keystone (2:11.0.
.
* New upstream release for the Ubuntu Cloud Archive.
.
keystone (2:11.0.2-0ubuntu1) zesty; urgency=medium
.
[ Jorge Niedbalski ]
* d/p/0001-
between batches in order to lower resource consumption and
make flushing more robust for replication (LP: #1649616).
.
[ James Page ]
* New upstream stable release for OpenStack Ocata (LP: #1696139).
OpenStack Infra (hudson-openstack) wrote : Related fix merged to tripleo-heat-templates (stable/newton) | #62 |
Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: stable/newton
commit 6299a169b4c17c4
Author: Harry Rybacki <email address hidden>
Date: Wed Jul 12 13:25:30 2017 +0000
Run token flush cron job hourly by default
Running this job once a day has proven problematic for large
deployments as seen in the bug report. Setting it to run hourly
would be an improvement to the current situation, as the flushes
wouldn't need to process as much data.
Note that this only affects people using UUID as the token provider.
Change-Id: I462e4da2bfdbcb
Related-Bug: #1649616
(cherry picked from commit 65e643aca2202f0
The verification of the Stable Release Update for keystone has completed successfully and the package has now been released to -updates. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.
James Page (james-page) wrote : | #64 |
This bug was fixed in the package keystone - 2:9.3.0-
---------------
keystone (2:9.3.
.
* New update for the Ubuntu Cloud Archive.
.
keystone (2:9.3.0-0ubuntu2) xenial; urgency=high
.
* d/p/0001-
token flushes between batches in order to lower resource
consumption and make flushing more robust for replication
(LP: #1649616).
James Page (james-page) wrote : | #65 |
The verification of the Stable Release Update for keystone has completed successfully and the package has now been released to -updates. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.
James Page (james-page) wrote : | #66 |
This bug was fixed in the package keystone - 2:10.0.
---------------
keystone (2:10.0.
.
* New update for the Ubuntu Cloud Archive.
.
keystone (2:10.0.1-0ubuntu2) yakkety; urgency=high
.
* d/p/0001-
between batches in order to lower resource consumption and
make flushing more robust for replication (LP: #1649616).
This issue was fixed in the openstack/keystone 11.0.3 release.
This issue was fixed in the openstack/keystone 10.0.3 release.
This issue was fixed in the openstack/keystone 12.0.0.0b3 development milestone.
Changed in tripleo: | |
milestone: | pike-3 → pike-rc1 |
Changed in tripleo: | |
milestone: | pike-rc1 → queens-1 |
Changed in tripleo: | |
status: | In Progress → Fix Released |
tags: |
added: sts-sru-done removed: sts-sru-needed |
Changed in keystone (Ubuntu): | |
status: | In Progress → Invalid |
Changed in cloud-archive: | |
status: | In Progress → Invalid |
Changed in puppet-keystone: | |
status: | Triaged → Fix Released |
Morgan Fainberg (mdrnstm) wrote : | #71 |
newton is EOL
Hi Alex,
So someone finally hit bug https://bugs.launchpad.net/keystone/+bug/1609511 in a production environment.
A few questions so I can gauge the severity properly.
- Is it possible to run the cron job that flushes tokens more frequently?
- What is preventing you from moving to fernet tokens? They are not persisted.
Lastly, Sam Leong had a patch for this issue; it's available here: https://review.openstack.org/#/c/351428/
Unfortunately, Sam is not working on keystone these days.