Activity log for bug #1987355

Date Who What changed Old value New value Message
2022-08-22 23:57:00 Jorge Merlino bug added bug
2022-08-23 17:16:35 Jorge Merlino tags sts
2022-09-09 12:42:41 Jorge Merlino description

  Old value:

  I found this issue when nova calls cinder with an expired X-Auth-Token but it is configured to also send an X-Service-Token. The traffic goes like this:

    nova-compute -> cinder: POST with X-Auth-Token and X-Service-Token
    cinder -> keystone: validate X-Auth-Token
    keystone -> cinder: returns 404
    cinder -> nova-compute: returns 401
    nova-compute -> cinder: retries the POST with a new X-Service-Token
    cinder -> keystone: validate X-Service-Token
    keystone -> cinder: returns 200, showing that the token is valid
    cinder -> nova-compute: returns 401

  As I understand it, Cinder should return 200 in the last message because the token is valid. I traced this problem to lines 671-675 in keystonemiddleware/auth_token/__init__.py. In this case the user_token is invalid but the service_token is valid, so the if statement in line 675 is true and a 401 is returned.

  I also found out that Cinder expects the user_token to always be valid as well, but I don't know whether that is an independent bug or things are simply not expected to work the way I want them to.

  My test client is a long-running service that uses the same token to communicate with nova until it receives a 401 and then generates a new one. Sometimes the token is invalidated in the middle of a transaction and nova returns 200 to the client but cinder returns 401 to nova.

  New value: the same text, with the following added at the end:

  I have managed to reproduce this both on ussuri and yoga (the code I mentioned has not been changed in 7 years). It happens whether cinder itself has the service tokens enabled or not.
2022-10-05 17:42:33 Jorge Merlino description

  Old value: the description as of 2022-09-09 above (ending with "It happens whether cinder itself has the service tokens enabled or not.").

  New value: the same text with three parts removed: the paragraph tracing the problem to lines 671-675 of keystonemiddleware/auth_token/__init__.py, the paragraph about Cinder expecting the user_token to always be valid, and the final sentence "It happens whether cinder itself has the service tokens enabled or not."
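The descriptions above trace the failure to the user-token check around lines 671-675 of keystonemiddleware/auth_token/__init__.py. As a minimal, self-contained sketch (hypothetical names and logic, not the actual keystonemiddleware source), the snippet below illustrates the decision being described: the user token is validated first, and a failed validation short-circuits to 401 even when the accompanying X-Service-Token would validate successfully.

    # Illustrative sketch only -- hypothetical names, NOT the real keystonemiddleware code.
    class InvalidToken(Exception):
        """Stand-in for the middleware's invalid-token error."""


    def validate(token, valid_tokens):
        """Pretend Keystone validation: known token -> data, unknown token -> InvalidToken (404)."""
        if token not in valid_tokens:
            raise InvalidToken(token)
        return {"token": token}


    def handle_request(user_token, service_token, valid_tokens):
        """Return the HTTP status cinder would send back to nova-compute."""
        try:
            validate(user_token, valid_tokens)
        except InvalidToken:
            # The branch the report points at: an expired user token rejects the
            # request outright, so the valid service token never compensates.
            return 401
        try:
            validate(service_token, valid_tokens)
        except InvalidToken:
            return 401
        return 200


    if __name__ == "__main__":
        # Expired user token + fresh, valid service token -> 401 (the reported behaviour).
        print(handle_request("expired-user-token", "fresh-service-token",
                             valid_tokens={"fresh-service-token"}))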
2022-10-05 17:43:41 OpenStack Infra keystonemiddleware: status New -> In Progress
2022-10-05 18:39:59 Jorge Merlino keystonemiddleware: assignee Jorge Merlino (jorge-merlino)
2022-11-25 09:44:20 Pawel Kubica bug added subscriber Pawel Kubica
2022-11-25 09:56:28 Wojciech bug added subscriber Wojciech
2022-12-19 13:41:38 OpenStack Infra keystonemiddleware: status In Progress -> Fix Released
2023-01-27 16:11:43 OpenStack Infra tags sts -> in-stable-zed sts
2023-02-15 16:06:12 OpenStack Infra tags in-stable-zed sts -> in-stable-yoga in-stable-zed sts
2023-02-28 17:10:57 OpenStack Infra tags in-stable-yoga in-stable-zed sts -> in-stable-xena in-stable-yoga in-stable-zed sts
2023-03-01 10:00:35 Chris Valean bug added subscriber Chris Valean
2023-03-02 09:00:23 Gheza bug added subscriber Gheza
2023-06-06 16:57:51 OpenStack Infra tags in-stable-xena in-stable-yoga in-stable-zed sts -> in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts
2023-06-20 17:10:00 OpenStack Infra tags in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts -> in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts
2023-08-04 16:11:01 OpenStack Infra tags in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts -> in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts
2023-08-08 14:16:37 Edward Hope-Morley bug task added python-keystonemiddleware (Ubuntu)
2023-08-08 14:17:08 Edward Hope-Morley nominated for series Ubuntu Jammy
2023-08-08 14:17:08 Edward Hope-Morley bug task added python-keystonemiddleware (Ubuntu Jammy)
2023-08-08 14:17:08 Edward Hope-Morley nominated for series Ubuntu Mantic
2023-08-08 14:17:08 Edward Hope-Morley bug task added python-keystonemiddleware (Ubuntu Mantic)
2023-08-08 14:17:08 Edward Hope-Morley nominated for series Ubuntu Lunar
2023-08-08 14:17:08 Edward Hope-Morley bug task added python-keystonemiddleware (Ubuntu Lunar)
2023-08-08 14:17:08 Edward Hope-Morley nominated for series Ubuntu Focal
2023-08-08 14:17:08 Edward Hope-Morley bug task added python-keystonemiddleware (Ubuntu Focal)
2023-08-08 14:17:19 Edward Hope-Morley bug task added cloud-archive
2023-08-08 14:17:50 Edward Hope-Morley nominated for series cloud-archive/yoga
2023-08-08 14:17:50 Edward Hope-Morley bug task added cloud-archive/yoga
2023-08-08 14:17:50 Edward Hope-Morley nominated for series cloud-archive/wallaby
2023-08-08 14:17:50 Edward Hope-Morley bug task added cloud-archive/wallaby
2023-08-08 14:17:50 Edward Hope-Morley nominated for series cloud-archive/victoria
2023-08-08 14:17:50 Edward Hope-Morley bug task added cloud-archive/victoria
2023-08-08 14:17:50 Edward Hope-Morley nominated for series cloud-archive/zed
2023-08-08 14:17:50 Edward Hope-Morley bug task added cloud-archive/zed
2023-08-08 14:17:50 Edward Hope-Morley nominated for series cloud-archive/bobcat
2023-08-08 14:17:50 Edward Hope-Morley bug task added cloud-archive/bobcat
2023-08-08 14:17:50 Edward Hope-Morley nominated for series cloud-archive/antelope
2023-08-08 14:17:50 Edward Hope-Morley bug task added cloud-archive/antelope
2023-08-08 14:17:50 Edward Hope-Morley nominated for series cloud-archive/ussuri
2023-08-08 14:17:50 Edward Hope-Morley bug task added cloud-archive/ussuri
2023-08-08 14:17:50 Edward Hope-Morley nominated for series cloud-archive/xena
2023-08-08 14:17:50 Edward Hope-Morley bug task added cloud-archive/xena
2023-08-08 14:18:04 Edward Hope-Morley cloud-archive/bobcat: status New -> Fix Released
2023-08-08 14:18:17 Edward Hope-Morley python-keystonemiddleware (Ubuntu Mantic): status New -> Fix Released
2023-08-08 14:20:39 Edward Hope-Morley python-keystonemiddleware (Ubuntu Lunar): status New -> Fix Released
2023-08-08 14:20:50 Edward Hope-Morley cloud-archive/antelope: status New -> Fix Released
2023-08-14 20:15:46 Jorge Merlino description

  Old value: the description as of 2022-10-05 above.

  New value:

  [Impact]

  This bug can cause a race condition for long-running services that reuse their token (e.g. the Kubernetes Cinder CSI plugin) when the following occurs:

    1. [service] asks nova to attach a volume to a server
    2. ...the user's token expires
    3. [service] asks cinder whether the volume has been attached
    4. [nova] asks cinder to attach the volume

  In step 3 the token is marked as invalid in the cache, and step 4 fails even if the token is accompanied by a valid service token. The key step is that step 3 has to happen before step 4, which is not frequent, hence the race condition. Also, the client will ask for a new user token if it is not authorized in the calls in steps 1 or 3, but if the token is marked as invalid in step 3 then step 4 fails and the volume becomes stuck in "detaching" status.

  [Test Plan]

  It is hard to reproduce this bug as it depends on the timing of packets and on the token expiration. I was able to reproduce it by reducing the token expiration to 60 seconds and running a Go script that constantly attaches and detaches volumes. Even then it may take some time for the bug to occur.

  [Where problems could occur]

  The patch removes code that works as an optimization to save the time needed for rechecking invalid tokens, so it should not add problems beyond the loss of that optimization. The new code will return all tokens from the cache for validation instead of throwing an exception. If the token is actually invalid, it will be detected later on.
2023-08-14 20:21:40 Jorge Merlino description

  Old value: the [Impact]/[Test Plan]/[Where problems could occur] description above.

  New value: the same text, with the following added to the [Test Plan] section:

  The code used is here: https://paste.ubuntu.com/p/CbGNzGxYt9/
  The OpenStack auth information should be set in lines 99-105, and the script should then be called with three parameters: the id of a volume and the ids of two servers. The script attaches and detaches the volume between those two servers.
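The Go script linked in the [Test Plan] above is the authoritative reproducer. As a rough, hypothetical stand-in only (assumed workflow, not the linked script), the sketch below shows the shape of the test: with the Keystone token expiration lowered to about 60 seconds, keep moving one volume between two servers until a call fails or the volume gets stuck. It shells out to the openstack CLI and assumes the usual OS_* authentication variables are already exported.

    #!/usr/bin/env python3
    """Rough stand-in for the reproducer described in the [Test Plan] (the real
    test used a Go script; see the paste link above). Assumes the openstack CLI
    is installed and the OS_* auth variables are exported."""
    import subprocess
    import sys
    import time


    def run(*args):
        """Run an openstack CLI command and fail loudly on a non-zero exit."""
        cmd = ["openstack", *args]
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            sys.exit(f"command failed: {' '.join(cmd)}\n{result.stderr}")


    def main(volume_id, server_a, server_b):
        # Keep bouncing the volume between the two servers; with a short token
        # lifetime the race described above eventually triggers and an attach or
        # detach call fails (or the volume gets stuck in "detaching").
        while True:
            for server in (server_a, server_b):
                run("server", "add", "volume", server, volume_id)
                time.sleep(5)  # a real reproducer would poll the volume status instead
                run("server", "remove", "volume", server, volume_id)
                time.sleep(5)


    if __name__ == "__main__":
        if len(sys.argv) != 4:
            sys.exit("usage: reproduce.py <volume-id> <server-id-1> <server-id-2>")
        main(*sys.argv[1:])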
2023-08-14 21:17:09 Jorge Merlino attachment added lp1987355_jammy.debdiff https://bugs.launchpad.net/keystonemiddleware/+bug/1987355/+attachment/5692252/+files/lp1987355_jammy.debdiff
2023-08-14 21:17:37 Jorge Merlino attachment added lp1987355_zed.debdiff https://bugs.launchpad.net/keystonemiddleware/+bug/1987355/+attachment/5692253/+files/lp1987355_zed.debdiff
2023-08-14 21:18:21 Jorge Merlino cloud-archive/zed: status New -> In Progress
2023-08-14 21:18:31 Jorge Merlino python-keystonemiddleware (Ubuntu Jammy): status New -> In Progress
2023-08-14 21:19:30 Jorge Merlino bug added subscriber Support Engineering Sponsors
2023-08-14 21:21:06 Jorge Merlino description

  Old value: the description as of 2023-08-14 20:21 above.

  New value: the same text, with "The key step is that step 3 has to happen before step 4" changed to "The key is that step 3 has to happen before step 4".
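The [Where problems could occur] section above describes the behavioural change only in prose. The sketch below (a hypothetical cache class, not the actual patch) contrasts the two behaviours it describes: before, a token cached as invalid raised an exception immediately, so a valid X-Service-Token never got a chance to matter; after, the cached entry no longer short-circuits and the token goes through full validation again, where a genuinely invalid token is still rejected.

    # Hypothetical sketch of the cache behaviour described above -- not the actual patch.
    class InvalidToken(Exception):
        pass


    _INVALID = object()  # marker cached for tokens that previously failed validation


    class TokenCache:
        def __init__(self):
            self._entries = {}

        def mark_invalid(self, token):
            self._entries[token] = _INVALID

        def get_before_patch(self, token):
            """Old optimization: a cached-invalid token fails immediately,
            without giving a valid X-Service-Token a chance to matter."""
            value = self._entries.get(token)
            if value is _INVALID:
                raise InvalidToken(token)
            return value

        def get_after_patch(self, token):
            """Described new behaviour: do not raise; treat the entry as a miss
            so the token is re-validated, and an actually invalid token is
            detected at that later step."""
            value = self._entries.get(token)
            if value is _INVALID:
                return None
            return value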
2023-09-05 10:47:32 Edward Hope-Morley summary Error validating X-Service-Token -> [SRU] Error validating X-Service-Token
2023-09-13 19:07:59 Corey Bryant python-keystonemiddleware (Ubuntu Jammy): importance Undecided -> High
2023-09-13 19:07:59 Corey Bryant python-keystonemiddleware (Ubuntu Jammy): status In Progress -> Triaged
2023-09-13 19:08:13 Corey Bryant python-keystonemiddleware (Ubuntu Focal): importance Undecided -> High
2023-09-13 19:08:13 Corey Bryant python-keystonemiddleware (Ubuntu Focal): status New -> Triaged
2023-09-13 19:08:23 Corey Bryant cloud-archive/yoga: importance Undecided -> High
2023-09-13 19:08:23 Corey Bryant cloud-archive/yoga: status New -> Triaged
2023-09-13 19:08:33 Corey Bryant cloud-archive/xena: importance Undecided -> High
2023-09-13 19:08:33 Corey Bryant cloud-archive/xena: status New -> Triaged
2023-09-13 19:08:43 Corey Bryant cloud-archive/wallaby: importance Undecided -> High
2023-09-13 19:08:43 Corey Bryant cloud-archive/wallaby: status New -> Triaged
2023-09-13 19:08:53 Corey Bryant cloud-archive/victoria: importance Undecided -> High
2023-09-13 19:08:53 Corey Bryant cloud-archive/victoria: status New -> Triaged
2023-09-13 19:09:03 Corey Bryant cloud-archive/ussuri: importance Undecided -> High
2023-09-13 19:09:03 Corey Bryant cloud-archive/ussuri: status New -> Triaged
2023-09-13 19:09:17 Corey Bryant cloud-archive/zed: importance Undecided -> High
2023-09-13 19:09:17 Corey Bryant cloud-archive/zed: status In Progress -> Triaged
2023-09-13 19:20:01 Corey Bryant bug added subscriber Ubuntu Stable Release Updates Team
2023-09-14 03:47:14 Ubuntu Archive Robot bug added subscriber Corey Bryant
2023-09-15 12:49:21 Timo Aaltonen python-keystonemiddleware (Ubuntu Jammy): status Triaged -> Fix Committed
2023-09-15 12:49:23 Timo Aaltonen bug added subscriber SRU Verification
2023-09-15 12:49:31 Timo Aaltonen tags in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts -> in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-jammy
2023-09-15 12:51:49 Timo Aaltonen python-keystonemiddleware (Ubuntu Focal): status Triaged -> Fix Committed
2023-09-15 12:51:54 Timo Aaltonen tags in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-jammy -> in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy
2023-09-18 14:03:34 Corey Bryant cloud-archive/zed: status Triaged -> Fix Committed
2023-09-18 14:03:35 Corey Bryant tags in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy -> in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy verification-zed-needed
2023-09-18 14:06:40 Corey Bryant cloud-archive/yoga: status Triaged -> Fix Committed
2023-09-18 14:06:41 Corey Bryant tags in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy verification-zed-needed -> in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy verification-yoga-needed verification-zed-needed
2023-09-18 14:07:55 Corey Bryant cloud-archive/xena: status Triaged -> Fix Committed
2023-09-18 14:07:57 Corey Bryant tags in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy verification-yoga-needed verification-zed-needed -> in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy verification-xena-needed verification-yoga-needed verification-zed-needed
2023-09-18 14:08:20 Corey Bryant cloud-archive/wallaby: status Triaged -> Fix Committed
2023-09-18 14:08:22 Corey Bryant tags in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy verification-xena-needed verification-yoga-needed verification-zed-needed -> in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy verification-wallaby-needed verification-xena-needed verification-yoga-needed verification-zed-needed
2023-09-18 14:08:49 Corey Bryant cloud-archive/victoria: status Triaged -> Fix Committed
2023-09-18 14:08:51 Corey Bryant tags in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy verification-wallaby-needed verification-xena-needed verification-yoga-needed verification-zed-needed -> in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy verification-victoria-needed verification-wallaby-needed verification-xena-needed verification-yoga-needed verification-zed-needed
2023-09-18 14:09:40 Corey Bryant cloud-archive/ussuri: status Triaged -> Fix Committed
2023-09-18 14:09:42 Corey Bryant tags in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy verification-victoria-needed verification-wallaby-needed verification-xena-needed verification-yoga-needed verification-zed-needed -> in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy verification-ussuri-needed verification-victoria-needed verification-wallaby-needed verification-xena-needed verification-yoga-needed verification-zed-needed
2023-09-27 18:18:04 Jorge Merlino tags in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy verification-ussuri-needed verification-victoria-needed verification-wallaby-needed verification-xena-needed verification-yoga-needed verification-zed-needed -> in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy verification-ussuri-needed verification-victoria-needed verification-wallaby-done verification-xena-done verification-yoga-done verification-zed-done
2023-09-27 21:18:43 Jorge Merlino tags in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-needed verification-needed-focal verification-needed-jammy verification-ussuri-needed verification-victoria-needed verification-wallaby-done verification-xena-done verification-yoga-done verification-zed-done -> in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-done-focal verification-done-jammy verification-needed verification-ussuri-done verification-victoria-done verification-wallaby-done verification-xena-done verification-yoga-done verification-zed-done
2023-09-27 21:20:52 Jorge Merlino tags in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-done-focal verification-done-jammy verification-needed verification-ussuri-done verification-victoria-done verification-wallaby-done verification-xena-done verification-yoga-done verification-zed-done -> in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed sts verification-done-focal verification-done-jammy verification-ussuri-done verification-victoria-done verification-wallaby-done verification-xena-done verification-yoga-done verification-zed-done
2023-09-28 13:23:12 Launchpad Janitor python-keystonemiddleware (Ubuntu Jammy): status Fix Committed -> Fix Released
2023-09-28 13:23:20 Andreas Hasenack removed subscriber Ubuntu Stable Release Updates Team
2023-09-28 13:23:36 Launchpad Janitor python-keystonemiddleware (Ubuntu Focal): status Fix Committed -> Fix Released
2023-09-28 17:47:14 Corey Bryant cloud-archive/yoga: status Fix Committed -> Fix Released
2023-09-28 17:53:14 Corey Bryant cloud-archive/zed: status Fix Committed -> Fix Released
2023-09-28 17:54:30 Corey Bryant cloud-archive/xena: status Fix Committed -> Fix Released
2023-09-28 17:56:21 Corey Bryant cloud-archive/wallaby: status Fix Committed -> Fix Released
2023-09-28 17:57:36 Corey Bryant cloud-archive/victoria: status Fix Committed -> Fix Released
2023-09-29 11:50:11 Corey Bryant cloud-archive/ussuri: status Fix Committed -> Fix Released