2024-05-29 03:00:12 |
Hoyoun Lee |
bug |
|
|
added bug |
2024-05-29 04:02:37 |
Hoyoun Lee |
description |
By mistake, I sent a wrong request with a duplicated IP/port combination through the Batch Update Members API.
https://docs.openstack.org/api-ref/load-balancer/v2/#batch-update-members
For example:
{
"members": [
{
"subnet_id": "xxxxxxx",
"address": "192.0.2.16",
"protocol_port": 80
}, {
"subnet_id": "xxxxxxx",
"address": "192.0.2.16",
"protocol_port": 80
}
]
}
After the request, the status of the Loadbalancer does not change from PENDING_UPDATE.
Checking the source code, there is no logic to check for duplicates.
In the controller logic (member.py), members are classified into new_members/updated_members/deleted_members, but the new_members data is being passed as-is with duplicates, so this is suspected to be the cause of the problem. |
By mistake, I sent a wrong request with a duplicated IP/port combination through the Batch Update Members API (ver 2023.1).
https://docs.openstack.org/api-ref/load-balancer/v2/#batch-update-members
For example:
{
"members": [
{
"subnet_id": "xxxxxxx",
"address": "192.0.2.16",
"protocol_port": 80
}, {
"subnet_id": "xxxxxxx",
"address": "192.0.2.16",
"protocol_port": 80
}
]
}
After the request, the status of the Loadbalancer does not change from PENDING_UPDATE.
Checking the source code, there is no logic to check for duplicates.
In the controller logic (member.py), members are classified into new_members/updated_members/deleted_members, but the new_members data is being passed as-is with duplicates, so this is suspected to be the cause of the problem. |
|
2024-05-29 04:03:14 |
Hoyoun Lee |
summary |
Loadbalancers is stuck with PENDING_UPDATE state on member update API |
Loadbalancer is stuck with PENDING_UPDATE state on member update API |
|
2024-05-29 04:48:42 |
Hoyoun Lee |
description |
By mistake, I sent a wrong request with a duplicated IP/port combination through the Batch Update Members API (ver 2023.1).
https://docs.openstack.org/api-ref/load-balancer/v2/#batch-update-members
For example:
{
"members": [
{
"subnet_id": "xxxxxxx",
"address": "192.0.2.16",
"protocol_port": 80
}, {
"subnet_id": "xxxxxxx",
"address": "192.0.2.16",
"protocol_port": 80
}
]
}
After the request, the status of the Loadbalancer does not change from PENDING_UPDATE.
Checking the source code, there is no logic to check for duplicates.
In the controller logic (member.py), members are classified into new_members/updated_members/deleted_members, but the new_members data is being passed as-is with duplicates, so this is suspected to be the cause of the problem. |
By mistake, I sent a wrong request with a duplicated IP/port combination through the Batch Update Members API (ver 2023.1).
https://docs.openstack.org/api-ref/load-balancer/v2/#batch-update-members
For example:
A 192.0.2.16:80 member already exists, and the request data is as follows:
{
"members": [
{
"subnet_id": "xxxxxxx",
"address": "192.0.2.16",
"protocol_port": 80
}, {
"subnet_id": "xxxxxxx",
"address": "192.0.2.16",
"protocol_port": 80
}
]
}
After the request, the status of the Loadbalancer does not change from PENDING_UPDATE.
Checking the source code, there is no logic to check for duplicates.
In the controller logic (member.py), members are classified into new_members/updated_members/deleted_members, but the updated_members data is being passed as-is with duplicates, so this is suspected to be the cause of the problem.
## log: 33fe25ab-5477-4787-a8e1-f657376b0ead is duplicated
May 29 04:14:32 ubuntu octavia-worker[123317]: INFO octavia.controller.queue.v2.endpoints [-] Batch updating members: old='[]', new='[]', updated='['825dbebc-da79-4f88-bf48-0e3e63a09d90', '33fe25ab-5477-4787-a8e1-f657376b0ead', '33fe25ab-5477-4787-a8e1-f657376b0ead']'...
May 29 04:14:32 ubuntu octavia-worker[123317]: ERROR oslo_messaging.rpc.server [-] Exception during message handling: taskflow.exceptions.Duplicate: Atoms with duplicate names found: ['octavia-mark-member-active-indb-33fe25ab-5477-4787-a8e1-f657376b0ead']
FYI, there is validation logic for new_members. |
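The missing check described above could be sketched as a simple dedup pass over the member list, keyed on the (address, protocol_port) combination. This is a hypothetical illustration, not the actual Octavia fix; `dedup_members` and its behavior are assumptions for the sketch.

```python
# Hypothetical sketch of the duplicate check missing from the batch member
# update path -- NOT the actual Octavia fix, just an illustration of
# deduplicating members by their (address, protocol_port) combination.
def dedup_members(members):
    seen = set()
    unique = []
    for member in members:
        key = (member["address"], member["protocol_port"])
        if key not in seen:  # keep only the first occurrence of each pair
            seen.add(key)
            unique.append(member)
    return unique

members = [
    {"subnet_id": "xxxxxxx", "address": "192.0.2.16", "protocol_port": 80},
    {"subnet_id": "xxxxxxx", "address": "192.0.2.16", "protocol_port": 80},
]
print(dedup_members(members))  # the duplicate entry is dropped
```

With a pass like this applied to updated_members before the flow is built, taskflow would not see two atoms with the same member UUID.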
|
2024-05-29 06:15:39 |
Gregory Thiemonge |
octavia: importance |
Undecided |
Medium |
|
2024-05-29 06:15:39 |
Gregory Thiemonge |
octavia: status |
New |
In Progress |
|
2024-05-29 06:15:39 |
Gregory Thiemonge |
octavia: assignee |
|
Gregory Thiemonge (gthiemonge) |
|
2024-07-02 10:14:59 |
Edward Hope-Morley |
bug task added |
|
cloud-archive |
|
2024-07-02 10:15:59 |
Edward Hope-Morley |
nominated for series |
|
cloud-archive/yoga |
|
2024-07-02 10:15:59 |
Edward Hope-Morley |
bug task added |
|
cloud-archive/yoga |
|
2024-07-02 10:15:59 |
Edward Hope-Morley |
nominated for series |
|
cloud-archive/caracal |
|
2024-07-02 10:15:59 |
Edward Hope-Morley |
bug task added |
|
cloud-archive/caracal |
|
2024-07-02 10:15:59 |
Edward Hope-Morley |
nominated for series |
|
cloud-archive/zed |
|
2024-07-02 10:15:59 |
Edward Hope-Morley |
bug task added |
|
cloud-archive/zed |
|
2024-07-02 10:15:59 |
Edward Hope-Morley |
nominated for series |
|
cloud-archive/bobcat |
|
2024-07-02 10:15:59 |
Edward Hope-Morley |
bug task added |
|
cloud-archive/bobcat |
|
2024-07-02 10:15:59 |
Edward Hope-Morley |
nominated for series |
|
cloud-archive/antelope |
|
2024-07-02 10:15:59 |
Edward Hope-Morley |
bug task added |
|
cloud-archive/antelope |
|
2024-07-02 10:16:14 |
Edward Hope-Morley |
bug task added |
|
octavia (Ubuntu) |
|
2024-07-02 10:16:28 |
Edward Hope-Morley |
nominated for series |
|
Ubuntu Noble |
|
2024-07-02 10:16:28 |
Edward Hope-Morley |
bug task added |
|
octavia (Ubuntu Noble) |
|
2024-07-02 10:16:28 |
Edward Hope-Morley |
nominated for series |
|
Ubuntu Oracular |
|
2024-07-02 10:16:28 |
Edward Hope-Morley |
bug task added |
|
octavia (Ubuntu Oracular) |
|
2024-07-02 10:16:28 |
Edward Hope-Morley |
nominated for series |
|
Ubuntu Mantic |
|
2024-07-02 10:16:28 |
Edward Hope-Morley |
bug task added |
|
octavia (Ubuntu Mantic) |
|
2024-07-02 10:16:28 |
Edward Hope-Morley |
nominated for series |
|
Ubuntu Jammy |
|
2024-07-02 10:16:28 |
Edward Hope-Morley |
bug task added |
|
octavia (Ubuntu Jammy) |
|
2024-07-05 05:41:26 |
Hua Zhang |
description |
By mistake, I sent a wrong request with a duplicated IP/port combination through the Batch Update Members API (ver 2023.1).
https://docs.openstack.org/api-ref/load-balancer/v2/#batch-update-members
For example:
A 192.0.2.16:80 member already exists, and the request data is as follows:
{
"members": [
{
"subnet_id": "xxxxxxx",
"address": "192.0.2.16",
"protocol_port": 80
}, {
"subnet_id": "xxxxxxx",
"address": "192.0.2.16",
"protocol_port": 80
}
]
}
After the request, the status of the Loadbalancer does not change from PENDING_UPDATE.
Checking the source code, there is no logic to check for duplicates.
In the controller logic (member.py), members are classified into new_members/updated_members/deleted_members, but the updated_members data is being passed as-is with duplicates, so this is suspected to be the cause of the problem.
## log: 33fe25ab-5477-4787-a8e1-f657376b0ead is duplicated
May 29 04:14:32 ubuntu octavia-worker[123317]: INFO octavia.controller.queue.v2.endpoints [-] Batch updating members: old='[]', new='[]', updated='['825dbebc-da79-4f88-bf48-0e3e63a09d90', '33fe25ab-5477-4787-a8e1-f657376b0ead', '33fe25ab-5477-4787-a8e1-f657376b0ead']'...
May 29 04:14:32 ubuntu octavia-worker[123317]: ERROR oslo_messaging.rpc.server [-] Exception during message handling: taskflow.exceptions.Duplicate: Atoms with duplicate names found: ['octavia-mark-member-active-indb-33fe25ab-5477-4787-a8e1-f657376b0ead']
FYI, there is validation logic for new_members. |
[Impact]
Loadbalancer is stuck with PENDING_UPDATE state on batch member update API.
[Test Case]
Please refer to [Test steps] section below.
[Regression Potential]
The fix is already in the upstream main, stable/2024.1, stable/2023.2, and stable/2023.1 branches, so it is a clean backport and should be helpful for deployments using Octavia.
I also tested this fix; it works well - https://paste.ubuntu.com/p/wPy7pB3SR6/ and https://paste.ubuntu.com/p/zpPDScQCtK/
I also tested the debdiff for this fix; it works well - https://paste.ubuntu.com/p/nS6c3QYRGn/
[Others]
Original Bug Description Below
===========
By mistake, I sent a wrong request with a duplicated IP/port combination through the Batch Update Members API (ver 2023.1).
https://docs.openstack.org/api-ref/load-balancer/v2/#batch-update-members
For example:
A 192.0.2.16:80 member already exists, and the request data is as follows:
{
"members": [
{
"subnet_id": "xxxxxxx",
"address": "192.0.2.16",
"protocol_port": 80
}, {
"subnet_id": "xxxxxxx",
"address": "192.0.2.16",
"protocol_port": 80
}
]
}
After the request, the status of the Loadbalancer does not change from PENDING_UPDATE.
Checking the source code, there is no logic to check for duplicates.
In the controller logic (member.py), members are classified into new_members/updated_members/deleted_members, but the updated_members data is being passed as-is with duplicates, so this is suspected to be the cause of the problem.
## log: 33fe25ab-5477-4787-a8e1-f657376b0ead is duplicated
May 29 04:14:32 ubuntu octavia-worker[123317]: INFO octavia.controller.queue.v2.endpoints [-] Batch updating members: old='[]', new='[]', updated='['825dbebc-da79-4f88-bf48-0e3e63a09d90', '33fe25ab-5477-4787-a8e1-f657376b0ead', '33fe25ab-5477-4787-a8e1-f657376b0ead']'...
May 29 04:14:32 ubuntu octavia-worker[123317]: ERROR oslo_messaging.rpc.server [-] Exception during message handling: taskflow.exceptions.Duplicate: Atoms with duplicate names found: ['octavia-mark-member-active-indb-33fe25ab-5477-4787-a8e1-f657376b0ead']
FYI, there is validation logic for new_members.
[Test steps]
1, set up an OpenStack env with an Octavia deployment
2, create a test lb
3, add a member to the lb pool
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.226 --protocol-port 80 lb1-pool
$ openstack loadbalancer member list lb1-pool |grep ACTIVE
| b36bb21e-8eed-40bc-a1cb-e69da070c0b9 | | 4f1016d73ae245fe8c5c6a637930f3d2 | ACTIVE | 192.168.21.226 | 80 | ONLINE | 1 |
4, run test.py (https://paste.ubuntu.com/p/38vPW5R5S8/) to call the batch member update API to add the same member again (e.g. 192.168.21.226 above)
5, the problem is reproduced: the lb is stuck with the PENDING_UPDATE state.
$ openstack loadbalancer member list lb1-pool |grep 192
| b36bb21e-8eed-40bc-a1cb-e69da070c0b9 | | 4f1016d73ae245fe8c5c6a637930f3d2 | PENDING_UPDATE | 192.168.21.226 | 80 | ONLINE | 40 |
6, this is the error log I saw - https://paste.ubuntu.com/p/K5s7knNmWw/
[Some Analyses]
You can see some analysis in the bug I created earlier - https://bugs.launchpad.net/octavia/+bug/2070348 |
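The test.py referenced in the test steps sits behind a paste link; a minimal sketch of such a reproducer, assuming placeholder values for the Octavia endpoint, Keystone token, and pool UUID (none of which come from the bug report), could build the duplicate batch member update request like this:

```python
import json
import urllib.request

# All of these are placeholders -- substitute values from your deployment.
OCTAVIA_ENDPOINT = "http://controller:9876"
AUTH_TOKEN = "REPLACE_WITH_KEYSTONE_TOKEN"
POOL_ID = "REPLACE_WITH_POOL_UUID"

# The same (address, protocol_port) combination twice, matching an existing
# member, is what triggered the PENDING_UPDATE hang described above.
body = {
    "members": [
        {"subnet_id": "xxxxxxx", "address": "192.168.21.226", "protocol_port": 80},
        {"subnet_id": "xxxxxxx", "address": "192.168.21.226", "protocol_port": 80},
    ]
}

# Per the API reference linked above, batch member update is a PUT on the
# pool's members collection.
request = urllib.request.Request(
    f"{OCTAVIA_ENDPOINT}/v2/lbaas/pools/{POOL_ID}/members",
    data=json.dumps(body).encode("utf-8"),
    headers={"X-Auth-Token": AUTH_TOKEN, "Content-Type": "application/json"},
    method="PUT",
)
# urllib.request.urlopen(request)  # uncomment to send against a live deployment
```

The send is commented out so the sketch is safe to read as-is; against an unpatched deployment, sending it should reproduce the stuck PENDING_UPDATE state.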
|
2024-07-05 05:41:38 |
Hua Zhang |
summary |
Loadbalancer is stuck with PENDING_UPDATE state on member update API |
[SRU] Loadbalancer is stuck with PENDING_UPDATE state on member update API |
|
2024-07-05 05:43:46 |
Hua Zhang |
bug |
|
|
added subscriber Support Engineering Sponsors |
2024-07-05 05:44:05 |
Hua Zhang |
tags |
|
sts |
|
2024-07-05 05:45:04 |
Hua Zhang |
attachment added |
|
noble.debdiff https://bugs.launchpad.net/octavia/+bug/2067441/+attachment/5794826/+files/noble.debdiff |
|
2024-07-05 05:45:52 |
Hua Zhang |
attachment added |
|
mantic.debdiff https://bugs.launchpad.net/octavia/+bug/2067441/+attachment/5794827/+files/mantic.debdiff |
|
2024-07-05 05:46:41 |
Hua Zhang |
attachment added |
|
jammy.debdiff https://bugs.launchpad.net/octavia/+bug/2067441/+attachment/5794828/+files/jammy.debdiff |
|
2024-07-05 05:47:13 |
Hua Zhang |
attachment added |
|
antelope.debdiff https://bugs.launchpad.net/octavia/+bug/2067441/+attachment/5794829/+files/antelope.debdiff |
|
2024-07-05 08:31:35 |
Ubuntu Foundations Team Bug Bot |
tags |
sts |
patch sts |
|
2024-07-05 08:31:43 |
Ubuntu Foundations Team Bug Bot |
bug |
|
|
added subscriber Ubuntu Sponsors |