2016-02-23 11:33:24 |
Cindia-blue |
bug |
|
|
added bug |
2016-02-23 17:16:47 |
Dariusz Smigiel |
neutron: status |
New |
Incomplete |
|
2016-02-24 09:32:50 |
Cindia-blue |
description |
Create a loadbalancer pool with a healthmonitor and add one member:
Then nova stop the member VM; the operating_status of the member stays "ONLINE", as shown in the curl response below.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just like with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on Ubuntu 14.04. project_name is 'demo', user_name is 'demo'. I am using the private-subnet for the loadbalancer, the VIP, and the member VM.
I create a VM from the cirros-0.3.4-x86_64-uec image and add it as a member of a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'OFFLINE'.
Below is the detailed curl response:
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
|
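For context, the status tree quoted in these descriptions comes from the LBaaS v2 statuses call. A minimal sketch of that request, assuming a devstack Neutron endpoint on localhost:9696 and a Keystone token already stored in $TOKEN (the loadbalancer ID is the one from the response above):

    # Query the full status tree of the load balancer; the operating_status of each
    # backend is reported in the nested "members" list.
    curl -s -H "X-Auth-Token: $TOKEN" \
      http://localhost:9696/v2.0/lbaas/loadbalancers/ef45be96-15e0-42d9-af34-34608dafdb6c/statuses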
2016-02-24 09:35:11 |
Cindia-blue |
description |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just like with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on Ubuntu 14.04. project_name is 'demo', user_name is 'demo'. I am using the private-subnet for the loadbalancer, the VIP, and the member VM.
I create a VM from the cirros-0.3.4-x86_64-uec image and add it as a member of a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'OFFLINE'.
Below is the detailed curl response:
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just like with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on Ubuntu 14.04. project_name is 'demo', user_name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM.
I create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'OFFLINE'. Below is the curl response; there is no difference before and after the pool member VM goes into SHUTOFF.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
|
2016-02-24 09:37:06 |
Cindia-blue |
description |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just like with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on Ubuntu 14.04. project_name is 'demo', user_name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM.
I create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'OFFLINE'. Below is the curl response; there is no difference before and after the pool member VM goes into SHUTOFF.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just like with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on Ubuntu 14.04. project_name is 'demo', user_name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM.
I create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'ERROR'. Below is the curl response; there is no difference before and after the pool member VM goes into SHUTOFF.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
|
2016-02-24 09:39:11 |
Cindia-blue |
description |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just like with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on Ubuntu 14.04. project_name is 'demo', user_name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM.
I create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'ERROR'. Below is the curl response; there is no difference before and after the pool member VM goes into SHUTOFF.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just like with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on a single node of Ubuntu 14.04 with master-branch code. project_name is 'demo', user_name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM.
I create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'ERROR'. Below is the curl response; there is no difference before and after the pool member VM goes into SHUTOFF.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
|
2016-02-24 09:40:19 |
Cindia-blue |
description |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just like with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on a single node of Ubuntu 14.04 with master-branch code. project_name is 'demo', user_name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM.
I create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'ERROR'. Below is the curl response; there is no difference before and after the pool member VM goes into SHUTOFF.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just like with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on a single node of Ubuntu 14.04 with master-branch code, MySQL, and RabbitMQ. project_name is 'demo', user_name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM.
I create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'ERROR'. Below is the curl response; there is no difference before and after the pool member VM goes into SHUTOFF.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
|
2016-02-24 09:40:42 |
Cindia-blue |
description |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just like with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on a single node of Ubuntu 14.04 with master-branch code, MySQL, and RabbitMQ. project_name is 'demo', user_name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM.
I create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'ERROR'. Below is the curl response; there is no difference before and after the pool member VM goes into SHUTOFF.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just like with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on a single node of Ubuntu 14.04 with master-branch code, MySQL, and RabbitMQ. project_name is 'demo', user_name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM.
I create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'ERROR'. Below is the curl response; there is no difference before and after the pool member VM goes into SHUTOFF.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
|
2016-02-24 10:07:00 |
Cindia-blue |
description |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just like with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on a single node of Ubuntu 14.04 with master-branch code, MySQL, and RabbitMQ. project_name is 'demo', user_name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM.
I create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'ERROR'. Below is the curl response; there is no difference before and after the pool member VM goes into SHUTOFF.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just like with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on a single node of Ubuntu 14.04 with master-branch code, MySQL, and RabbitMQ. project_name is 'demo', user_name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM, with the octavia provider.
I create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'ERROR'. Below is the curl response; there is no difference before and after the pool member VM goes into SHUTOFF.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
|
2016-02-24 10:21:07 |
Cindia-blue |
description |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just like with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on a single node of Ubuntu 14.04 with master-branch code, MySQL, and RabbitMQ. project_name is 'demo', user_name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM, with the octavia provider.
I create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'ERROR'. Below is the curl response; there is no difference before and after the pool member VM goes into SHUTOFF.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just as it behaves with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on a single node of Ubuntu 14.04 with master-branch code, MySQL, and RabbitMQ. The tenant name is 'demo' and the user name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM, with the octavia provider.
I create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'ERROR'. Below is the curl response; there is no difference before and after the pool member VM goes into SHUTOFF.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
|
2016-02-24 14:13:10 |
Cindia-blue |
description |
I expect the LBaaS v2 healthmonitor to update the status of a "bad" member, just as it behaves with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
My devstack runs on a single node of Ubuntu 14.04 with master-branch code, MySQL, and RabbitMQ. The tenant name is 'demo' and the user name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM, with the octavia provider.
I create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then I curl the statuses of the loadbalancer and find the member status is ONLINE. Then I nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'ERROR'. Below is the curl response; there is no difference before and after the pool member VM goes into SHUTOFF.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
Expectation:
The LBaaS v2 healthmonitor should update the status of a "bad" member, just as it behaves with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.
ENV:
My devstack runs on a single node of Ubuntu 14.04 with master-branch code, MySQL, and RabbitMQ. The tenant name is 'demo' and the user name is 'demo'. I am using the private-subnet for the loadbalancer and the member VM, with the octavia provider.
Steps to reproduce:
Create a VM from the cirros-0.3.4-x86_64-uec image and create one member accordingly in a loadbalancer pool that has a healthmonitor. Then curl the statuses of the loadbalancer and find the member status is ONLINE. Then nova stop the member's VM and curl again and again; the member's operating_status stays 'ONLINE' instead of 'ERROR'.
Below is the curl response. There is no difference before and after the pool member VM goes into SHUTOFF, since no status change ever happens.
{"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools": [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor": {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name": "", "provisioning_status": "ACTIVE"}, "members": [{"name": "", "provisioning_status": "ACTIVE", "address": "10.0.0.13", "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0", "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id": "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}}} |
|
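For reference, a condensed sketch of the reproduction steps from the final description above, assuming the neutron-lbaas v2 and nova CLIs on a devstack of that era; the resource names, flavor, and network ID are illustrative placeholders, and the PING healthmonitor mirrors the one in the quoted response:

    # Boot the backend VM that will become the pool member (placeholder name/flavor/net-id)
    nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=<private-net-id> member-vm
    # Build the v2 load balancer stack: loadbalancer, listener, pool, member, PING healthmonitor
    neutron lbaas-loadbalancer-create --name lb1 private-subnet
    neutron lbaas-listener-create --name listener1 --loadbalancer lb1 --protocol HTTP --protocol-port 80
    neutron lbaas-pool-create --name pool1 --listener listener1 --protocol HTTP --lb-algorithm ROUND_ROBIN
    neutron lbaas-member-create --subnet private-subnet --address 10.0.0.13 --protocol-port 80 pool1
    neutron lbaas-healthmonitor-create --type PING --delay 5 --timeout 3 --max-retries 3 --pool pool1
    # Stop the member VM, then repeatedly poll the statuses call and watch the member's operating_status
    nova stop member-vm
    curl -s -H "X-Auth-Token: $TOKEN" http://localhost:9696/v2.0/lbaas/loadbalancers/<loadbalancer-id>/statuses

The report is that the final curl output never changes after the nova stop: the member's operating_status remains ONLINE rather than dropping to OFFLINE or ERROR.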
2016-02-24 23:57:52 |
Cindia-blue |
bug task added |
|
senlin |
|
2016-02-25 01:50:41 |
Yang Yu |
neutron: assignee |
|
Yang Yu (yuyangbj) |
|
2016-02-25 09:51:22 |
Elena Ezhova |
neutron: status |
Incomplete |
Confirmed |
|
2016-02-25 09:51:28 |
Elena Ezhova |
tags |
|
lbaas |
|
2016-05-26 05:40:32 |
Cindia-blue |
neutron: assignee |
Yang Yu (yuyangbj) |
Cindia-blue (miaoxinhuili) |
|
2016-06-05 02:19:09 |
OpenStack Infra |
neutron: status |
Confirmed |
In Progress |
|
2016-06-28 11:08:01 |
OpenStack Infra |
neutron: assignee |
Cindia-blue (miaoxinhuili) |
KaiLi (damonl1) |
|
2016-07-05 18:01:32 |
Armando Migliaccio |
neutron: status |
In Progress |
Incomplete |
|
2016-07-05 18:01:35 |
Armando Migliaccio |
neutron: assignee |
KaiLi (damonl1) |
|
|
2016-07-05 18:01:49 |
Armando Migliaccio |
tags |
lbaas |
lbaas low-hanging-fruit |
|
2016-07-06 04:41:41 |
Cindia-blue |
neutron: assignee |
|
Cindia-blue (miaoxinhuili) |
|
2016-07-08 01:48:56 |
Cindia-blue |
neutron: assignee |
Cindia-blue (miaoxinhuili) |
|
|
2016-07-13 06:35:13 |
Cindia-blue |
neutron: assignee |
|
Cindia-blue (miaoxinhuili) |
|
2016-07-15 07:43:21 |
Damon Li |
neutron: status |
Incomplete |
Opinion |
|
2016-07-15 07:43:27 |
Damon Li |
neutron: status |
Opinion |
Incomplete |
|
2016-07-20 16:25:16 |
OpenStack Infra |
neutron: status |
Incomplete |
In Progress |
|
2016-07-22 03:23:27 |
OpenStack Infra |
neutron: assignee |
Cindia-blue (miaoxinhuili) |
KaiLi (damonl1) |
|
2016-07-27 07:43:43 |
Cindia-blue |
neutron: assignee |
KaiLi (damonl1) |
Cindia-blue (miaoxinhuili) |
|
2016-10-24 18:34:44 |
Armando Migliaccio |
neutron: status |
In Progress |
Won't Fix |
|
2017-01-04 21:55:37 |
Armando Migliaccio |
bug task added |
|
octavia |
|
2017-01-11 15:04:44 |
Andrea Mercanti |
bug |
|
|
added subscriber Andrea Mercanti |
2017-04-20 18:24:02 |
Michael Johnson |
octavia: status |
New |
Incomplete |
|
2017-07-06 09:30:48 |
OpenStack Infra |
octavia: status |
Incomplete |
In Progress |
|
2017-07-06 09:30:48 |
OpenStack Infra |
octavia: assignee |
|
Gary Kotton (garyk) |
|
2017-08-16 08:44:56 |
OpenStack Infra |
octavia: assignee |
Gary Kotton (garyk) |
Nir Magnezi (nmagnezi) |
|
2017-11-18 12:44:43 |
OpenStack Infra |
octavia: assignee |
Nir Magnezi (nmagnezi) |
Carlos Goncalves (cgoncalves) |
|
2017-12-21 00:14:38 |
OpenStack Infra |
tags |
lbaas low-hanging-fruit |
in-stable-pike lbaas low-hanging-fruit |
|
2018-06-18 03:52:29 |
Nobuto Murata |
bug |
|
|
added subscriber Nobuto Murata |
2019-08-12 03:21:47 |
XiaoRuiguo |
bug |
|
|
added subscriber XiaoRuiguo |
2023-03-31 08:14:40 |
Gregory Thiemonge |
octavia: status |
In Progress |
Invalid |
|
2023-03-31 08:14:40 |
Gregory Thiemonge |
octavia: assignee |
Carlos Goncalves (cgoncalves) |
|
|
2023-03-31 08:14:45 |
Gregory Thiemonge |
tags |
in-stable-pike lbaas low-hanging-fruit |
auto-abandon in-stable-pike lbaas low-hanging-fruit |
|