Statuses not shown for non-"loadbalancer" LBaaS objects on CLI
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
octavia | In Progress | Medium | Unassigned |
Bug Description
There is no indication on the CLI when the creation of an LBaaSv2 object (other than a "loadbalancer") has failed...
stack@openstack:~$ neutron lbaas-listener-create ...
Created a new listener:
+---------------------------+-------------------+
| Field                     | Value             |
+---------------------------+-------------------+
| admin_state_up            | True              |
| connection_limit          | -1                |
| default_pool_id           |                   |
| default_tls_container_ref |                   |
| description               |                   |
| id                        | 5ca664d6-         |
| loadbalancers             | {"id": "549982d9- |
| name                      | MyListener1       |
| protocol                  | HTTP              |
| protocol_port             | 80                |
| sni_container_refs        |                   |
| tenant_id                 | 22000d943c5341c   |
+---------------------------+-------------------+
There is no indication of any issue here, and lbaas-listener-show produces the same output. However, in reality, the listener is in an error state...
mysql> select * from lbaas_listeners;
+-----------+----+------+-------------+----------+---------------+------------------+-----------------+-----------------+----------------+---------------------+------------------+--------------------------+
| tenant_id | id | name | description | protocol | protocol_port | connection_limit | loadbalancer_id | default_pool_id | admin_state_up | provisioning_status | operating_status | default_tls_container_id |
+-----------+----+------+-------------+----------+---------------+------------------+-----------------+-----------------+----------------+---------------------+------------------+--------------------------+
| 22000d943c5341c
+-----------+----+------+-------------+----------+---------------+------------------+-----------------+-----------------+----------------+---------------------+------------------+--------------------------+
1 row in set (0.00 sec)
How is a CLI user who doesn't have access to the Neutron DB supposed to know an error has occurred (other than "it doesn't work", obviously)?
Carl Baldwin (carl-baldwin) wrote : | #1 |
tags: | removed: cli errors lbaasv2 statuses |
Changed in python-neutronclient: | |
assignee: | nobody → Brandon Logan (brandon-logan) |
status: | New → Incomplete |
Elena Ezhova (eezhova) wrote : | #2 |
Operating and provisioning statuses are not included in RESOURCE_ATTRIBUTE_MAP [1], so the neutron client does not display them [2].
[1] https:/
[2] https:/
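(For illustration only: a hedged sketch of what lbaas-listener-show output could look like once those two attributes are exposed through the API; the status values shown are illustrative and not taken from the proposed patch.)
stack@openstack:~$ neutron lbaas-listener-show MyListener1
+---------------------+---------+
| Field               | Value   |
+---------------------+---------+
| ...                 | ...     |
| operating_status    | OFFLINE |
| provisioning_status | ERROR   |
+---------------------+---------+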
Changed in python-neutronclient: | |
status: | Incomplete → Confirmed |
affects: | python-neutronclient → neutron |
Fix proposed to branch: master
Review: https:/
Changed in neutron: | |
assignee: | Brandon Logan (brandon-logan) → Elena Ezhova (eezhova) |
status: | Confirmed → In Progress |
Brandon Logan (brandon-logan) wrote : | #4 |
There is a CLI call to get the statuses of the whole LB tree in one call:
neutron lbaas-loadbalancer-status
http://
The documentation for it doesn't actually say it returns the status body, but it does. The reason we couldn't just return the statuses on each object is that there were plans to allow sharing listeners, pools, and members across loadbalancers. The status of a listener depends on which loadbalancer it is on, so it doesn't make sense to just show it on the listener.
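(For illustration, a hedged sketch of that call and the general shape of the status tree it returns; the ID placeholders and the exact output formatting are illustrative.)
stack@openstack:~$ neutron lbaas-loadbalancer-status <LOADBALANCER_ID>
{
    "loadbalancer": {
        "id": "<LOADBALANCER_ID>",
        "provisioning_status": "ACTIVE",
        "operating_status": "ONLINE",
        "listeners": [
            {
                "id": "<LISTENER_ID>",
                "provisioning_status": "ERROR",
                "operating_status": "OFFLINE",
                "pools": []
            }
        ]
    }
}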
nlbaas does not have shared listeners in yet, and I'm not sure it ever will. It does, however, have shared pools support, but pools can only be shared within the context of a loadbalancer. This means multiple listeners within a loadbalancer can point to the same pool. The pool statuses may now depend on which listeners are using it, so the statuses should not be shown on the pool, IMHO.
Matthew Geldert (mgeldert-v) wrote : | #5 |
Brandon, what you say about the operating_status makes sense, but I disagree that the same reasoning applies to the provisioning_status.
When a user creates an object, be it a listener or a pool, the LBaaS driver from the chosen provider is going to perform an action, such as making a REST call to a physical/virtual loadbalancer, which creates a configuration object on that device. If that operation fails then the user should be informed immediately in the response from the "create" call (and it should be visible in subsequent "show" calls). Whether the object was created or not (provisioning_status) is a property of the object itself.
Once the object is implemented on the loadbalancing device then the provisioning_status can be set to SUCCESS and forgotten; then the operating_status can reflect the health of dependencies if required.
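(For illustration, the behaviour described above might look roughly like this on the CLI; a hedged sketch, with the status values illustrative.)
stack@openstack:~$ neutron lbaas-listener-create --name MyListener1 ...
Created a new listener:
...
| provisioning_status | PENDING_CREATE |
...
stack@openstack:~$ neutron lbaas-listener-show MyListener1
...
| provisioning_status | ERROR          |
...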
Brandon Logan (brandon-logan) wrote : | #6 |
Provisioning may depend on what the pool has been linked to, though; it depends on the driver. For example, say a driver communicates with an appliance that creates each listener on its own VM. Now, if a user creates a pool on one listener and then associates that same pool with the other listener, you basically have two different provisioning states for the same pool.
Now it kind of sounds like you're saying that we should show provisioning status only on the create pool call and continue showing it until it's successful? That seems even more confusing than what is currently there, and I wholly admit the current method is confusing. I'm all for simplifying the workflow, but also want to keep in mind the problems that come with sharing entities. It's a tough problem.
As for the listener status, I don't think there are any plans to share listeners across load balancers, and as such a provisioning status and even an operating status on that might be well received, but I'd rather get other community members' thoughts on that.
Matthew Geldert (mgeldert-v) wrote : | #7 |
In your example, yes, there would be two provisioning states for the pool. But is that example likely to occur in the real world? The bind IP is specified in the loadbalancer object, and the bind port is specified in the listener. So if you have separate VMs for each listener then each VM has to raise the same IP address and, typically, you would hit duplicate address issues. I'm not saying it's an insurmountable problem, but the simplest solution (and the one that fits best with all the loadbalancer/ADC products I've ever seen) would have all the listeners for a given loadbalancer (i.e. ports for a given IP) on the same VM (or cluster of VMs).
Carl Baldwin (carl-baldwin) wrote : | #8 |
@Brandon, as the bug deputy for this week, I'm going to leave triaging this to you.
Brandon Logan (brandon-logan) wrote : | #9 |
So we discussed this in yesterday's meeting:
http://
TL;DR: a possible solution was to show the scoped statuses in the body of the pool on a GET, so there'd be multiple statuses scoped by listener_id. Sounds like it's closer to what you want, but not exactly. I think you would prefer it to be just one set of statuses. I prefer that too, but I'm not sure we can get there. Then again, maybe we can just do it and punt by saying it's a driver issue if they have different statuses per scope.
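(For illustration, a hedged sketch of how such scoped statuses might surface on a pool show; the pool name, IDs and status values are hypothetical, and this is not an agreed design.)
stack@openstack:~$ neutron lbaas-pool-show MyPool1
...
| statuses | {"listener_id": "<LISTENER_1_ID>", "provisioning_status": "ACTIVE", "operating_status": "ONLINE"} |
|          | {"listener_id": "<LISTENER_2_ID>", "provisioning_status": "ERROR", "operating_status": "OFFLINE"}  |
...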
Matthew Geldert (mgeldert-v) wrote : | #10 |
I would prefer one set of statuses, and speaking as the author of a driver that only requires one set, I'm all for making it a driver issue if multiple statuses are required :)
Speaking as an end-user, I really just want some indication that the command I just ran failed.
Change abandoned by Elena Ezhova (<email address hidden>) on branch: master
Review: https:/
Changed in neutron: | |
assignee: | Elena Ezhova (eezhova) → nobody |
Changed in neutron: | |
importance: | Undecided → Medium |
affects: | neutron → octavia |
Michael Johnson (johnsom) wrote : | #12 |
Note: this bug was filed against neutron-lbaas. Octavia has implemented this in the Octavia v2 API and the OpenStack client plugin.
Just trying to triage here. Is there any way via the CLI to query the status of a listener? lbaas-listener-show or something? I'll assign for further triage.
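(For reference, with the Octavia v2 API and its OpenStack client plugin the statuses are shown directly on each object; a hedged sketch, with the values illustrative.)
stack@openstack:~$ openstack loadbalancer listener show MyListener1
...
| operating_status    | OFFLINE |
| provisioning_status | ERROR   |
...
stack@openstack:~$ openstack loadbalancer status show <LOADBALANCER_ID>
(prints the full status tree for the load balancer, including its listeners and pools)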