Statuses not shown for non-"loadbalancer" LBaaS objects on CLI

Bug #1585250 reported by Matthew Geldert
Affects: octavia
Status: Invalid
Importance: Medium
Assigned to: Unassigned
Milestone: (none)

Bug Description

There is no indication on the CLI when creation of an LBaaSv2 object (other than a "loadbalancer") has failed...

stack@openstack:~$ neutron lbaas-listener-create --name MyListener1 --loadbalancer MyLB1 --protocol HTTP --protocol-port 80
Created a new listener:
+---------------------------+------------------------------------------------+
| Field                     | Value                                          |
+---------------------------+------------------------------------------------+
| admin_state_up            | True                                           |
| connection_limit          | -1                                             |
| default_pool_id           |                                                |
| default_tls_container_ref |                                                |
| description               |                                                |
| id                        | 5ca664d6-3a3a-4369-821d-e36c87ff5dc2           |
| loadbalancers             | {"id": "549982d9-7f52-48ac-a4fe-a905c872d71d"} |
| name                      | MyListener1                                    |
| protocol                  | HTTP                                           |
| protocol_port             | 80                                             |
| sni_container_refs        |                                                |
| tenant_id                 | 22000d943c5341cd88d27bd39a4ee9cd               |
+---------------------------+------------------------------------------------+

There is no indication of any issue here, and lbaas-listener-show produces the same output. However, in reality, the listener is in an error state...

mysql> select * from lbaas_listeners;
+----------------------------------+--------------------------------------+-------------+-------------+----------+---------------+------------------+--------------------------------------+-----------------+----------------+---------------------+------------------+--------------------------+
| tenant_id                        | id                                   | name        | description | protocol | protocol_port | connection_limit | loadbalancer_id                      | default_pool_id | admin_state_up | provisioning_status | operating_status | default_tls_container_id |
+----------------------------------+--------------------------------------+-------------+-------------+----------+---------------+------------------+--------------------------------------+-----------------+----------------+---------------------+------------------+--------------------------+
| 22000d943c5341cd88d27bd39a4ee9cd | 5ca664d6-3a3a-4369-821d-e36c87ff5dc2 | MyListener1 |             | HTTP     |            80 |               -1 | 549982d9-7f52-48ac-a4fe-a905c872d71d | NULL            |              1 | ERROR               | OFFLINE          | NULL                     |
+----------------------------------+--------------------------------------+-------------+-------------+----------+---------------+------------------+--------------------------------------+-----------------+----------------+---------------------+------------------+--------------------------+
1 row in set (0.00 sec)

How is a CLI user who doesn't have access to the Neutron DB supposed to know an error has occurred (other than "it doesn't work", obviously)?

Revision history for this message
Carl Baldwin (carl-baldwin) wrote :

Just trying to triage here. Is there any way via the cli to query the status of a listener? lbaas-listener-show or something? I'll assign for further triage.

tags: removed: cli errors lbaasv2 statuses
Changed in python-neutronclient:
assignee: nobody → Brandon Logan (brandon-logan)
status: New → Incomplete
Revision history for this message
Elena Ezhova (eezhova) wrote :

Operating and provisioning statuses are not included in RESOURCE_ATTRIBUTE_MAP [1] and do not get included in api_dict [2]. If this were fixed, these attributes would be returned by the API and displayed in the CLI.

[1] https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/extensions/loadbalancerv2.py#L191-L243
[2] https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/data_models.py#L664
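For illustration, a minimal sketch of the kind of change Elena describes, assuming the usual neutron RESOURCE_ATTRIBUTE_MAP conventions (the real neutron-lbaas definitions in the files above may differ in detail; the helper name below is made up):

# Hypothetical sketch: expose the status columns on LBaaS v2 listeners.
LISTENER_STATUS_ATTRS = {
    'provisioning_status': {'allow_post': False, 'allow_put': False,
                            'is_visible': True},   # read-only, returned by the API
    'operating_status': {'allow_post': False, 'allow_put': False,
                         'is_visible': True},
}

# ...and the listener data model would also need to carry the two fields
# into the body it builds for the API (illustrative helper, not the real method):
def add_statuses_to_api_dict(api_dict, listener):
    api_dict['provisioning_status'] = listener.provisioning_status
    api_dict['operating_status'] = listener.operating_status
    return api_dict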

Changed in python-neutronclient:
status: Incomplete → Confirmed
Elena Ezhova (eezhova)
affects: python-neutronclient → neutron
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron-lbaas (master)

Fix proposed to branch: master
Review: https://review.openstack.org/320893

Changed in neutron:
assignee: Brandon Logan (brandon-logan) → Elena Ezhova (eezhova)
status: Confirmed → In Progress
Revision history for this message
Brandon Logan (brandon-logan) wrote :

There is a CLI call to get the statuses of all the LB tree in one call:

neutron lbaas-loadbalancer-status

http://docs.openstack.org/cli-reference/neutron.html

The description there doesn't actually say that it returns the status body, but it does. The reason we couldn't just return the statuses on each object is that there were plans to allow sharing listeners, pools, and members across load balancers. The status of a listener depends on which loadbalancer it is on, so it doesn't make sense to just show it on the listener.
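For illustration, that call returns the whole status tree for a load balancer in one body; a rough sketch of its shape as a Python dict (the exact field layout may differ slightly; the loadbalancer-level values are hypothetical, while the listener values match the DB row in the bug description):

statuses_body = {
    "statuses": {
        "loadbalancer": {
            "id": "549982d9-7f52-48ac-a4fe-a905c872d71d",
            "provisioning_status": "ACTIVE",
            "operating_status": "ONLINE",
            "listeners": [{
                "id": "5ca664d6-3a3a-4369-821d-e36c87ff5dc2",
                "provisioning_status": "ERROR",   # the state hidden from lbaas-listener-show
                "operating_status": "OFFLINE",
                "pools": [],
            }],
        }
    }
}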

nlbaas does not have shared listeners yet, and I'm not sure it ever will. It does, however, have shared-pool support, but pools can only be shared within the context of a loadbalancer. This means multiple listeners within a loadbalancer can point to the same pool. The pool statuses may now depend on which listeners are using it, so the statuses should not be shown on the pool, IMHO.

Revision history for this message
Matthew Geldert (mgeldert-v) wrote :

Brandon, what you say about the operating_status makes sense, but I disagree that it makes sense for the provisioning_status.

When a user creates an object, be it a listener or a pool, the LBaaS driver from the chosen provider is going to perform an action, such as making a REST call to a physical/virtual loadbalancer, which creates a configuration object on that device. If that operation fails, the user should be informed immediately in the response from the "create" call (and it should be visible in subsequent "show" calls). Whether the object was created or not (provisioning_status) does not depend on the health of dependencies; only the operating_status does.

Once the object is implemented on the load-balancing device, the provisioning_status can be set to SUCCESS and forgotten; the operating_status can then reflect the health of dependencies if required.
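A minimal sketch of that split, with hypothetical driver hooks (device_client, update_status and the create_virtual_server/is_virtual_server_up calls are made-up names for illustration, not the neutron-lbaas driver interface), using ACTIVE as the non-error provisioning value:

def create_listener_on_device(listener, device_client, update_status):
    # provisioning_status: did the create call on the device itself succeed?
    try:
        device_client.create_virtual_server(port=listener.protocol_port)
        update_status(listener, provisioning_status='ACTIVE')
    except Exception:
        # Surface the failure so lbaas-listener-create/show can report it.
        update_status(listener, provisioning_status='ERROR')
        raise

def refresh_operating_status(listener, device_client, update_status):
    # operating_status: runtime health, which may depend on other objects.
    healthy = device_client.is_virtual_server_up(port=listener.protocol_port)
    update_status(listener, operating_status='ONLINE' if healthy else 'OFFLINE')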

Revision history for this message
Brandon Logan (brandon-logan) wrote :

Provisioning may depend on what the pool has been linked to, though; it depends on the driver. For example, say a driver communicates with an appliance that creates each listener on its own VM. If a user creates a pool on one listener and then associates that same pool with the other listener, you basically have two different provisioning states for the same pool.

Now it kind of sounds like you're saying that we should show the provisioning status only on the pool-create call and continue showing it until it's successful? That seems even more confusing than what is currently there, and I wholly admit the current method is confusing. I'm all for simplifying the workflow, but I also want to keep in mind the problems that come with sharing entities. It's a tough problem.

As for the listener status, I don't think there are any plans to share listeners across load balancers, and as such a provisioning status, and even an operating status, on the listener might be well received, but I'd rather get other community members' thoughts on that.

Revision history for this message
Matthew Geldert (mgeldert-v) wrote :

In your example, yes, there would be two provisioning states for the pool. But is that example likely to occur in the real world? The bind IP is specified in the loadbalancer object, and the bind port is specified in the listener. So if you have separate VMs for each listener then each VM has to raise the same IP address and, typically, you would hit duplicate address issues. I'm not saying it's an insurmountable problem, but the simplest solution (and the one that fits best with all the loadbalancer/ADC products I've ever seen) would have all the listeners for a given loadbalancer (i.e. ports for a given IP) on the same VM (or cluster of VMs).

Revision history for this message
Carl Baldwin (carl-baldwin) wrote :

@Brandon, as the bug deputy for this week, I'm going to leave triaging this to you.

Revision history for this message
Brandon Logan (brandon-logan) wrote :

So we discussed this in yesterday's meeting:

http://eavesdrop.openstack.org/meetings/octavia/2016/octavia.2016-05-25-20.00.log.html

TL;DR: a possible solution was to show the scoped statuses in the body of the pool on a GET, so there'd be multiple statuses scoped by listener_id. Sounds like it's closer to what you want, but not exactly. I think you would prefer it to be just one set of statuses. I prefer that too, but I'm not sure we can get there. Then again, maybe we can just do it and punt by saying it's a driver issue if they have different statuses per scope.
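To make that concrete, a scoped-status pool body might look roughly like this (purely illustrative of the idea floated in the meeting, which was never implemented; the IDs and pool name are placeholders):

pool_body = {
    "pool": {
        "id": "POOL-ID",
        "name": "MyPool1",
        "statuses": [
            {"listener_id": "LISTENER-ID-1",
             "provisioning_status": "ACTIVE",
             "operating_status": "ONLINE"},
            {"listener_id": "LISTENER-ID-2",
             "provisioning_status": "ERROR",
             "operating_status": "OFFLINE"},
        ],
    }
}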

Revision history for this message
Matthew Geldert (mgeldert-v) wrote :

I would prefer one set of statuses, and speaking as the author of a driver that only requires one set, I'm all for making it a driver issue if multiple statuses are required :)

Speaking as an end-user, I really just want some indication that the command I just ran failed.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on neutron-lbaas (master)

Change abandoned by Elena Ezhova (<email address hidden>) on branch: master
Review: https://review.openstack.org/320893

Elena Ezhova (eezhova)
Changed in neutron:
assignee: Elena Ezhova (eezhova) → nobody
Changed in neutron:
importance: Undecided → Medium
affects: neutron → octavia
Revision history for this message
Michael Johnson (johnsom) wrote :

Note, this bug is against neutron-lbaas. Octavia has implemented this in the Octavia v2 API and the OpenStack client plugin.
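For context, the Octavia v2 API returns the status fields directly in the object body, so "openstack loadbalancer listener show" can display them. A rough sketch of the relevant subset of a listener body (GET /v2/lbaas/listeners/<id>), with illustrative values:

listener_body = {
    "listener": {
        "id": "5ca664d6-3a3a-4369-821d-e36c87ff5dc2",
        "name": "MyListener1",
        "protocol": "HTTP",
        "protocol_port": 80,
        "provisioning_status": "ERROR",    # now visible to the CLI user
        "operating_status": "OFFLINE",
    }
}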

Revision history for this message
Gregory Thiemonge (gthiemonge) wrote : auto-abandon-script

Abandoned after re-enabling the Octavia launchpad.

Changed in octavia:
status: In Progress → Invalid
tags: added: auto-abandon