neutron-lbaasv2-agent: TypeError: argument of type 'LoadBalancer' is not iterable

Bug #1674156 reported by Francois Deppierraz
This bug affects 2 people
Affects                   Status    Importance  Assigned to  Milestone
octavia                   Invalid   Undecided   brenda
neutron-lbaas (Ubuntu)    Invalid   Low         Unassigned

Bug Description

Is somebody actually running neutron LBaaSv2 with haproxy on Ubuntu 16.04?

root@controller1:~# dpkg -l neutron-lbaasv2-agent
ii neutron-lbaasv2-agent 2:8.3.0-0ubuntu1 all Neutron is a virtual network service for Openstack - LBaaSv2 agent
root@controller1:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial
root@controller1:~#

From /var/log/neutron/neutron-lbaasv2-agent.log:

2017-03-19 20:39:06.694 4528 INFO neutron.common.config [-] Logging enabled!
2017-03-19 20:39:06.694 4528 INFO neutron.common.config [-] /usr/bin/neutron-lbaasv2-agent version 8.3.0
2017-03-19 20:39:06.702 4528 WARNING oslo_config.cfg [req-9a6a669c-5a5a-4b6c-8c2f-2b1edd9462d9 - - - - -] Option "default_ipv6_subnet_pool" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future.
2017-03-19 20:39:07.033 4528 ERROR neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver [-] <neutron_lbaas.services.loadbalancer.data_models.LoadBalancer object at 0x7f881b430e90>
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager [-] Unable to deploy instance for loadbalancer: c49473a7-b956-4a5d-8215-703335eb3320
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager Traceback (most recent call last):
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager File "/usr/lib/python2.7/dist-packages/neutron_lbaas/agent/agent_manager.py", line 185, in _reload_loadbalancer
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager self.device_drivers[driver_name].deploy_instance(loadbalancer)
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager File "/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in inner
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager return f(*args, **kwargs)
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager File "/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 332, in deploy_instance
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager if not logical_config or not self._is_active(logical_config):
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager File "/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 310, in _is_active
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager if ('vip' not in logical_config or
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager TypeError: argument of type 'LoadBalancer' is not iterable
2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager

/etc/neutron/neutron_lbaas.conf:
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

Looking at the code, I don't see how this can actually work.

/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py +310

    def _is_active(self, logical_config):
        LOG.error(logical_config)
        # haproxy will be unable to start without any active vip
==> if ('vip' not in logical_config or
                (logical_config['vip']['status'] not in
                 constants.ACTIVE_PENDING_STATUSES) or
                not logical_config['vip']['admin_state_up']):
            return False

/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/data_models.py:

class LoadBalancer(BaseDataModel):

    fields = ['id', 'tenant_id', 'name', 'description', 'vip_subnet_id',
              'vip_port_id', 'vip_address', 'provisioning_status',
              'operating_status', 'admin_state_up', 'vip_port', 'stats',
              'provider', 'listeners', 'pools', 'flavor_id']

    def __init__(self, id=None, tenant_id=None, name=None, description=None,
                 vip_subnet_id=None, vip_port_id=None, vip_address=None,
                 provisioning_status=None, operating_status=None,
                 admin_state_up=None, vip_port=None, stats=None,
                 provider=None, listeners=None, pools=None, flavor_id=None):
        self.id = id
        self.tenant_id = tenant_id
        self.name = name
        self.description = description
        self.vip_subnet_id = vip_subnet_id
        self.vip_port_id = vip_port_id
        self.vip_address = vip_address
        self.operating_status = operating_status
        self.provisioning_status = provisioning_status
        self.admin_state_up = admin_state_up
        self.vip_port = vip_port
        self.stats = stats
        self.provider = provider
        self.listeners = listeners or []
        self.flavor_id = flavor_id
        self.pools = pools or []
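
For context, here is a minimal sketch (with a stand-in class, not the real data model) of why that membership test produces exactly this TypeError: `'vip' not in logical_config` only works on objects that implement __contains__ or __iter__, so handing the v1 driver's _is_active() a v2 LoadBalancer data model object instead of the dict it expects blows up:

    # Minimal sketch: a plain object with no __contains__/__iter__, standing in
    # for the v2 LoadBalancer data model that the v1 namespace driver receives.
    class FakeLoadBalancer(object):
        def __init__(self, vip_address=None):
            self.vip_address = vip_address

    lb = FakeLoadBalancer(vip_address='10.0.0.10')
    try:
        'vip' in lb   # same membership test as namespace_driver._is_active()
    except TypeError as exc:
        print(exc)    # argument of type 'FakeLoadBalancer' is not iterable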

Chuck Short (zulcss)
affects: neutron-lbaas (Ubuntu) → neutron
Changed in neutron-lbaas (Ubuntu):
status: New → Triaged
importance: Undecided → Low
Revision history for this message
Akihiro Motoki (amotoki) wrote :

neutron-lbaas is now maintained under a separate project, "octavia". The octavia project is in charge of neutron-lbaas as well.

affects: neutron → octavia
brenda (tian-mingming)
Changed in octavia:
assignee: nobody → brenda (tian-mingming)
Revision history for this message
brenda (tian-mingming) wrote :

This does not seem to be a bug. The cause is that the v1 haproxy driver is configured instead of the v2 driver.
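
For illustration only, a sketch of /etc/neutron/neutron_lbaas.conf with the v1 LOADBALANCER entry dropped and only the v2 provider kept (same class path as in the bug description):

    [service_providers]
    # keep only the LBaaSv2 provider; remove the LOADBALANCER:Haproxy (v1) line
    service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default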

Revision history for this message
Michael Johnson (johnsom) wrote :

Marking invalid due to v1 driver.

Changed in octavia:
status: New → Invalid
Changed in neutron-lbaas (Ubuntu):
status: Triaged → Invalid