Comment 4 for bug 1738768

Daniel Alvarez (dalvarezs) wrote :

@Lujin:

>> (1) could you please provide which version of Neutron you are using? master branch I guess?
It's not the latest master branch, but the latest promoted packages in RDO:

openstack-tripleo-common-8.1.1-0.20171130034833.0e92cba.el7.centos.noarch
openstack-tripleo-puppet-elements-8.0.0-0.20171127180031.cc2c715.el7.centos.noarch
openstack-tripleo-ui-8.0.1-0.20171129193834.1e42711.el7.centos.noarch
openstack-tripleo-common-containers-8.1.1-0.20171130034833.0e92cba.el7.centos.noarch
openstack-tripleo-validations-8.0.1-0.20171129140336.c1f2069.el7.centos.noarch
openstack-tripleo-heat-templates-8.0.0-0.20171130031741.4df242c.el7.centos.noarch
openstack-tripleo-image-elements-8.0.0-0.20171118092222.90b9a25.el7.centos.noarch
openstack-kolla-5.0.0-0.20171107075441.61495b1.el7.centos.noarch

()[root@overcloud-controller-2 /]# rpm -qa | grep neutron
python-neutron-12.0.0-0.20171206144209.1ca38a1.el7.centos.noarch
python-neutron-lbaas-12.0.0-0.20171206032035.0c76484.el7.centos.noarch
openstack-neutron-lbaas-12.0.0-0.20171206032035.0c76484.el7.centos.noarch
python2-neutronclient-6.5.0-0.20171023215239.355983d.el7.centos.noarch
openstack-neutron-common-12.0.0-0.20171206144209.1ca38a1.el7.centos.noarch
python-neutron-fwaas-12.0.0-0.20171206094459.b5b4491.el7.centos.noarch
openstack-neutron-fwaas-12.0.0-0.20171206094459.b5b4491.el7.centos.noarch
openstack-neutron-ml2-12.0.0-0.20171206144209.1ca38a1.el7.centos.noarch
python2-neutron-lib-1.11.0-0.20171129185804.ff5ee17.el7.centos.noarch
openstack-neutron-12.0.0-0.20171206144209.1ca38a1.el7.centos.noarch

>> (2) i think you forgot to paste the references you mentioned in #1
Right:

[0] https://bugs.launchpad.net/kolla/+bug/1616268
[1] https://bugs.launchpad.net/tripleo/+bug/1734333
[2] https://github.com/openstack/tripleo-heat-templates/commit/2e3a91f58bb48d4e7ab88258fbd704975cf1c79c

>> (3) from what you described, you stopped all 3 containers running l3 agent, which means you do not have any running l3 agents now, shouldn't this lead to dataplane downtime for sure?

In non-containerized environments, if everything is up and running and you stop the l3 agents, the dataplane keeps working (the namespaces are still there, ports are connected, flows are installed, etc.). Obviously you lose the control plane for L3, but that's expected. The scenario I'm describing is different since the dataplane is lost as well, which IMO is a regression.
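
For reference, this is roughly how I'd verify it on a controller (a minimal sketch; the container name neutron_l3_agent and the qrouter namespace prefix assume a default TripleO/kolla deployment, so adjust for yours):

# Non-containerized case: stop the agent and confirm the dataplane artifacts survive
systemctl stop neutron-l3-agent
ip netns list | grep qrouter                  # router namespaces still present
ip netns exec qrouter-<router-uuid> ip addr   # qr-/qg- ports still configured
ovs-ofctl dump-flows br-int | wc -l           # flows still installed
ping <floating-ip>                            # traffic still flows

# Containerized scenario described here: run the same checks after stopping
# the l3 agent containers on all three controllers
docker stop neutron_l3_agent                  # container name is an assumption
ping <floating-ip>                            # fails -> dataplane outage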

Thanks,
Daniel