When the OVS switch is killed (sometimes it doesn't restart) or restarted on a compute node, VM connectivity is lost

Bug #1804842 reported by Candido Campos Rivas
Affects: neutron
Status: Fix Released
Importance: Medium
Assigned to: Slawek Kaplonski

Bug Description

OSP 14

3 controllers + 3 computes + DVR

Several VMs on one compute node, with FIPs.

Problem 1:

[root@compute-2 heat-admin]# systemctl restart openvswitch

FIP connectivity with the undercloud VM is lost and does not recover.

Connectivity with other compute nodes is also lost, but it is recovered by restarting the neutron openvswitch agent container.

Problem 2:

After kill -9 <ovs-vswitchd pid>:

Sometimes the OVS switch is not restarted automatically.

Same problems as in scenario 1.

[root@compute-2 heat-admin]# ps -ef | grep ovs
root 10558 7292 0 12:09 ? 00:00:01 /usr/bin/python2 /bin/privsep-helper --config-file /usr/share/nova/nova-dist.conf --config-file /etc/nova/nova.conf --privsep_context vif_plug_ovs.privsep.vif_plug --privsep_sock_path /tmp/tmpATdioG/privsep.sock
42435 46886 46871 0 13:17 ? 00:00:00 /bin/bash /neutron_ovs_agent_launcher.sh
openvsw+ 49054 1 1 13:20 ? 00:00:00 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --user openvswitch:hugetlbfs --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach
root 49217 17666 0 13:21 pts/0 00:00:00 grep --color=auto ovs
[root@compute-2 heat-admin]#
[root@compute-2 heat-admin]#
[root@compute-2 heat-admin]# ps -ef | grep ovs
root 10558 7292 0 12:09 ? 00:00:01 /usr/bin/python2 /bin/privsep-helper --config-file /usr/share/nova/nova-dist.conf --config-file /etc/nova/nova.conf --privsep_context vif_plug_ovs.privsep.vif_plug --privsep_sock_path /tmp/tmpATdioG/privsep.sock
42435 46886 46871 0 13:17 ? 00:00:00 /bin/bash /neutron_ovs_agent_launcher.sh
openvsw+ 49054 1 0 13:20 ? 00:00:00 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --user openvswitch:hugetlbfs --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach
root 49421 17666 0 13:21 pts/0 00:00:00 grep --color=auto ovs
[root@compute-2 heat-admin]#
[root@compute-2 heat-admin]#
[root@compute-2 heat-admin]# ps -ef | grep ovs
root 10558 7292 0 12:09 ? 00:00:01 /usr/bin/python2 /bin/privsep-helper --config-file /usr/share/nova/nova-dist.conf --config-file /etc/nova/nova.conf --privsep_context vif_plug_ovs.privsep.vif_plug --privsep_sock_path /tmp/tmpATdioG/privsep.sock
42435 46886 46871 0 13:17 ? 00:00:00 /bin/bash /neutron_ovs_agent_launcher.sh
openvsw+ 49054 1 0 13:20 ? 00:00:00 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --user openvswitch:hugetlbfs --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach
root 49423 17666 0 13:21 pts/0 00:00:00 grep --color=auto ovs
[root@compute-2 heat-admin]#
[root@compute-2 heat-admin]#
[root@compute-2 heat-admin]#
[root@compute-2 heat-admin]# kill -9 49054
[root@compute-2 heat-admin]# ps -ef | grep ovs
root 10558 7292 0 12:09 ? 00:00:01 /usr/bin/python2 /bin/privsep-helper --config-file /usr/share/nova/nova-dist.conf --config-file /etc/nova/nova.conf --privsep_context vif_plug_ovs.privsep.vif_plug --privsep_sock_path /tmp/tmpATdioG/privsep.sock
42435 46886 46871 0 13:17 ? 00:00:00 /bin/bash /neutron_ovs_agent_launcher.sh
root 49610 17666 0 13:22 pts/0 00:00:00 grep --color=auto ovs
[root@compute-2 heat-admin]#
[root@compute-2 heat-admin]#
[root@compute-2 heat-admin]#
[root@compute-2 heat-admin]# ps -ef | grep ovs
root 10558 7292 0 12:09 ? 00:00:01 /usr/bin/python2 /bin/privsep-helper --config-file /usr/share/nova/nova-dist.conf --config-file /etc/nova/nova.conf --privsep_context vif_plug_ovs.privsep.vif_plug --privsep_sock_path /tmp/tmpATdioG/privsep.sock
42435 46886 46871 0 13:17 ? 00:00:00 /bin/bash /neutron_ovs_agent_launcher.sh
root 49628 17666 0 13:22 pts/0 00:00:00 grep --color=auto ovs

[root@compute-2 heat-admin]# date
Fri Nov 23 13:22:22 UTC 2018
[root@compute-2 heat-admin]# ps -ef | grep ovs
root 10558 7292 0 12:09 ? 00:00:01 /usr/bin/python2 /bin/privsep-helper --config-file /usr/share/nova/nova-dist.conf --config-file /etc/nova/nova.conf --privsep_context vif_plug_ovs.privsep.vif_plug --privsep_sock_path /tmp/tmpATdioG/privsep.sock
42435 46886 46871 0 13:17 ? 00:00:00 /bin/bash /neutron_ovs_agent_launcher.sh
root 49788 17666 0 13:22 pts/0 00:00:00 grep --color=auto ovs
[root@compute-2 heat-admin]#
[root@compute-2 heat-admin]#
[root@compute-2 heat-admin]# ps -ef | grep ovs
root 10558 7292 0 12:09 ? 00:00:01 /usr/bin/python2 /bin/privsep-helper --config-file /usr/share/nova/nova-dist.conf --config-file /etc/nova/nova.conf --privsep_context vif_plug_ovs.privsep.vif_plug --privsep_sock_path /tmp/tmpATdioG/privsep.sock
42435 46886 46871 0 13:17 ? 00:00:00 /bin/bash /neutron_ovs_agent_launcher.sh
root 49790 17666 0 13:22 pts/0 00:00:00 grep --color=auto ovs

overcloud) [stack@undercloud-0 ~]$ openstack versions show
+-------------+----------------+---------+------------+-----------------------------------+------------------+------------------+
| Region Name | Service Type | Version | Status | Endpoint | Min Microversion | Max Microversion |
+-------------+----------------+---------+------------+-----------------------------------+------------------+------------------+
| regionOne | block-storage | 2.0 | DEPRECATED | http://10.0.0.101:8776/v2/ | None | None |
| regionOne | block-storage | 3.0 | CURRENT | http://10.0.0.101:8776/v3/ | 3.0 | 3.55 |
| regionOne | placement | None | CURRENT | http://10.0.0.101:8778/placement/ | None | None |
| regionOne | network | 2.0 | CURRENT | http://10.0.0.101:9696/v2.0/ | None | None |
| regionOne | alarm | 2.0 | CURRENT | http://10.0.0.101:8042/v2 | None | None |
| regionOne | cloudformation | 1.0 | CURRENT | http://10.0.0.101:8000/v1/ | None | None |
| regionOne | event | 2.0 | CURRENT | http://10.0.0.101:8977/v2 | None | None |
| regionOne | orchestration | 1.0 | CURRENT | http://10.0.0.101:8004/v1/ | None | None |
| regionOne | object-store | 1.0 | CURRENT | http://10.0.0.101:8080/v1/ | None | None |
| regionOne | compute | 2.0 | SUPPORTED | http://10.0.0.101:8774/v2/ | None | None |
| regionOne | compute | 2.1 | CURRENT | http://10.0.0.101:8774/v2.1/ | 2.1 | 2.65 |
| regionOne | image | 2.0 | SUPPORTED | http://10.0.0.101:9292/v2/ | None | None |
| regionOne | image | 2.1 | SUPPORTED | http://10.0.0.101:9292/v2/ | None | None |
| regionOne | image | 2.2 | SUPPORTED | http://10.0.0.101:9292/v2/ | None | None |
| regionOne | image | 2.3 | SUPPORTED | http://10.0.0.101:9292/v2/ | None | None |
| regionOne | image | 2.4 | SUPPORTED | http://10.0.0.101:9292/v2/ | None | None |
| regionOne | image | 2.5 | SUPPORTED | http://10.0.0.101:9292/v2/ | None | None |
| regionOne | image | 2.6 | SUPPORTED | http://10.0.0.101:9292/v2/ | None | None |
| regionOne | image | 2.7 | CURRENT | http://10.0.0.101:9292/v2/ | None | None |
| regionOne | metric | 1.0 | CURRENT | http://10.0.0.101:8041/v1/ | None | None |
| regionOne | identity | 3.10 | CURRENT | http://10.0.0.101:5000/v3/ | None | None |
+-------------+----------------+---------+------------+-----------------------------------+------------------+------------------+
(overcloud) [stack@undercloud-0 ~]$ cat /etc/re
redhat-lsb/ redhat-release request-key.conf request-key.d/ resolv.conf
(overcloud) [stack@undercloud-0 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)

tags: added: ovs
Revision history for this message
YAMAMOTO Takashi (yamamoto) wrote :

By undercloud/overcloud, do you mean TripleO?

Problem 1 means you restarted openvswitch (ovs-vswitchd/ovsdb-server) without restarting neutron-ovs-agent? Maybe the canary flow check is not working well for some reason. Can you provide the log of the ovs agent?

Problem 2 means you killed ovs-vswitchd? That looks unrelated to neutron.
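For context on the canary flow check mentioned above: the agent installs a harmless "canary" flow in a dedicated table of br-int and re-reads it on every polling loop; if the flow is gone, OVS must have restarted and all flows need to be reconfigured. A hedged sketch of that decision logic, with illustrative names only (not neutron's actual API):

```python
from enum import Enum

class OvsStatus(Enum):
    NORMAL = 0      # canary flow present, nothing to do
    DEAD = 1        # vswitchd unreachable
    RESTARTED = 2   # flow table wiped, reconfigure everything

def check_ovs_status(dump_canary_flow):
    """Decide OVS state from a canary-flow dump.

    `dump_canary_flow` stands in for the OpenFlow dump of the canary
    table; it returns the matching flows, or raises RuntimeError when
    the switch cannot be reached.
    """
    try:
        flows = dump_canary_flow()
    except RuntimeError:
        return OvsStatus.DEAD
    if not flows:
        return OvsStatus.RESTARTED
    return OvsStatus.NORMAL

# Empty dump: the flow table was wiped, i.e. ovs-vswitchd restarted.
print(check_ovs_status(lambda: []))  # OvsStatus.RESTARTED
```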

Changed in neutron:
status: New → Incomplete
Revision history for this message
Candido Campos Rivas (ccamposr) wrote :
Download full text (14.1 KiB)

Yes, I refer to the TripleO undercloud VM:

(overcloud) [stack@undercloud-0 ~]$ openstack server list
+--------------------------------------+------------------------+---------+------------------------------------+--------+--------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+------------------------+---------+------------------------------------+--------+--------+
| 7716bfde-d441-4c07-9bb8-b290607a6d4e | selfservice-instance4 | SHUTOFF | selfservice3=10.3.0.16 | cirros | cirros |
| 521d8243-8255-4746-a8d9-5d5d2035a2a1 | selfservice3-instance2 | ACTIVE | selfservice3=10.3.0.8 | cirros | cirros |
| 164dd0d7-cb1e-45ad-abbf-0f8de6dd9f97 | selfservice3-instance | ACTIVE | selfservice3=10.3.0.25 | cirros | cirros |
| 9d21306a-4e6d-4250-9936-9f8ac0895911 | selfservice2-instance5 | ACTIVE | selfservice2=10.2.0.6 | cirros | cirros |
| 05797371-111d-419b-9199-c787f52f0555 | selfservice2-instance3 | ACTIVE | selfservice2=10.2.0.5 | cirros | cirros |
| 6d68a21d-5361-4e37-b0a6-4a9ebfea3e82 | selfservice2-instance2 | ACTIVE | selfservice2=10.2.0.18 | cirros | cirros |
| aa3eed0a-473b-4e71-9c33-23e7aef09f44 | selfservice2-instance | ACTIVE | selfservice2=10.2.0.23, 10.0.0.232 | cirros | cirros |
| d4ad2ca9-21ec-40e6-897d-5ec48e77765f | selfservice-instance3 | ACTIVE | selfservice=10.1.0.5 | cirros | cirros |
| ed68c202-2a57-4078-b459-4e5650adc185 | selfservice-instance2 | ACTIVE | selfservice=10.1.0.20 | cirros | cirros |
| ba63ce4f-f5d6-4634-bd40-868a57c4ecfd | selfservice-instance | ACTIVE | selfservice=10.1.0.12, 10.0.0.234 | cirros | cirros |
+--------------------------------------+------------------------+---------+------------------------------------+--------+--------+
(overcloud) [stack@undercloud-0 ~]$
(overcloud) [stack@undercloud-0 ~]$
(overcloud) [stack@undercloud-0 ~]$ . stackrc
(undercloud) [stack@undercloud-0 ~]$ openstack server list
+--------------------------------------+--------------+--------+------------------------+----------------+------------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+--------------+--------+------------------------+----------------+------------+
| b71c1bab-8e11-4958-bb36-1556c9711e57 | controller-0 | ACTIVE | ctlplane=192.168.24.11 | overcloud-full | controller |
| c4a66c10-b213-4aaa-b575-b69872fd8d28 | controller-2 | ACTIVE | ctlplane=192.168.24.21 | overcloud-full | controller |
| b0cd4f07-3b9b-4f5f-baea-bbaf8454c6f6 | controller-1 | ACTIVE | ctlplane=192.168.24.7 | overcloud-full | controller |
| 97fb6e74-bc9b-4a9e-b369-895fd1401277 | compute-2 | ACTIVE | ctlplane=192.168.24.12 | overcloud-full | compute |
| 0c4d8ad5-6c0a-4508-98ed-34c263b98960 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute |
| 01a11464-a9ab-4f97-a87f-cf7f1290599c | compute-1 | ACTIVE | ctlplane=192.168.24.9 | overcloud-full | compute |
+----...

Revision history for this message
Candido Campos Rivas (ccamposr) wrote :
Download full text (9.9 KiB)

Reproduction logs attached:

[heat-admin@compute-2 ~]$ sudo su
[root@compute-2 heat-admin]# date;systemctl restart openvswitch
Mon Nov 26 08:52:14 UTC 2018
[root@compute-2 heat-admin]#
[root@compute-2 heat-admin]#
[root@compute-2 heat-admin]# date;docker restart neutron_ovs_agent
Mon Nov 26 08:53:19 UTC 2018
neutron_ovs_agent

Logs for problem 2, when openvswitch doesn't restart:

[root@compute-2 heat-admin]# tail -f /var/log/containers/neutron/metadata-agent.log
2018-11-26 08:51:10.883 8224 INFO eventlet.wsgi.server [-] 10.2.0.18,<local> "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" status: 200 len: 139 time: 0.1934390
2018-11-26 08:51:10.968 8223 INFO eventlet.wsgi.server [-] 10.3.0.25,<local> "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" status: 200 len: 139 time: 0.1648369
2018-11-26 09:11:20.883 8224 WARNING oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeart thread processing, retrying...: IOError: Socket closed
2018-11-26 09:11:34.307 7137 ERROR oslo.messaging._drivers.impl_rabbit [-] [357cec88-105c-4f2b-b26c-70ae3864ec0f] AMQP server controller-2.internalapi.localdomain:5672 closed the connection. Check login credentials: Socket closed: IOError: Socket closed
2018-11-26 09:11:35.323 7137 INFO oslo.messaging._drivers.impl_rabbit [-] [357cec88-105c-4f2b-b26c-70ae3864ec0f] Reconnected to AMQP server on controller-2.internalapi.localdomain:5672 via [amqp] client with port 50096.
2018-11-26 09:11:43.938 8223 ERROR oslo.messaging._drivers.impl_rabbit [-] [24405812-662e-4c40-b834-37a07d80366f] AMQP server controller-1.internalapi.localdomain:5672 closed the connection. Check login credentials: Socket closed: IOError: Socket closed
2018-11-26 09:11:44.955 8223 INFO oslo.messaging._drivers.impl_rabbit [-] [24405812-662e-4c40-b834-37a07d80366f] Reconnected to AMQP server on controller-1.internalapi.localdomain:5672 via [amqp] client with port 37994.
2018-11-26 09:11:45.218 8224 ERROR oslo.messaging._drivers.impl_rabbit [-] [7bf56d8a-afcf-4b87-b2dd-c865c2faf08f] AMQP server controller-2.internalapi.localdomain:5672 closed the connection. Check login credentials: Socket closed: IOError: Socket closed
2018-11-26 09:11:46.235 8224 INFO oslo.messaging._drivers.impl_rabbit [-] [7bf56d8a-afcf-4b87-b2dd-c865c2faf08f] Reconnected to AMQP server on controller-2.internalapi.localdomain:5672 via [amqp] client with port 50116.
2018-11-26 09:11:59.117 8223 WARNING oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeart thread processing, retrying...: IOError: Socket closed

vi /var/log/containers/neutron/openvswitch-agent.log

2018-11-26 09:12:04.417 7137 ERROR oslo.messaging._drivers.impl_rabbit [-] [8f9c38d5-3ad8-4cab-8446-a2f07e6d370f] AMQP server controller-1.internalapi.localdomain:5672 closed the connection. Check login credentials: Socket closed: IOError: Socket closed
2018-11-26 09:12:05.444 7137 INFO oslo.messaging._drivers.impl_rabbit [-] [8f9c38d5-3ad8-4cab-8446-a2f07e6d370f] Reconnected to AMQP server on controller-1.internalapi.localdomain:5672 via [amqp] client with port 38042.
2018-11-26 09:12:15.386 8224 WARNING oslo.messaging._drivers.impl_rabbit [-] ...

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

So I can confirm that this issue happens with the l2pop mechanism driver and neutron-ovs-agent: the agent didn't properly configure the OpenFlow rules in br-tun, so the private vxlan network didn't work; a restart of neutron-ovs-agent fixed this.

I think there is a bug in the L2pop mechanism driver: after ovs-vswitchd is restarted, it should reconfigure all fdb flows in the same way as when neutron-ovs-agent is restarted, but the l2pop mech_driver doesn't send all fdb flows to the agent because the agent wasn't just restarted.
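The diagnosis above can be sketched as follows; all names are illustrative, not the actual l2pop driver code. The core idea of the eventual fix is that an ovs-vswitchd restart should trigger the same full FDB sync that a fresh agent start does, instead of only incremental updates:

```python
def fdb_entries_to_send(agent_just_started, ovs_restarted,
                        all_entries, delta_entries):
    """Choose between a full FDB sync and incremental updates.

    Before the fix, only `agent_just_started` triggered a full sync;
    after an ovs-vswitchd restart the agent's flows were gone, yet the
    server still sent only deltas, leaving vxlan tunnels broken.
    Treating an OVS restart like an agent start restores the full sync.
    """
    if agent_just_started or ovs_restarted:
        return all_entries       # full sync: rebuild every tunnel flow
    return delta_entries         # steady state: incremental updates only

full = ["fdb(host-a)", "fdb(host-b)"]
delta = []
# Buggy behaviour: OVS restarted but the restart flag is not raised,
# so only (empty) deltas are sent and tunnels stay broken:
print(fdb_entries_to_send(False, False, full, delta))  # []
# Fixed behaviour: the restart triggers a full resync:
print(fdb_entries_to_send(False, True, full, delta))   # ['fdb(host-a)', 'fdb(host-b)']
```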

Changed in neutron:
status: Incomplete → Confirmed
importance: Undecided → Medium
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/620708

Changed in neutron:
assignee: nobody → Slawek Kaplonski (slaweq)
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (master)

Reviewed: https://review.openstack.org/620708
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=ae031d18866a9e3652f4fc122f120915209a7b29
Submitter: Zuul
Branch: master

commit ae031d18866a9e3652f4fc122f120915209a7b29
Author: Slawek Kaplonski <email address hidden>
Date: Wed Nov 28 22:42:18 2018 +0100

    Force all fdb entries update after ovs-vswitchd restart

    When ovs-vswitchd process is restarted neutron-ovs-agent will
    handle it and reconfigure all ports and openflows in bridges.
    Unfortunatelly when tunnel networks are used together with
    L2pop mechanism driver, this driver will not notice that agent
    lost all openflow config and will not send all fdb entries which
    should be added on host.

    In such case L2pop mechanism driver should behave in same way like
    when neutron-ovs-agent is restarted and send all fdb_entries to
    agent.

    This patch adds "simulate" of agent start flag when ovs_restart is
    handled thus neutron-server will send all fdb_entries to agent and
    tunnels openflow rules can be reconfigured properly.

    Change-Id: I5f1471e20bbad90c4cdcbc6c06d3a4412db55b2a
    Closes-bug: #1804842

Changed in neutron:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/rocky)

Fix proposed to branch: stable/rocky
Review: https://review.openstack.org/623499

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/queens)

Fix proposed to branch: stable/queens
Review: https://review.openstack.org/623502

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/pike)

Fix proposed to branch: stable/pike
Review: https://review.openstack.org/623503

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/rocky)

Reviewed: https://review.openstack.org/623499
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=ae2ef681403d1f103170ea70df1010f006244752
Submitter: Zuul
Branch: stable/rocky

commit ae2ef681403d1f103170ea70df1010f006244752
Author: Slawek Kaplonski <email address hidden>
Date: Wed Nov 28 22:42:18 2018 +0100

    Force all fdb entries update after ovs-vswitchd restart

    When ovs-vswitchd process is restarted neutron-ovs-agent will
    handle it and reconfigure all ports and openflows in bridges.
    Unfortunatelly when tunnel networks are used together with
    L2pop mechanism driver, this driver will not notice that agent
    lost all openflow config and will not send all fdb entries which
    should be added on host.

    In such case L2pop mechanism driver should behave in same way like
    when neutron-ovs-agent is restarted and send all fdb_entries to
    agent.

    This patch adds "simulate" of agent start flag when ovs_restart is
    handled thus neutron-server will send all fdb_entries to agent and
    tunnels openflow rules can be reconfigured properly.

    Change-Id: I5f1471e20bbad90c4cdcbc6c06d3a4412db55b2a
    Closes-bug: #1804842
    (cherry picked from commit ae031d18866a9e3652f4fc122f120915209a7b29)

tags: added: in-stable-rocky
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/pike)

Reviewed: https://review.openstack.org/623503
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=fccc786fd572e4ce3bfd421ac3b1da7e7f31480b
Submitter: Zuul
Branch: stable/pike

commit fccc786fd572e4ce3bfd421ac3b1da7e7f31480b
Author: Slawek Kaplonski <email address hidden>
Date: Wed Nov 28 22:42:18 2018 +0100

    Force all fdb entries update after ovs-vswitchd restart

    When ovs-vswitchd process is restarted neutron-ovs-agent will
    handle it and reconfigure all ports and openflows in bridges.
    Unfortunatelly when tunnel networks are used together with
    L2pop mechanism driver, this driver will not notice that agent
    lost all openflow config and will not send all fdb entries which
    should be added on host.

    In such case L2pop mechanism driver should behave in same way like
    when neutron-ovs-agent is restarted and send all fdb_entries to
    agent.

    This patch adds "simulate" of agent start flag when ovs_restart is
    handled thus neutron-server will send all fdb_entries to agent and
    tunnels openflow rules can be reconfigured properly.

    Change-Id: I5f1471e20bbad90c4cdcbc6c06d3a4412db55b2a
    Closes-bug: #1804842
    (cherry picked from commit ae031d18866a9e3652f4fc122f120915209a7b29)

tags: added: in-stable-pike
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/queens)

Reviewed: https://review.openstack.org/623502
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=dd6a52529e483829e48e2ed62d10bb98307fd3c9
Submitter: Zuul
Branch: stable/queens

commit dd6a52529e483829e48e2ed62d10bb98307fd3c9
Author: Slawek Kaplonski <email address hidden>
Date: Wed Nov 28 22:42:18 2018 +0100

    Force all fdb entries update after ovs-vswitchd restart

    When ovs-vswitchd process is restarted neutron-ovs-agent will
    handle it and reconfigure all ports and openflows in bridges.
    Unfortunatelly when tunnel networks are used together with
    L2pop mechanism driver, this driver will not notice that agent
    lost all openflow config and will not send all fdb entries which
    should be added on host.

    In such case L2pop mechanism driver should behave in same way like
    when neutron-ovs-agent is restarted and send all fdb_entries to
    agent.

    This patch adds "simulate" of agent start flag when ovs_restart is
    handled thus neutron-server will send all fdb_entries to agent and
    tunnels openflow rules can be reconfigured properly.

    Change-Id: I5f1471e20bbad90c4cdcbc6c06d3a4412db55b2a
    Closes-bug: #1804842
    (cherry picked from commit ae031d18866a9e3652f4fc122f120915209a7b29)

tags: added: in-stable-queens
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/ocata)

Fix proposed to branch: stable/ocata
Review: https://review.openstack.org/625081

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 14.0.0.0b1

This issue was fixed in the openstack/neutron 14.0.0.0b1 development milestone.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/ocata)

Reviewed: https://review.openstack.org/625081
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=de6bca3ed184a50580689374d789302193cff5e0
Submitter: Zuul
Branch: stable/ocata

commit de6bca3ed184a50580689374d789302193cff5e0
Author: Slawek Kaplonski <email address hidden>
Date: Wed Nov 28 22:42:18 2018 +0100

    Force all fdb entries update after ovs-vswitchd restart

    When ovs-vswitchd process is restarted neutron-ovs-agent will
    handle it and reconfigure all ports and openflows in bridges.
    Unfortunatelly when tunnel networks are used together with
    L2pop mechanism driver, this driver will not notice that agent
    lost all openflow config and will not send all fdb entries which
    should be added on host.

    In such case L2pop mechanism driver should behave in same way like
    when neutron-ovs-agent is restarted and send all fdb_entries to
    agent.

    This patch adds "simulate" of agent start flag when ovs_restart is
    handled thus neutron-server will send all fdb_entries to agent and
    tunnels openflow rules can be reconfigured properly.

    Change-Id: I5f1471e20bbad90c4cdcbc6c06d3a4412db55b2a
    Closes-bug: #1804842
    (cherry picked from commit ae031d18866a9e3652f4fc122f120915209a7b29)
    (cherry picked from commit dd6a52529e483829e48e2ed62d10bb98307fd3c9)

tags: added: in-stable-ocata
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 11.0.7

This issue was fixed in the openstack/neutron 11.0.7 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 13.0.3

This issue was fixed in the openstack/neutron 13.0.3 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 12.0.6

This issue was fixed in the openstack/neutron 12.0.6 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron ocata-eol

This issue was fixed in the openstack/neutron ocata-eol release.
