Hi all,

I have a similar issue with Ussuri as well. In my case ovsdb_timeout is 180, and I see the following logs on the compute node side every time OVN Central changes leader because of the snapshot process. The leader change breaks the SB connections of the metadata agents, and each agent does a full sync again after it finds the new leader:
May 11 21:05:11 <omitted> neutron-ovn-metadata-agent[1599694]: 2022-05-11 21:05:11.316 1599694 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.2X.3X.25:6642: clustered database server is not cluster leader; trying another server
May 11 21:05:11 <omitted> neutron-ovn-metadata-agent[1599699]: 2022-05-11 21:05:11.318 1599699 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.2X.3X.25:6642: clustered database server is not cluster leader; trying another server
May 11 21:05:11 <omitted> neutron-ovn-metadata-agent[1599694]: 2022-05-11 21:05:11.318 1599694 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.2X.3X.25:6642: connection closed by client
<omitted>
May 11 21:05:21 <omitted> neutron-ovn-metadata-agent[1599679]: 2022-05-11 21:05:21.332 1599679 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py:124
May 11 21:05:21 <omitted> neutron-ovn-metadata-agent[1599679]: 2022-05-11 21:05:21.339 1599679 INFO neutron.agent.ovn.metadata.agent [-] Connection to OVSDB established, doing a full sync
May 11 21:05:21 <omitted> neutron-ovn-metadata-agent[1599679]: 2022-05-11 21:05:21.354 1599679 DEBUG neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 46cc279b-a6fc-41d8-b4f9-d161bf7f9ef4 provision_datapath /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/agent.py:392
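As I read the logs, the agent holds a leader-only SB connection: it walks the configured server list until one reports itself as cluster leader, and any leadership transfer forces a reconnect followed by a full sync. A minimal sketch of that retry loop (hypothetical names, not the actual ovsdbapp implementation):

```python
def find_leader(servers, is_leader):
    """Return the first endpoint that reports itself as cluster leader.

    `servers` is the ordered list of OVN SB endpoints; `is_leader(server)`
    stands in for the probe that, in the real agent, happens during the
    OVSDB handshake. Returns None if no server claims leadership.
    """
    for server in servers:
        if is_leader(server):
            return server
        # corresponds to the log line: "clustered database server is not
        # cluster leader; trying another server"
    return None

# Toy example: after a snapshot, leadership moved to the third server.
cluster = ["tcp:A:6642", "tcp:B:6642", "tcp:C:6642"]
leader = find_leader(cluster, lambda s: s == "tcp:C:6642")
```

Once a new leader is found, the agent re-establishes its connection and performs the full sync seen above; with three servers that means every snapshot-driven leader change can trigger a sync storm across all compute nodes.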
Does someone have a workaround?
More about the metadata agent process here: https://docs.openstack.org/networking-ovn/latest/contributor/design/metadata_api.html
Tiago Pires