Hi,

I don't have the same behavior, but it might be related.

------------------------------------------------------------------
openstack port list | grep octavia
| 6fd3e411-0f55-4f57-a49b-8978ac7045be | octavia-health-manager-octavia-0-listen-port | fa:16:3e:4b:c6:48 | ip_address='fc00:bee5:427a:2b79:f816:3eff:fe4b:c648', subnet_id='e7e22722-af55-4b8d-b126-d5cf2e037c0d' | DOWN |
| 8878130c-c8a6-44f1-a668-d669b00a8e0d | octavia-health-manager-octavia-2-listen-port | fa:16:3e:bd:2c:83 | ip_address='fc00:bee5:427a:2b79:f816:3eff:febd:2c83', subnet_id='e7e22722-af55-4b8d-b126-d5cf2e037c0d' | DOWN |
| fdce70ed-b861-4473-a483-2024b2733c75 | octavia-health-manager-octavia-1-listen-port | fa:16:3e:a0:1d:a2 | ip_address='fc00:bee5:427a:2b79:f816:3eff:fea0:1da2', subnet_id='e7e22722-af55-4b8d-b126-d5cf2e037c0d' | DOWN |
------------------------------------------------------------------

------------------------------------------------------------------
openstack network agent list | grep lxd
| juju-37c2ba-2-lxd-16.maas | OVN Controller agent | juju-37c2ba-2-lxd-16.maas | | :-) | UP | ovn-controller |
| juju-37c2ba-0-lxd-18.maas | OVN Controller agent | juju-37c2ba-0-lxd-18.maas | | :-) | UP | ovn-controller |
| juju-37c2ba-1-lxd-17.maas | OVN Controller agent | juju-37c2ba-1-lxd-17.maas | | :-) | UP | ovn-controller |
------------------------------------------------------------------

------------------------------------------------------------------
openstack port show 6fd3e411-0f55-4f57-a49b-8978ac7045be
+-------------------------+--------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| allowed_address_pairs | |
| binding_host_id | juju-37c2ba-2-lxd-16 |
| binding_profile | |
| binding_vif_details | |
| binding_vif_type | binding_failed |
| binding_vnic_type | normal |
| created_at | 2021-01-15T13:17:51Z |
| data_plane_status | None |
| description | |
| device_id | |
| device_owner | neutron:LOADBALANCERV2 |
| dns_assignment | None |
| dns_domain | None |
| dns_name | None |
| extra_dhcp_opts | |
| fixed_ips | ip_address='fc00:bee5:427a:2b79:f816:3eff:fe4b:c648', subnet_id='e7e22722-af55-4b8d-b126-d5cf2e037c0d' |
| id | 6fd3e411-0f55-4f57-a49b-8978ac7045be |
| ip_allocation | immediate |
| mac_address | fa:16:3e:4b:c6:48 |
| name | octavia-health-manager-octavia-0-listen-port |
| network_id | b3c8ea28-3bde-4785-981b-bee5427a2b79 |
| numa_affinity_policy | None |
| port_security_enabled | False |
| project_id | db0842be9c0d40ad9af73b419dfbe123 |
| propagate_uplink_status | None |
| qos_network_policy_id | None |
| qos_policy_id | None |
| resource_request | None |
| revision_number | 8 |
| security_group_ids | 93d34325-df62-4d2c-90c6-f18cdc42224c |
| status | DOWN |
| tags | charm-octavia, charm-octavia-octavia-0 |
| trunk_details | None |
| updated_at | 2021-01-15T13:17:58Z |
+-------------------------+--------------------------------------------------------------------------------------------------------+
------------------------------------------------------------------

------------------------------------------------------------------
hostname -f
juju-37c2ba-2-lxd-16.maas
------------------------------------------------------------------
------------------------------------------------------------------
cat /etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
------------------------------------------------------------------

Take a look at the Neutron ports created by the Octavia charm, for example `octavia-health-manager-octavia-0-listen-port`:

- Does the `binding_host_id` match the FQDN of the Octavia container?
  As you can see, this is not the case: the FQDN is "juju-37c2ba-2-lxd-16.maas" while `binding_host_id` is "juju-37c2ba-2-lxd-16" (the short name).

- Does the `binding_vif_type` field say 'ovs' or does it say 'binding_failed'?
  As you can see, it says "binding_failed".
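In case it helps, this is roughly how I am comparing the hostname on each side. The `ovn-sbctl` command is only a sketch: it has to be run from a node that can reach the OVN southbound database (for example an ovn-central unit), so the exact invocation may differ in your deployment:

------------------------------------------------------------------
# on the octavia unit: FQDN vs. short hostname
hostname -f
hostname -s

# against the OVN southbound DB: the hostname each chassis registered with
ovn-sbctl list Chassis | grep -E '^(name|hostname)'

# what Neutron recorded for the listen port binding
openstack port show octavia-health-manager-octavia-0-listen-port -c binding_host_id -c binding_vif_type
------------------------------------------------------------------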
Here is a snippet of my /var/log/ovn/ovn-controller.log:

2021-01-15T11:03:46.625Z|00001|vlog|INFO|opened log file /var/log/ovn/ovn-controller.log
2021-01-15T11:03:46.627Z|00002|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2021-01-15T11:03:46.627Z|00003|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2021-01-15T11:03:46.631Z|00004|main|INFO|OVS IDL reconnected, force recompute.
2021-01-15T11:03:46.631Z|00005|main|INFO|OVNSB IDL reconnected, force recompute.
2021-01-15T11:14:32.214Z|00001|vlog|INFO|opened log file /var/log/ovn/ovn-controller.log
2021-01-15T11:14:32.217Z|00002|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2021-01-15T11:14:32.217Z|00003|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2021-01-15T11:14:32.221Z|00004|main|INFO|OVS IDL reconnected, force recompute.
2021-01-15T11:14:32.223Z|00005|reconnect|INFO|ssl:192.168.212.27:6642: connecting...
2021-01-15T11:14:32.223Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
2021-01-15T11:14:32.231Z|00007|reconnect|INFO|ssl:192.168.212.27:6642: connected
2021-01-15T11:14:32.236Z|00008|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
2021-01-15T11:14:32.236Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
2021-01-15T11:14:32.237Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
2021-01-15T11:14:32.241Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
2021-01-15T11:14:32.241Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
2021-01-15T11:14:32.250Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
2021-01-15T13:17:52.741Z|00011|binding|INFO|Not claiming lport 6fd3e411-0f55-4f57-a49b-8978ac7045be, chassis juju-37c2ba-2-lxd-16.maas requested-chassis juju-37c2ba-2-lxd-16
2021-01-15T13:18:32.852Z|00012|binding|INFO|Dropped 16 log messages in last 40 seconds (most recently, 36 seconds ago) due to excessive rate
2021-01-15T13:18:32.852Z|00013|binding|INFO|Not claiming lport 6fd3e411-0f55-4f57-a49b-8978ac7045be, chassis juju-37c2ba-2-lxd-16.maas requested-chassis juju-37c2ba-2-lxd-16
2021-01-15T13:19:21.590Z|00014|binding|INFO|Dropped 14 log messages in last 49 seconds (most recently, 48 seconds ago) due to excessive rate
2021-01-15T13:19:21.590Z|00015|binding|INFO|Not claiming lport 6fd3e411-0f55-4f57-a49b-8978ac7045be, chassis juju-37c2ba-2-lxd-16.maas requested-chassis juju-37c2ba-2-lxd-16
2021-01-15T13:19:58.590Z|00016|binding|INFO|Dropped 14 log messages in last 37 seconds (most recently, 36 seconds ago) due to excessive rate
2021-01-15T13:19:58.590Z|00017|binding|INFO|Not claiming lport 6fd3e411-0f55-4f57-a49b-8978ac7045be, chassis juju-37c2ba-2-lxd-16.maas requested-chassis juju-37c2ba-2-lxd-16
2021-01-15T13:20:35.600Z|00018|binding|INFO|Dropped 14 log messages in last 37 seconds (most recently, 36 seconds ago) due to excessive rate
2021-01-15T13:20:35.600Z|00019|binding|INFO|Not claiming lport 6fd3e411-0f55-4f57-a49b-8978ac7045be, chassis juju-37c2ba-2-lxd-16.maas requested-chassis juju-37c2ba-2-lxd-16
2021-01-15T13:21:12.615Z|00020|binding|INFO|Dropped 14 log messages in last 37 seconds (most recently, 36 seconds ago) due to excessive rate
2021-01-15T13:21:12.615Z|00021|binding|INFO|Not claiming lport 6fd3e411-0f55-4f57-a49b-8978ac7045be, chassis juju-37c2ba-2-lxd-16.maas requested-chassis juju-37c2ba-2-lxd-16
2021-01-15T13:21:49.620Z|00022|binding|INFO|Dropped 14 log messages in last 37 seconds (most recently, 36 seconds ago) due to excessive rate
2021-01-15T13:21:49.620Z|00023|binding|INFO|Not claiming lport 6fd3e411-0f55-4f57-a49b-8978ac7045be, chassis juju-37c2ba-2-lxd-16.maas requested-chassis juju-37c2ba-2-lxd-16
2021-01-15T13:22:26.625Z|00024|binding|INFO|Dropped 14 log messages in last 37 seconds (most recently, 36 seconds ago) due to excessive rate

and it continues like this until the end of the file, repeating the same two lines ("Not claiming lport ...", "Dropped xx log messages ..."), nothing else.

So, to summarize, you're saying that the issue is due to the fact that `binding_host_id` is not using the FQDN? Would adding an entry in /etc/hosts fix the issue?

Thanks for your help.

Best regards,
Walid
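PS: just to make the /etc/hosts part of my question concrete, the kind of entry I have in mind would look like the line below, where <container-address> stands for the unit's own IP address (a placeholder, not a value from my environment):

<container-address> juju-37c2ba-2-lxd-16.maas juju-37c2ba-2-lxd-16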