Functional deployment of Wallaby on Focal, with OVN networking. All instances work fine and OVN has DVR enabled; with DVR enabled or disabled I get the same behaviour from Octavia. I have deployed Octavia in an LXD container as per the documentation. juju status excerpt:

octavia/10*                blocked  idle   1/lxd/11  10.118.0.151  9876/tcp  Virtual network for access to Amphorae is down
  octavia-mysql-router/2*  active   idle             10.118.0.151            Unit is ready
  octavia-ovn-chassis/2*   active   idle             10.118.0.151            Unit is ready

This is with the Octavia charm from openstack-charmers-next, revision 112. With revision 34 the unit goes to "Unit is ready" and load balancers are created and work fine, but the status shown in the Dashboard and in the CLI is OFFLINE for the load balancer, listener and pool (the health check status is ONLINE).

Ports in the Octavia container:

root@juju-b73276-1-lxd-11:/var/log/juju# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 4a:60:15:8e:b5:2b brd ff:ff:ff:ff:ff:ff
5: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ce:28:17:4a:2b:16 brd ff:ff:ff:ff:ff:ff
6: o-hm0: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:32:bf:6c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f816:3eff:fe32:bf6c/64 scope link
       valid_lft forever preferred_lft forever
7: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether 62:be:e5:d5:8a:bb brd ff:ff:ff:ff:ff:ff
    inet6 fe80::60be:e5ff:fed5:8abb/64 scope link
       valid_lft forever preferred_lft forever
70: eth0@if71: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:a6:05:91 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.118.0.151/24 brd 10.118.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fea6:591/64 scope link
       valid_lft forever preferred_lft forever

OVS switch in the Octavia container:

root@juju-b73276-1-lxd-11:/var/log/juju# ovs-vsctl show
ac9d2104-c8c0-4ce7-a1fc-84c386d341bf
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port ovn-os-hos-2
            Interface ovn-os-hos-2
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.118.0.3"}
        Port ovn-os-hos-1
            Interface ovn-os-hos-1
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.118.0.6"}
        Port ovn-os-hos-4
            Interface ovn-os-hos-4
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.118.0.2"}
        Port o-hm0
            Interface o-hm0
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port ovn-os-hos-0
            Interface ovn-os-hos-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.118.0.4"}
        Port ovn-os-hos-3
            Interface ovn-os-hos-3
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.118.0.5"}
    ovs_version: "2.15.0"

ovn-central northd:

switch b0ea442b-a7b2-4bc0-a814-a98bb36fdce2 (neutron-b2d27440-e8be-4da2-8d70-e56b609a0dcf) (aka octavia_mng_network)
    port 4ed8caa7-4871-4917-a8a0-a63ec94bc440
        type: localport
        addresses: ["fa:16:3e:7a:b5:30"]
    port provnet-67100d96-9277-4a8d-8c59-6bbd4ea84e69
        type: localnet
        tag: 903
        addresses: ["unknown"]
    port 89354ec0-8bb7-48ef-946b-2805609a3b9b (aka octavia-health-manager-octavia-10-listen-port)
        addresses: ["fa:16:3e:63:7b:a2 10.11.0.116"]

10.11.0.116 is the address of the deployed health manager port; the network is a physnet connected to an external router, with 10.11.0.1 as the gateway.
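As far as I understand (an assumption on my side, not something I have verified in the charm code), ovn-controller only claims a logical port on a chassis when an OVS interface on that chassis carries that port's UUID in external_ids:iface-id, which is what I would expect octavia-ovn-chassis to set on o-hm0. A quick check I can run on the unit, using the port UUID from the output above:

# On the octavia unit: does o-hm0 advertise the health manager port to ovn-controller?
ovs-vsctl get Interface o-hm0 external_ids
# I would expect something like {iface-id="89354ec0-8bb7-48ef-946b-2805609a3b9b", ...}
# if the charm has wired the port up; an empty map would explain why no chassis
# ever claims the port.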
For some reason, the Octavia unit doesn't claim this port on its chassis. Before upgrading to charm revision 112 (i.e. on revision 34), I could ping the amphora from the external router and SSH into it, but it wasn't able to push health checks to 10.11.0.116 on port 5555 because that address was unreachable from the amphora. Now, because of the changes in 112, the unit stays in blocked state and I cannot deploy a load balancer to re-test the behaviour I saw with 34.

OVN southbound:

root@juju-b73276-3-lxd-3:~# ovn-sbctl show
Chassis os-host-3.maas
    hostname: os-host-3.maas
    Encap geneve
        ip: "10.118.0.4"
        options: {csum="true"}
    Port_Binding "3dc6654b-a36c-495b-a7fd-626630ff70f6"
Chassis os-host-5.maas
    hostname: os-host-5.maas
    Encap geneve
        ip: "10.118.0.6"
        options: {csum="true"}
    Port_Binding "3d4647c9-4e1e-469f-9f36-943cef583fb8"
    Port_Binding "41cbaf82-f099-488e-b224-302de44d3519"
    Port_Binding "8fc333ee-36f7-40d0-a5e2-13d318621a79"
    Port_Binding "dd7e7225-73a2-4c4e-bd33-6ce79e7ec727"
Chassis os-host-4-ceph.maas
    hostname: os-host-4-ceph.maas
    Encap geneve
        ip: "10.118.0.5"
        options: {csum="true"}
Chassis os-host-2.maas
    hostname: os-host-2.maas
    Encap geneve
        ip: "10.118.0.3"
        options: {csum="true"}
    Port_Binding "3b42f86a-bc6b-4cbd-92aa-2cbbb19b46f3"
    Port_Binding "870eae6d-340f-4bfb-b357-7e6aa26cbe29"
    Port_Binding "c5b7483b-2ff1-45ba-b1dc-05361b0ff140"
    Port_Binding "c41b849f-b62a-44db-97fb-178c9e456040"
    Port_Binding "69acfa39-ea94-4365-8831-57fb986ecf85"
Chassis juju-b73276-1-lxd-11.maas
    hostname: juju-b73276-1-lxd-11.maas
    Encap geneve
        ip: "10.118.0.151"
        options: {csum="true"}
Chassis os-host-1.maas
    hostname: os-host-1.maas
    Encap geneve
        ip: "10.118.0.2"
        options: {csum="true"}
    Port_Binding cr-lrp-d4100feb-d459-4d59-93c2-2c8243ea2a0b
    Port_Binding cr-lrp-32f36b13-d533-4995-9636-b82ae8b44a0e

juju-b73276-1-lxd-11.maas / 10.118.0.151 is the Octavia unit, and its hostname matches:

root@juju-b73276-1-lxd-11:~# hostname -f
juju-b73276-1-lxd-11.maas

openstack port show:

root@maas-region:~/openstack# openstack port show 89354ec0-8bb7-48ef-946b-2805609a3b9b
+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                   | Value                                                                                                                                               |
+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up          | UP                                                                                                                                                  |
| allowed_address_pairs   |                                                                                                                                                     |
| binding_host_id         | juju-b73276-1-lxd-11.maas                                                                                                                           |
| binding_profile         |                                                                                                                                                     |
| binding_vif_details     |                                                                                                                                                     |
| binding_vif_type        | binding_failed                                                                                                                                      |
| binding_vnic_type       | normal                                                                                                                                              |
| created_at              | 2021-10-07T06:53:22Z                                                                                                                                |
| data_plane_status       | None                                                                                                                                                |
| description             |                                                                                                                                                     |
| device_id               |                                                                                                                                                     |
| device_owner            | neutron:LOADBALANCERV2                                                                                                                              |
| dns_assignment          | None                                                                                                                                                |
| dns_domain              | None                                                                                                                                                |
| dns_name                | None                                                                                                                                                |
| extra_dhcp_opts         |                                                                                                                                                     |
| fixed_ips               | ip_address='10.11.0.116', subnet_id='d7cabad2-647f-464d-9d2d-c86d2ff93f15'                                                                         |
| id                      | 89354ec0-8bb7-48ef-946b-2805609a3b9b                                                                                                                |
| ip_allocation           | immediate                                                                                                                                           |
| location                | cloud='', project.domain_id=, project.domain_name=, project.id='d6f911ca2f714faa8f676aa66d651631', project.name=, region_name='RegionOne', zone=   |
| mac_address             | fa:16:3e:63:7b:a2                                                                                                                                   |
| name                    | octavia-health-manager-octavia-10-listen-port                                                                                                       |
| network_id              | b2d27440-e8be-4da2-8d70-e56b609a0dcf                                                                                                                |
| port_security_enabled   | True                                                                                                                                                |
| project_id              | d6f911ca2f714faa8f676aa66d651631                                                                                                                    |
| propagate_uplink_status | None                                                                                                                                                |
| qos_network_policy_id   | None                                                                                                                                                |
| qos_policy_id           | None                                                                                                                                                |
| resource_request        | None                                                                                                                                                |
| revision_number         | 84                                                                                                                                                  |
| security_group_ids      | 2bc26f77-1dbd-4c52-8876-be4c9f6b11d5                                                                                                                |
| status                  | DOWN                                                                                                                                                |
| tags                    | charm-octavia, charm-octavia-octavia-10                                                                                                             |
| trunk_details           | None                                                                                                                                                |
| updated_at              | 2021-10-07T07:41:54Z                                                                                                                                |
+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------+

There are no errors in the ovn-controller log on the Octavia unit. Juju logs:

2021-10-07 07:41:46 INFO unit.octavia/10.juju-log server.go:325 Initializing Leadership Layer (is leader)
2021-10-07 07:41:46 INFO unit.octavia/10.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack.py:64:default_update_status
2021-10-07 07:41:46 INFO unit.octavia/10.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack_api.py:6:default_amqp_connection
2021-10-07 07:41:46 INFO unit.octavia/10.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack_api.py:20:default_setup_database
2021-10-07 07:41:46 INFO unit.octavia/10.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack_api.py:37:default_setup_endpoint_connection
2021-10-07 07:41:46 INFO unit.octavia/10.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack.py:82:check_really_is_update_status
2021-10-07 07:41:46 INFO unit.octavia/10.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack.py:93:run_default_update_status
2021-10-07 07:41:46 INFO unit.octavia/10.juju-log server.go:325 Invoking reactive handler: reactive/layer_openstack.py:126:default_request_certificates
2021-10-07 07:41:47 INFO unit.octavia/10.juju-log server.go:325 Invoking reactive handler: reactive/octavia_handlers.py:47:sdn_joined
2021-10-07 07:41:47 INFO unit.octavia/10.juju-log server.go:325 Invoking reactive handler: reactive/octavia_handlers.py:127:action_setup_hm_port
2021-10-07 07:41:49 INFO unit.octavia/10.juju-log server.go:325 toggling port 89354ec0-8bb7-48ef-946b-2805609a3b9b (admin_state_up: True status: DOWN binding:vif_type: binding_failed)
2021-10-07 07:42:01 INFO unit.octavia/10.juju-log server.go:325 Invoking reactive handler: hooks/relations/tls-certificates/requires.py:79:joined:certificates
2021-10-07 07:42:01 INFO unit.octavia/10.juju-log server.go:325 Invoking reactive handler: hooks/relations/ovsdb-subordinate/requires.py:141:joined:ovsdb-subordinate
2021-10-07 07:42:01 INFO unit.octavia/10.juju-log server.go:325 Invoking reactive handler: hooks/relations/ovsdb-cms/requires.py:43:joined:ovsdb-cms
2021-10-07 07:42:01 INFO unit.octavia/10.juju-log server.go:325 ovsdb-cms: OVSDBCMSRequires -> joined
2021-10-07 07:42:01 INFO unit.octavia/10.juju-log server.go:325 ovsdb-cms: OVSDBCMSRequires -> joined
2021-10-07 07:42:03 INFO juju.worker.uniter.operation runhook.go:152 ran "update-status" hook (via explicit, bespoke hook script)

I deleted the port and ran the configure-resources action so the charm would recreate it, rebooted the unit, did a pause/resume, and ran configure-resources again; the status is the same.
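The next thing I plan to try, in case it is relevant, is binding the port by hand and watching whether the southbound database picks it up. This is only a sketch: the UUIDs are the ones from the outputs above, and I am assuming that setting external_ids:iface-id on o-hm0 manually is equivalent to what the charm normally does for the health manager port.

# On the octavia unit: point o-hm0 at the health manager port so ovn-controller can claim it
ovs-vsctl set Interface o-hm0 \
    external_ids:iface-id=89354ec0-8bb7-48ef-946b-2805609a3b9b

# On ovn-central: check whether the logical port now gets bound to the LXD chassis
ovn-sbctl --columns=logical_port,chassis find Port_Binding \
    logical_port=89354ec0-8bb7-48ef-946b-2805609a3b9b

# From a machine with the OpenStack CLI: see if Neutron moves the port out of binding_failed/DOWN
openstack port show 89354ec0-8bb7-48ef-946b-2805609a3b9b -c binding_vif_type -c status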