mgmt network only works with manual netplan config and amphora vm is not reachable

Bug #1959334 reported by Dominik Bender
Affects: OpenStack Octavia Charm
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

We deployed a charmed openstack-base bundle (LXD) with Octavia. All charms are fine (green), and the hosts and containers are up to date (focal/xena). In general, networking works fine. We use VLAN over physnet1 and Geneve for our VMs without any problems.

First we tested the default IPv6 mgmt network; it does not work. After that we tried the manual way (creating an IPv4 Geneve network and OpenStack security groups). The port binding was successful, but the o-hm0 interface is stuck in UNKNOWN state and never receives the IP address that we can see in the port config, so we cannot ping the other Octavia health managers. With a manual netplan config using the IP/MAC from the port config, we got the health managers running, and now they can ping each other. The second big problem is that we cannot reach the amphora VM.
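For reference, the manual workaround on octavia/11 can be sketched as a netplan file like the one below. The IP, MAC, and gateway come from the listen-port and route output pasted further down in this report; the file name and the /18 prefix (derived from the 255.255.192.0 mask seen on the amphora) are assumptions:

```yaml
# /etc/netplan/99-octavia-hm.yaml  (file name is an assumption)
network:
  version: 2
  ethernets:
    o-hm0:
      macaddress: "fa:16:3e:c3:4c:39"   # MAC of the Neutron listen-port
      addresses:
        - 10.241.0.173/18               # fixed_ip of the listen-port
```

Applied with `sudo netplan apply`; after that the health managers could reach each other.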

We noticed the bug "IPv6 mgmt network not working, octavia can't talk to Amphora instance" (https://bugs.launchpad.net/charm-octavia/+bug/1911788), but it seems we have no problem with FQDNs.

##########################
App Version Status Scale Charm Store Channel Rev OS Message
octavia 9.0.0 active 3 octavia charmstore stable 38 ubuntu Unit is ready
octavia-hacluster active 3 hacluster charmstore stable 81 ubuntu Unit is ready and clustered
octavia-mysql-router 8.0.27 active 3 mysql-router charmstore stable 15 ubuntu Unit is ready
octavia-ovn-chassis 21.09.0 active 3 ovn-chassis charmstore stable 21 ubuntu Unit is ready

Unit Workload Agent Machine Public address Ports Message
octavia/9* active idle 0/lxd/20 10.6.97.231 9876/tcp Unit is ready
  octavia-hacluster/9* active idle 10.6.97.231 Unit is ready and clustered
  octavia-mysql-router/9* active idle 10.6.97.231 Unit is ready
  octavia-ovn-chassis/9* active idle 10.6.97.231 Unit is ready
octavia/10 active idle 1/lxd/21 10.6.97.233 9876/tcp Unit is ready
  octavia-hacluster/11 active idle 10.6.97.233 Unit is ready and clustered
  octavia-mysql-router/11 active idle 10.6.97.233 Unit is ready
  octavia-ovn-chassis/11 active idle 10.6.97.233 Unit is ready
octavia/11 active idle 2/lxd/22 10.6.97.232 9876/tcp Unit is ready
  octavia-hacluster/10 active idle 10.6.97.232 Unit is ready and clustered
  octavia-mysql-router/10 active idle 10.6.97.232 Unit is ready
  octavia-ovn-chassis/10 active idle 10.6.97.232 Unit is ready

Machine State DNS Inst id Series AZ Message
0 started 10.6.1.112 loc6-rack16-srv12 focal de1 Deployed
0/lxd/20 started 10.6.97.231 juju-872048-0-lxd-20 focal de1 Container started
1 started 10.6.1.113 loc6-rack16-srv13 focal de1 Deployed
1/lxd/21 started 10.6.97.233 juju-872048-1-lxd-21 focal de1 Container started
2 started 10.6.1.114 loc6-rack16-srv14 focal de1 Deployed
2/lxd/22 started 10.6.97.232 juju-872048-2-lxd-22 focal de1 Container started
##########################

Without the manual netplan config we do not see an IPv4 address on the o-hm0 interface:

##########################
juju-872048-2-lxd-22
-------
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ca:70:c8:3d:c9:a1 brd ff:ff:ff:ff:ff:ff
3: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether e6:e0:1e:80:45:92 brd ff:ff:ff:ff:ff:ff
4: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether 8e:34:d7:93:2a:61 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::8c34:d7ff:fe93:2a61/64 scope link
       valid_lft forever preferred_lft forever
5: o-hm0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:c3:4c:39 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f816:3eff:fec3:4c39/64 scope link
       valid_lft forever preferred_lft forever
126: eth0@if127: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:dc:9f:70 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.6.97.232/18 brd 10.6.127.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fedc:9f70/64 scope link
       valid_lft forever preferred_lft forever
##########################

##########################
openstack port show ebe2e485-783d-4014-8708-35e1935b26a2
+-------------------------+-------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------+-------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| allowed_address_pairs | |
| binding_host_id | juju-872048-2-lxd-22.maas |
| binding_profile | |
| binding_vif_details | port_filter='True' |
| binding_vif_type | ovs |
| binding_vnic_type | normal |
| created_at | 2022-01-26T19:16:27Z |
| data_plane_status | None |
| description | |
| device_id | |
| device_owner | neutron:LOADBALANCERV2 |
| device_profile | None |
| dns_assignment | fqdn='host-10-241-0-173.c1.de1.c.mynet.net.', hostname='host-10-241-0-173', ip_address='10.241.0.173' |
| dns_domain | None |
| dns_name | |
| extra_dhcp_opts | |
| fixed_ips | ip_address='10.241.0.173', subnet_id='24ab12bf-f44b-4201-8dd5-1744140d0727' |
| id | ebe2e485-783d-4014-8708-35e1935b26a2 |
| ip_allocation | immediate |
| mac_address | fa:16:3e:c3:4c:39 |
| name | octavia-health-manager-octavia-11-listen-port |
| network_id | 65e62639-4d27-48c8-b910-392a562df940 |
| numa_affinity_policy | None |
| port_security_enabled | True |
| project_id | 2a01fcbcbf5c4de9a29375830e9e2b29 |
| propagate_uplink_status | None |
| qos_network_policy_id | None |
| qos_policy_id | None |
| resource_request | None |
| revision_number | 25 |
| security_group_ids | cbc8d2b2-e45d-4fb6-aa7e-be3b36989380 |
| status | ACTIVE |
| tags | charm-octavia, charm-octavia-octavia-11 |
| trunk_details | None |
| updated_at | 2022-01-26T21:22:08Z |
+-------------------------+-------------------------------------------------------------------------------------------------------+
##########################

##########################
openstack network agent list | grep lxd
| juju-872048-0-lxd-20.maas | OVN Controller agent | juju-872048-0-lxd-20.maas | | :-) | UP | ovn-controller |
| juju-872048-1-lxd-21.maas | OVN Controller agent | juju-872048-1-lxd-21.maas | | :-) | UP | ovn-controller |
| juju-872048-2-lxd-22.maas | OVN Controller agent | juju-872048-2-lxd-22.maas | | :-) | UP | ovn-controller |
##########################

##########################
juju-872048-0-lxd-20.maas
/var/log/ovn/ovn-controller.log
----
2022-01-26T19:17:03.141Z|00069|binding|INFO|Claiming lport 79cc186d-2fd6-4cb2-9f32-68dd13996fbe for this chassis.
2022-01-26T19:17:03.141Z|00070|binding|INFO|79cc186d-2fd6-4cb2-9f32-68dd13996fbe: Claiming fa:16:3e:e5:04:e5 10.241.3.80
2022-01-26T19:17:03.167Z|00071|binding|INFO|Setting lport 79cc186d-2fd6-4cb2-9f32-68dd13996fbe up in Southbound
2022-01-26T19:17:03.168Z|00072|binding|INFO|Setting lport 79cc186d-2fd6-4cb2-9f32-68dd13996fbe ovn-installed in OVS
##########################

##########################
juju-872048-1-lxd-21.maas
/var/log/ovn/ovn-controller.log
----
2022-01-26T19:18:45.737Z|00049|binding|INFO|Claiming lport 4fd7e3fd-7fe7-4682-96dc-815d3597acbd for this chassis.
2022-01-26T19:18:45.737Z|00050|binding|INFO|4fd7e3fd-7fe7-4682-96dc-815d3597acbd: Claiming fa:16:3e:48:33:01 10.241.1.136
2022-01-26T19:18:45.781Z|00051|binding|INFO|Setting lport 4fd7e3fd-7fe7-4682-96dc-815d3597acbd ovn-installed in OVS
2022-01-26T19:18:45.781Z|00052|binding|INFO|Setting lport 4fd7e3fd-7fe7-4682-96dc-815d3597acbd up in Southbound
##########################

##########################
juju-872048-2-lxd-22.maas
/var/log/ovn/ovn-controller.log
----
2022-01-26T19:16:29.397Z|00019|binding|INFO|Claiming lport ebe2e485-783d-4014-8708-35e1935b26a2 for this chassis.
2022-01-26T19:16:29.397Z|00020|binding|INFO|ebe2e485-783d-4014-8708-35e1935b26a2: Claiming fa:16:3e:c3:4c:39 10.241.0.173
2022-01-26T19:16:29.442Z|00021|binding|INFO|Setting lport ebe2e485-783d-4014-8708-35e1935b26a2 up in Southbound
2022-01-26T19:16:29.442Z|00022|binding|INFO|Setting lport ebe2e485-783d-4014-8708-35e1935b26a2 ovn-installed in OVS
##########################

##########################
openstack server show 661f8a9c-0c8b-40d9-889a-45c70599ab19
+-------------------------------------+-------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------------------+-------------------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | de1 |
| OS-EXT-SRV-ATTR:host | loc6-rack16-srv12.maas |
| OS-EXT-SRV-ATTR:hypervisor_hostname | loc6-rack16-srv12.maas |
| OS-EXT-SRV-ATTR:instance_name | instance-0000007e |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2022-01-27T19:44:20.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | lb_mgmt=10.241.3.88 |
| config_drive | True |
| created | 2022-01-27T19:44:08Z |
| flavor | charm-octavia (577702e9-5b53-49b0-b7cd-d81848c6038f) |
| hostId | 59b8cf82e1696d71c6627806e5103992a6c1b62ab505f214eef33873 |
| id | 661f8a9c-0c8b-40d9-889a-45c70599ab19 |
| image | amphora-haproxy-x86_64-ubuntu-20.04-20220118 (626f217f-03f6-466c-b7a9-ffdf923454d2) |
| key_name | None |
| name | amphora-30bea638-1967-405b-8c73-b8166a5d6636 |
| progress | 0 |
| project_id | 2a01fcbcbf5c4de9a29375830e9e2b29 |
| properties | |
| security_groups | name='octavia' |
| status | ACTIVE |
| updated | 2022-01-27T19:44:20Z |
| user_id | 9cd628e308704cb9960677312f1e5e67 |
| volumes_attached | |
+-------------------------------------+-------------------------------------------------------------------------------------+
##########################

##########################

openstack port list | grep octavia
| 4fd7e3fd-7fe7-4682-96dc-815d3597acbd | octavia-health-manager-octavia-10-listen-port | fa:16:3e:48:33:01 | ip_address='10.241.1.136', subnet_id='24ab12bf-f44b-4201-8dd5-1744140d0727' | ACTIVE |
| 79cc186d-2fd6-4cb2-9f32-68dd13996fbe | octavia-health-manager-octavia-9-listen-port | fa:16:3e:e5:04:e5 | ip_address='10.241.3.80', subnet_id='24ab12bf-f44b-4201-8dd5-1744140d0727' | ACTIVE |
| ebe2e485-783d-4014-8708-35e1935b26a2 | octavia-health-manager-octavia-11-listen-port | fa:16:3e:c3:4c:39 | ip_address='10.241.0.173', subnet_id='24ab12bf-f44b-4201-8dd5-1744140d0727' | ACTIVE |
| f9e9a7ac-847a-4ed1-b025-0c6f7f402d4e | octavia-lb-9d397280-c1c3-47a4-8d05-804521dd3bdf | fa:16:3e:36:b6:81 | ip_address='10.1.21.31', subnet_id='b7c3c6e4-4a97-41ba-970f-b03a19a31571' | DOWN |
##########################

########### Amphora VM ############

[ 11.690075] cloud-init[643]: Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'init' at Fri, 28 Jan 2022 17:24:39 +0000. Up 11.53 seconds.
[ 11.692265] cloud-init[643]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
[ 11.696600] cloud-init[643]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
[ 11.700250] cloud-init[643]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
[ 11.702964] cloud-init[643]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
[ 11.706084] cloud-init[643]: ci-info: | ens3 | True | 10.241.0.178 | 255.255.192.0 | global | fa:16:3e:9f:61:48 |
[ 11.709796] cloud-init[643]: ci-info: | ens3 | True | fe80::f816:3eff:fe9f:6148/64 | . | link | fa:16:3e:9f:61:48 |
[ 11.711865] cloud-init[643]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . |
[ 11.715521] cloud-init[643]: ci-info: | lo | True | ::1/128 | . | host | . |
[ 11.720288] cloud-init[643]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
[ 11.723157] cloud-init[643]: ci-info: +++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
[ 11.726504] cloud-init[643]: ci-info: +-------+-----------------+------------+-----------------+-----------+-------+
[ 11.729788] cloud-init[643]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
[ 11.732765] cloud-init[643]: ci-info: +-------+-----------------+------------+-----------------+-----------+-------+
[ 11.735250] cloud-init[643]: ci-info: | 0 | 0.0.0.0 | 10.241.0.1 | 0.0.0.0 | ens3 | UG |
[ 11.738601] cloud-init[643]: ci-info: | 1 | 10.241.0.0 | 0.0.0.0 | 255.255.192.0 | ens3 | U |
[ 11.741094] cloud-init[643]: ci-info: | 2 | 169.254.169.254 | 10.241.0.2 | 255.255.255.255 | ens3 | UGH |
[ 11.748343] cloud-init[643]: ci-info: +-------+-----------------+------------+-----------------+-----------+-------+

Revision history for this message
Dominik Bender (ephermeral) wrote :

/var/log/octavia/octavia-driver-agent.log

2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection [-] [('system library', 'fopen', 'No such file or directory'), ('BIO routines', 'file_ctrl', 'system lib'), ('SSL routines', 'SSL_CTX_use_PrivateKey_file', 'system lib')]: OpenSSL.SSL.Error: [('system library', 'fopen', 'No such file or directory'), ('BIO routines', 'file_ctrl', 'system lib'), ('SSL routines', 'SSL_CTX_use_PrivateKey_file', 'system lib')]
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection Traceback (most recent call last):
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/connection.py", line 108, in run
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection self.idl.run()
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection File "/usr/lib/python3/dist-packages/ovs/db/idl.py", line 247, in run
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection self._session.run()
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection File "/usr/lib/python3/dist-packages/ovs/jsonrpc.py", line 532, in run
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection self.__connect()
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection File "/usr/lib/python3/dist-packages/ovs/jsonrpc.py", line 467, in __connect
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection error, self.stream = ovs.stream.Stream.open(name)
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection File "/usr/lib/python3/dist-packages/ovs/stream.py", line 192, in open
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection error, sock = cls._open(suffix, dscp)
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection File "/usr/lib/python3/dist-packages/ovs/stream.py", line 795, in _open
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection ctx.use_privatekey_file(Stream._SSL_private_key_file)
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection File "/usr/lib/python3/dist-packages/OpenSSL/SSL.py", line 912, in use_privatekey_file
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection self._raise_passphrase_exception()
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection File "/usr/lib/python3/dist-packages/OpenSSL/SSL.py", line 888, in _raise_passphrase_exception
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection _raise_current_error()
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection File "/usr/lib/python3/dist-packages/OpenSSL/_util.py", line 57, in exception_from_error_queue
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection raise exception_type(errors)
2022-01-28 20:06:43.579 224058 ERROR ovsdbapp.backend.ovs_idl.connection OpenSSL.SSL.Error: [('system library', 'fopen', 'No such file or directory'), ('BIO routines', 'file_ctrl', 'system lib'), ('SSL routines', 'SSL_CTX_use_PrivateKey_...


Revision history for this message
Dominik Bender (ephermeral) wrote :

We use a separate overlay network / Juju space: overlay-c1. The charm does not deploy the additional network interface, so we fixed it by adding the space to the constraints:

octavia:
...
  constraints: spaces=overlay-c1
...
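A minimal bundle overlay carrying that fix might look like this (a sketch; the application and space names are taken from the snippet above, and the rest of the bundle is elided):

```yaml
applications:
  octavia:
    constraints: spaces=overlay-c1
```

This can be applied at deploy time, e.g. `juju deploy ./bundle.yaml --overlay octavia-space.yaml`.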
