`configure-resources` action doesn't ensure the o-hm0 port is created

Bug #1984192 reported by Nobuto Murata
This bug affects 3 people
Affects                                  Status  Importance  Assigned to  Milestone
OpenStack Neutron API OVN Plugin Charm   New     Undecided   Unassigned
OpenStack Octavia Charm                  New     Undecided   Unassigned

Bug Description

focal-yoga

Even though I confirmed that all units were active except for octavia itself, the `configure-resources` action didn't create o-hm0, and it doesn't treat this as a critical error.

+ juju run-action --wait octavia/leader configure-resources
unit-octavia-0:
  UnitId: octavia/0
  id: "6"
  results:
    Stderr: |
      ovs-vsctl: no row "o-hm0" in table Interface
      Synchronizing state of octavia-api.service with SysV service script with /lib/systemd/systemd-sysv-install.
      Executing: /lib/systemd/systemd-sysv-install disable octavia-api
      Unit /etc/systemd/system/octavia-api.service is masked, ignoring.
      ovs-vsctl: no row "o-hm0" in table Interface <<<<<<<<<<<<<<<<<<<
    Stdout: |
      Reading package lists...
      Building dependency tree...
      Reading state information...
      0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
      Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
      Hit:2 http://ubuntu-cloud.archive.canonical.com/ubuntu focal-updates/yoga InRelease
      Hit:3 http://archive.ubuntu.com/ubuntu focal InRelease
      Hit:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease
      Get:5 http://archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]
      Fetched 108 kB in 1s (81.7 kB/s)
      Reading package lists...
      Adding user systemd-network to group octavia
      inactive
      octavia-api (enabled by site administrator)
  status: completed <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
  timing:
    completed: 2022-08-10 12:26:39 +0000 UTC
    enqueued: 2022-08-10 12:25:41 +0000 UTC
    started: 2022-08-10 12:25:42 +0000 UTC

^^^ no obvious error is reported, yet the status is "completed"

Around that time, Neutron was temporarily returning InternalServerError.

> Neutron API not available yet, deferring port discovery. ("('neutron', 'ports', InternalServerError())")
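
If the failure is transient like this, one option would be for the charm to retry the port discovery a few times before giving up. Below is a minimal sketch only, assuming tenacity and neutronclient's mapped 5xx exceptions are available to the charm; the helper name and the `name` filter are hypothetical, not the charm's actual lookup.

# Sketch only: retry transient Neutron 5xx errors during port discovery.
# discover_hm_port and the `name` filter are illustrative assumptions.
import tenacity
from neutronclient.common import exceptions as neutron_exc


@tenacity.retry(
    retry=tenacity.retry_if_exception_type(
        (neutron_exc.InternalServerError, neutron_exc.ServiceUnavailable)),
    wait=tenacity.wait_exponential(multiplier=2, max=30),
    stop=tenacity.stop_after_attempt(5),
    reraise=True)
def discover_hm_port(neutron_client, port_name):
    """Look up the health manager port, retrying transient 5xx errors."""
    return neutron_client.list_ports(name=port_name).get('ports', [])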

There may be more fundamental places to fix this, but at the very least the `configure-resources` action shouldn't ignore this critical error.
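
For example, the action could probe for the interface after the handlers have run and mark itself failed when it is missing, instead of returning "completed". A minimal sketch, assuming charmhelpers' action_fail/action_set are usable from the action wrapper; verify_hm_port is a hypothetical helper and the ovs-vsctl call mirrors the one seen in the output above.

# Sketch only: make `configure-resources` fail loudly when o-hm0 is missing.
# verify_hm_port is a hypothetical helper; action_fail/action_set are the
# existing charmhelpers primitives for reporting action results.
import subprocess

from charmhelpers.core import hookenv


def verify_hm_port():
    """Fail the running action if the o-hm0 interface was not created."""
    try:
        mac = subprocess.check_output(
            ['ovs-vsctl', 'get', 'Interface', 'o-hm0',
             'external_ids:attached-mac'],
            universal_newlines=True).strip()
    except subprocess.CalledProcessError as e:
        hookenv.action_fail(
            'o-hm0 port was not created; Neutron may not have been ready. '
            'Re-run the action once neutron-api is healthy: {}'.format(e))
        return False
    hookenv.action_set({'o-hm0-attached-mac': mac})
    return True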

2022-08-10 12:26:22 DEBUG unit.octavia/0.configure-resources logger.go:60 Reading package lists...
2022-08-10 12:26:22 DEBUG unit.octavia/0.configure-resources logger.go:60 Building dependency tree...
2022-08-10 12:26:22 DEBUG unit.octavia/0.configure-resources logger.go:60 Reading state information...
2022-08-10 12:26:23 DEBUG unit.octavia/0.configure-resources logger.go:60 0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
2022-08-10 12:26:23 DEBUG unit.octavia/0.configure-resources logger.go:60 Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
2022-08-10 12:26:23 DEBUG unit.octavia/0.configure-resources logger.go:60 Hit:2 http://ubuntu-cloud.archive.canonical.com/ubuntu focal-updates/yoga InRelease
2022-08-10 12:26:23 DEBUG unit.octavia/0.configure-resources logger.go:60 Hit:3 http://archive.ubuntu.com/ubuntu focal InRelease
2022-08-10 12:26:24 DEBUG unit.octavia/0.configure-resources logger.go:60 Hit:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease
2022-08-10 12:26:24 DEBUG unit.octavia/0.configure-resources logger.go:60 Get:5 http://archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]
2022-08-10 12:26:24 DEBUG unit.octavia/0.configure-resources logger.go:60 Fetched 108 kB in 1s (81.7 kB/s)
2022-08-10 12:26:26 DEBUG unit.octavia/0.configure-resources logger.go:60 Reading package lists...
2022-08-10 12:26:31 INFO unit.octavia/0.juju-log server.go:316 Running maybe_do_policyd_overrides
2022-08-10 12:26:31 DEBUG unit.octavia/0.juju-log server.go:316 Cleaning path: /etc/octavia/policy.d
2022-08-10 12:26:31 INFO unit.octavia/0.juju-log server.go:316 Adding user systemd-network to group octavia
2022-08-10 12:26:31 DEBUG unit.octavia/0.configure-resources logger.go:60 Adding user systemd-network to group octavia
2022-08-10 12:26:31 DEBUG unit.octavia/0.configure-resources logger.go:60 inactive
2022-08-10 12:26:31 WARNING unit.octavia/0.configure-resources logger.go:60 Synchronizing state of octavia-api.service with SysV service script with /lib/systemd/systemd-sysv-install.
2022-08-10 12:26:31 WARNING unit.octavia/0.configure-resources logger.go:60 Executing: /lib/systemd/systemd-sysv-install disable octavia-api
2022-08-10 12:26:31 WARNING unit.octavia/0.configure-resources logger.go:60 Unit /etc/systemd/system/octavia-api.service is masked, ignoring.
2022-08-10 12:26:32 INFO unit.octavia/0.juju-log server.go:316 Invoking reactive handler: reactive/octavia_handlers.py:78:setup_endpoint_connection
2022-08-10 12:26:32 INFO unit.octavia/0.juju-log server.go:316 Invoking reactive handler: reactive/octavia_handlers.py:107:setup_neutron_lbaas_proxy
2022-08-10 12:26:32 INFO unit.octavia/0.juju-log server.go:316 Invoking reactive handler: reactive/octavia_handlers.py:129:action_setup_hm_port
2022-08-10 12:26:32 DEBUG unit.octavia/0.juju-log server.go:316 entering setup_hm_port for None
2022-08-10 12:26:32 DEBUG unit.octavia/0.juju-log server.go:316 running setup_hm_port on juju-db1430-2-lxd-3.maas
2022-08-10 12:26:34 DEBUG unit.octavia/0.juju-log server.go:316 Neutron API not available yet, deferring port discovery. ("('neutron', 'ports', InternalServerError())")
2022-08-10 12:26:34 INFO unit.octavia/0.juju-log server.go:316 Invoking reactive handler: reactive/octavia_handlers.py:135:setup_hm_port
2022-08-10 12:26:34 DEBUG unit.octavia/0.juju-log server.go:316 entering setup_hm_port for None
2022-08-10 12:26:34 DEBUG unit.octavia/0.juju-log server.go:316 running setup_hm_port on juju-db1430-2-lxd-3.maas
2022-08-10 12:26:36 DEBUG unit.octavia/0.juju-log server.go:316 Neutron API not available yet, deferring port discovery. ("('neutron', 'ports', InternalServerError())")
2022-08-10 12:26:36 INFO unit.octavia/0.juju-log server.go:316 Invoking reactive handler: reactive/octavia_handlers.py:199:update_controller_ip_port_list
2022-08-10 12:26:36 DEBUG unit.octavia/0.juju-log server.go:316 departing_unit:None
2022-08-10 12:26:36 INFO unit.octavia/0.juju-log server.go:316 Invoking reactive handler: reactive/octavia_handlers.py:252:render
2022-08-10 12:26:36 WARNING unit.octavia/0.configure-resources logger.go:60 ovs-vsctl: no row "o-hm0" in table Interface
2022-08-10 12:26:36 DEBUG unit.octavia/0.juju-log server.go:316 Unable query OVS, not ready? ("Command '['ovs-vsctl', 'get', 'Interface', 'o-hm0', 'external_ids:attached-mac']' returned non-zero exit status 1.")
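
Note that both the deferred port discovery and the failed OVS query are only visible at DEBUG/WARNING level. A minimal sketch of surfacing the condition in the unit's workload status instead, assuming charms.reactive flags and charmhelpers' status_set; the flag name and message are assumptions, not what the charm does today.

# Sketch only: surface deferred o-hm0 setup in the workload status instead of
# hiding it behind a DEBUG log line. The flag name is a hypothetical example.
from charms import reactive
from charmhelpers.core import hookenv


def defer_port_discovery(reason):
    """Record that o-hm0 setup was deferred and tell the operator why."""
    reactive.clear_flag('octavia.hm-port.available')
    hookenv.status_set(
        'waiting',
        'health manager port setup deferred: {}'.format(reason))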

Nobuto Murata (nobuto) wrote:

Around that timestamp, the neutron-api server logged a lot of tracebacks.

2022-08-10 12:26:33.954 68233 ERROR neutron.plugins.ml2.managers [req-6443a79a-27da-4d40-997f-e0a67a7906d4 69340e00bce3472da72577a7f09173a2 6c0ccb4a6ba04dd284cec73002814108 - 21f0738d38024c9ca6d86d112d45fedb 21f0738d38024c9ca6d86d112d45fedb] Mechanism driver 'ovn' failed in create_port_postcommit: RuntimeError: Port group neutron_pg_drop does not exist

2022-08-10 12:26:33.958 68233 ERROR neutron.plugins.ml2.plugin [req-6443a79a-27da-4d40-997f-e0a67a7906d4 69340e00bce3472da72577a7f09173a2 6c0ccb4a6ba04dd284cec73002814108 - 21f0738d38024c9ca6d86d112d45fedb 21f0738d38024c9ca6d86d112d45fedb] mechanism_manager.create_port_postcommit failed, deleting port 'f7c17164-5c78-47db-9c1c-80d323e1c6f4': neutron.plugins.ml2.common.exceptions.MechanismDriverError

2022-08-10 12:28:53.104 68234 ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovsdb_monitor [req-31432dcd-7a60-4a5d-9a12-03460c6ec7e6 - - - - -] HashRing is empty, error: Hash Ring returned empty when hashing "b'93a4cfad-6182-42bf-9842-93e5f9d15720'". This should never happen in a normal situation, please check the status of your cluster: neutron.common.ovn.exceptions.HashRingIsEmpty: Hash Ring returned empty when hashing "b'93a4cfad-6182-42bf-9842-93e5f9d15720'". This should never happen in a normal situation, please check the status of your cluster
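
The create_port_postcommit failure suggests the default neutron_pg_drop port group had not yet been created in the OVN northbound database when Octavia asked for the port. A hedged way to check for it from a unit that can reach the NB DB (assuming ovn-nbctl is installed and configured there; the helper name is ours):

# Sketch only: check whether the default drop port group exists in OVN NB.
# Assumes ovn-nbctl can reach the northbound DB from this unit.
import subprocess


def neutron_pg_drop_exists():
    """Return True if the neutron_pg_drop Port_Group is present in OVN NB."""
    result = subprocess.run(
        ['ovn-nbctl', 'list', 'Port_Group', 'neutron_pg_drop'],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0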

affects: charm-neutron-api → charm-neutron-api-plugin-ovn