Looking at the most recent run there I see this on one of the Octavia units:
2020-10-20T13:24:46.755Z|00729|binding|INFO|Dropped 20 log messages in last 43 seconds (most recently, 35 seconds ago) due to excessive rate
2020-10-20T13:24:46.755Z|00730|binding|INFO|Not claiming lport 21b287ea-4595-441c-9915-1f2eef110670, chassis juju-bec91d-6-lxd-7.prodymcprodface.solutionsqa requested-chassis juju-bec91d-6-lxd-7
The same node also has a lot of failures in the octavia-worker log, most likely because it is unable to connect to any of the Amphorae.
Possible root causes for this:
- The Octavia charm uses incorrect hostname when creating its port in Neutron
- The Octavia unit is provided with the wrong fqdn at deploy time making the Open vSwitch init script record an incorrect hostname in the Open_vSwitch table (bug 1896630)
Having 1/3 nodes not working properly would probably be a plausible cause of the rubbish rally results too.
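The failure mode in the log above can be sketched as a plain string comparison: ovn-controller only claims a logical port when its own chassis name exactly matches the port's requested-chassis, so an FQDN recorded on one side and a short hostname on the other never match. A minimal sketch, using the names from the log (the `ovs-vsctl` line is the real command to check what the local chassis registered; the variable values here are just copied from the log excerpt):

```shell
# What ovn-controller registered for this chassis (from Open_vSwitch, per the log):
#   ovs-vsctl get Open_vSwitch . external_ids:hostname
chassis="juju-bec91d-6-lxd-7.prodymcprodface.solutionsqa"
# What Neutron set as requested-chassis on the lport (from the log):
requested="juju-bec91d-6-lxd-7"

# ovn-controller does an exact match; FQDN vs short hostname never matches.
if [ "$chassis" = "$requested" ]; then
  echo "match: lport claimed"
else
  echo "mismatch: lport not claimed"
fi
```

Either root cause above produces this mismatch; they differ only in which side (the Neutron port or the Open_vSwitch record) holds the wrong form of the name.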