Load balancers created on K8s on top of OpenStack Octavia are not working
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
CDK Addons | Fix Released | High | Kevin W Monroe |
Bug Description
We have a k8s cluster created on top of OpenStack. K8s release 1.13/edge
charm-kubernetes-master rev. 724
charm-openstack-integrator
When we create a new load balancer, it is not able to reach its backend pool members.
The load balancer created underneath is an Octavia amphora VM, and the pool members are
kubernetes-worker units on a given port.
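For context, such a load balancer is typically created by exposing a workload through a Service of type LoadBalancer, which the OpenStack cloud provider fulfils by provisioning an Octavia amphora. A minimal sketch (the deployment name `my-app` and the nginx image are illustrative assumptions, not taken from this report):

```shell
# Hypothetical reproduction sketch: create a workload and expose it.
# The cloud provider reacts to the LoadBalancer Service by creating
# an Octavia load balancer whose pool members are the worker nodes.
kubectl create deployment my-app --image=nginx --port=80
kubectl expose deployment my-app --type=LoadBalancer --port=80
# Once Octavia finishes, the VIP shows up under EXTERNAL-IP:
kubectl get service my-app
```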
If we try to query the load balancer on its VIP, the request times out:
grpcurl -connect-timeout 10 -plaintext <LB_IP>:80 describe
Failed to dial target host "REDACTED:80": context deadline exceeded
If, in OpenStack, we add a rule to the default security group of the kubernetes-worker VMs
allowing traffic from the security group of the amphora VM on the specific port with:
openstack security group rule create --ingress --protocol tcp --remote-group <GROUP_
then we are able to use the load balancer.
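The command above is truncated in the report; it presumably follows the standard `openstack security group rule create` syntax. A hedged reconstruction of the workaround, where `<AMPHORA_SG>`, `<WORKER_SG>`, and `<NODE_PORT>` are placeholders and not values from this report:

```shell
# Assumption: <AMPHORA_SG> is the security group attached to the amphora VM,
# <WORKER_SG> the default security group of the kubernetes-worker VMs, and
# <NODE_PORT> the backend port the pool members listen on.
openstack security group rule create \
    --ingress \
    --protocol tcp \
    --remote-group <AMPHORA_SG> \
    --dst-port <NODE_PORT> \
    <WORKER_SG>
```

This permits traffic from any port on instances in the amphora's group to the node port on the workers, which is why the health checks and proxied traffic then succeed.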
description: updated
Changed in charm-kubernetes-master:
  status: Triaged → Invalid
Changed in charm-openstack-integrator:
  status: Triaged → Invalid
Changed in cdk-addons:
  status: In Progress → Fix Committed
Changed in charm-kubernetes-master:
  status: In Progress → Fix Committed
Changed in cdk-addons:
  status: Fix Committed → Fix Released
Changed in charm-kubernetes-master:
  status: Fix Committed → Fix Released
Changed in cdk-addons:
  assignee: nobody → Kevin W Monroe (kwmonroe)
field-critical is subscribed to this.
In the future, please comment on the issue when subscribing field SLA to issues, as defined in the field SLA process for escalating to product engineering. It's easy for us to miss it otherwise.
https://wiki.canonical.com/engineering/FieldSLA