Shouldn't wait for kube-api-endpoint relation if openstack relation exists

Bug #1854930 reported by Aurelien Lourot
This bug affects 1 person
Affects: Charmed Kubernetes Bundles
Status: Fix Released
Importance: Medium
Assigned to: Unassigned

Bug Description

When deploying the CDK bundle ( https://api.jujucharms.com/charmstore/v5/bundle/canonical-kubernetes-785/archive/bundle.yaml ) with the openstack-lb-overlay ( https://github.com/charmed-kubernetes/bundle/blob/master/overlays/openstack-lb-overlay.yaml ) as advertised on https://ubuntu.com/kubernetes/docs/openstack-integration , the kubernetes-master charm ends up in the blocked status:

```
kubernetes-master/0* blocked idle 2 10.0.8.205 Waiting for kube-api-endpoint relation
```

This is because this relation is normally implemented by the kubeapi-load-balancer charm, but the overlay removes that charm. The overlay instead deploys the openstack-integrator charm, which implements the 'openstack' relation.
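
For context, the overlay is roughly of this shape (a paraphrased sketch from memory, not the exact file contents; the charm URL and relation endpoint names below are assumptions, and the linked overlay is authoritative):

```
# Rough shape of openstack-lb-overlay.yaml (paraphrased; see the linked file
# for the real contents -- charm URL and endpoint names are assumptions).
applications:
  kubeapi-load-balancer: null                    # drop the LB charm from the base bundle
  openstack-integrator:
    charm: cs:~containers/openstack-integrator   # assumed charm store URL
    num_units: 1
relations:
- ['openstack-integrator:clients', 'kubernetes-master:openstack']   # assumed endpoint names
- ['openstack-integrator:clients', 'kubernetes-worker:openstack']   # assumed endpoint names
```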

Here is where the 'blocked' status gets reported: https://github.com/charmed-kubernetes/charm-kubernetes-master/blob/d58cd51/reactive/kubernetes_master.py#L681

I suggest we stop reporting this 'blocked' status and allow the charm to turn green in 'juju status' output, at least in the case where the 'openstack' relation is implemented.

What do you think?

Revision history for this message
Cory Johns (johnsca) wrote :

TL;DR: That relation actually is important, and the blocked status is probably correct.

It's been a bit since I worked on this. I recall there being a configuration which allows this to work, but it seems like something perhaps got missed from the bundle fragment and charm code.

The underlying issue is that the way the kube-api-endpoint relation is used is rather messy. The intention is that it is the relation that provides the API addresses for other services, most importantly the workers, to connect to. Conversely, the loadbalancer relation is intended to allow the master to request loadbalancer addresses to advertise as the API addresses. In theory, that should mean that the master would get LB addresses from the loadbalancer relation[1], if present, and then send either the LB addresses or its direct, internal address to the workers over the kube-api-endpoint relation.

In current practice, however, kubeapi-load-balancer gets injected in between the masters and workers on the kube-api-endpoint relation and expects the master to always send the internal addresses. It uses those to create LBs and then sends along the LB addresses rather than the internal addresses it was given, in effect acting like a modifying proxy.[2] It also sends the created LB addresses back to the master over the loadbalancer relation, so that the master can advertise them to external clients as well via the kubeconfig. This means that instead of using the addresses requested on the loadbalancer relation to create the loadbalancers, kubeapi-load-balancer crosses the wires a bit between those two relations, tying them together in a way that it shouldn't really do.
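
To make the current wiring concrete, the stock bundle relates the charms roughly as follows (a sketch based on the description above; the endpoint names are from memory and may not match the bundle verbatim):

```
# Sketch of the current wiring described above (endpoint names from memory).
relations:
- ['kubernetes-master:kube-api-endpoint', 'kubeapi-load-balancer:apiserver']   # master sends its internal API addresses
- ['kubeapi-load-balancer:website', 'kubernetes-worker:kube-api-endpoint']     # LB passes its own LB addresses on to the workers
- ['kubernetes-master:loadbalancer', 'kubeapi-load-balancer:loadbalancer']     # LB addresses go back to the master for the kubeconfig
```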

I think the best thing would be to clean up how those relations are used by kubeapi-load-balancer, but there will be upgrade transition issues in sorting that out, so that will require some thinking. (Perhaps it would just be enough to have kubeapi-load-balancer turn the kube-api-endpoint relation into a straight proxy and switch the master to always send the LB addresses, but that would require that the charms be upgraded at the same time.)

Again, I could have sworn when I left this that there was a configuration that worked and didn't run into this issue, but I can't recall what that was at this point, and looking at it now, it seems like it needs to be fixed.

[1]: It's also possible to get LB addresses from manual config or haproxy, but in the end the effect ought to be the same no matter where the LB addresses come from.

[2]: It's also complicated by the fact that the manual LB or haproxy config options might be passed instead so that they act as an external front to the LB managed by kubeapi-load-balancer.

Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote :

Thanks for the input, you're right. Here is, I think, the config you have in mind:

```
  loadbalancer-ips:
    type: string
    description: |
      Space separated list of IP addresses of loadbalancers in front of the control plane.
      These can be either virtual IP addresses that have been floated in front of the control
      plane or the IP of a loadbalancer appliance such as an F5. Workers will alternate IP
      addresses from this list to distribute load - for example If you have 2 IPs and 4 workers,
      each IP will be used by 2 workers. Note that this will only work if kubeapi-load-balancer
      is not in use and there is a relation between kubernetes-master:kube-api-endpoint and
      kubernetes-worker:kube-api-endpoint. If using the kubeapi-load-balancer, see the
      loadbalancer-ips configuration variable on the kubeapi-load-balancer charm.
    default: ""
```

https://github.com/charmed-kubernetes/charm-kubernetes-master/blob/d58cd51/config.yaml#L279

So we might as well change openstack-lb-overlay.yaml to connect kubernetes-master:kube-api-endpoint to kubernetes-worker:kube-api-endpoint and to use this config option.
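
Concretely, the suggested overlay change could look something like this (a sketch only, with a placeholder IP; the real fix is whatever lands in the overlay repository):

```
# Sketch of the suggested overlay change (illustrative only).
applications:
  kubeapi-load-balancer: null
  kubernetes-master:
    options:
      loadbalancer-ips: "203.0.113.10"   # placeholder: VIP or LB appliance address in front of the control plane
relations:
- ['kubernetes-master:kube-api-endpoint', 'kubernetes-worker:kube-api-endpoint']
```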

DETAILS:

Affected versions:
* charm-kubernetes-master 1.16.3
* bundle 785/3dc9ac9:
    * https://api.jujucharms.com/charmstore/v5/bundle/canonical-kubernetes-785/archive/bundle.yaml
    * https://github.com/charmed-kubernetes/bundle/blob/3dc9ac9/overlays/openstack-lb-overlay.yaml

Steps to reproduce:

```
# Set up Juju to talk to OpenStack:
juju add-cloud --local # see https://jaas.ai/docs/openstack-cloud
juju autoload-credentials
juju bootstrap openstack-on-lxd --model-default use-floating-ip=true
juju add-model kubernetes
mkdir kubernetes
cd kubernetes/
wget https://raw.githubusercontent.com/charmed-kubernetes/bundle/3dc9ac9/overlays/openstack-lb-overlay.yaml
wget https://api.jujucharms.com/charmstore/v5/bundle/canonical-kubernetes-785/archive/bundle.yaml
sed -i "s/num_units:../num_units: 1/g" bundle.yaml # optional: deploy with fewer units
juju deploy ./bundle.yaml --overlay ./openstack-lb-overlay.yaml --trust
watch -c juju status --color
```

Expected result: all units should eventually turn green
Actual result:

```
Unit Workload Agent Machine Public address Ports Message
easyrsa/0* active idle 0 10.0.8.215 Certificate Authority connected.
etcd/0* active idle 1 10.0.8.204 2379/tcp Healthy with 1 known peer
kubernetes-master/0* blocked idle 2 10.0.8.205 Waiting for kube-api-endpoint relation
  containerd/0* active idle 10.0.8.205 Container runtime available
  flannel/0* active idle 10.0.8.205 Flannel subnet 10.1.48.1/24
kubernetes-worker/0* waiting idle 3 10.0.8.222 Waiting for cluster DNS.
  containerd/1 active idle 10.0.8.222 Container runtime available
  flannel/1 active idle 10.0.8.222 Flannel subnet 10.1.59.1/24
openstack-integrator/0* active idle 4 10.0.8.202 Ready
```

Revision history for this message
George Kraft (cynerva) wrote :

I believe this was fixed in the overlay: https://github.com/charmed-kubernetes/bundle/pull/768

no longer affects: charm-kubernetes-master
Changed in charmed-kubernetes-bundles:
importance: Undecided → Medium
status: New → Triaged
status: Triaged → Fix Released