Shouldn't wait for kube-api-endpoint relation if openstack relation exists
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Charmed Kubernetes Bundles | Fix Released | Medium | Unassigned | |
Bug Description
When deploying the CDK bundle ( https:/ ), the kubernetes-master units go 'blocked' in 'juju status', waiting for the kube-api-endpoint relation. This is because normally this relation is implemented by the kubeapi-load-balancer charm.
Here is where the 'blocked' status gets reported: https:/
I suggest we stop reporting this 'blocked' status and allow the charm to turn green in 'juju status' output, at least in the case where the 'openstack' relation is present.
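For illustration, the gating could look something like the following sketch in a reactive charm; the flag names and status message are assumptions for illustration, not the actual kubernetes-master code:

```python
# Minimal sketch, assuming a reactive charm and the usual
# 'endpoint.<name>.joined' flag convention; names are illustrative.
from charms.reactive import is_flag_set
from charmhelpers.core.hookenv import status_set

def report_endpoint_status():
    if is_flag_set('endpoint.kube-api-endpoint.joined'):
        return  # relation present, nothing to block on
    if is_flag_set('endpoint.openstack.joined'):
        # The 'openstack' relation can supply API addresses, so a
        # missing kube-api-endpoint relation needn't block the charm;
        # let the other status checks decide the final status.
        return
    status_set('blocked', 'Waiting for kube-api-endpoint relation')
```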
What do you think?
TL;DR: That relation actually is important, and the blocked status is probably correct.
It's been a bit since I worked on this. I recall there being a configuration which allows this to work, but it seems like something perhaps got missed from the bundle fragment and charm code.
The underlying issue is that the way the kube-api-endpoint relation is used is rather messy. The intention is that it is the relation that provides the API addresses for other services, most importantly the workers, to connect to. Conversely, the loadbalancer relation is intended to allow the master to request loadbalancer addresses to advertise as the API addresses. In theory, that should mean that the master would get LB addresses from the loadbalancer relation[1], if present, and then send either the LB addresses or its direct, internal address to the workers over the kube-api-endpoint relation.
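As a sketch of that intended flow (the relation endpoint objects and their methods below are assumptions for illustration, not the real interface code):

```python
# Intended flow: prefer LB addresses from the loadbalancer relation,
# fall back to the master's direct address, and advertise the result
# to the workers over kube-api-endpoint.
def advertise_api_endpoints(loadbalancer, kube_api_endpoint,
                            internal_address, port=6443):
    if loadbalancer is not None and loadbalancer.get_addresses():
        # LB addresses requested over the loadbalancer relation
        # (or equivalently from manual config or haproxy [1]).
        addresses = loadbalancer.get_addresses()
    else:
        # No LB available: advertise the master's internal address.
        addresses = [internal_address]
    kube_api_endpoint.set_api_endpoints(
        [(addr, port) for addr in addresses])
```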
In current practice, however, kubeapi-load-balancer gets injected in between the masters and workers on the kube-api-endpoint relation, and expects the master to always send its internal addresses, which it uses to create LBs; it then sends along the LB addresses rather than the internal addresses it was given, in effect acting like a modifying proxy.[2] It also sends the created LB addresses back to the master over the loadbalancer relation, so that the master can advertise them to external clients as well via the kubeconfig. This means that instead of using the addresses requested on the loadbalancer relation to create the loadbalancers, kubeapi-load-balancer crosses the wires a bit between those two relations, tying them together in a way that it shouldn't really do.
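In pseudo-code, the current behaviour amounts to something like this (again, all names here are illustrative assumptions, not the actual charm code):

```python
def create_loadbalancer(endpoints):
    """Placeholder for the LB that kubeapi-load-balancer configures in
    front of the given master endpoints; returns the LB's address."""
    raise NotImplementedError

def proxy_api_endpoints(master_side, worker_side, loadbalancer_side):
    # Consume the masters' internal addresses from kube-api-endpoint...
    internal_endpoints = master_side.get_api_endpoints()
    lb_address = create_loadbalancer(internal_endpoints)
    # ...but forward the LB address to the workers instead of passing
    # the internal addresses through: a modifying proxy, not a
    # transparent one.
    worker_side.set_api_endpoints([(lb_address, 443)])
    # Also send the LB address back to the masters over the
    # loadbalancer relation, crossing the wires between the two
    # relations.
    loadbalancer_side.set_address(lb_address)
```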
I think the best thing would be to clean up how those relations are used by kubeapi-load-balancer, but there will be upgrade transition issues in sorting that out, so that will require some thinking. (Perhaps it would just be enough to have kubeapi-load-balancer turn the kube-api-endpoint relation into a straight proxy and switch the master to always send the LB addresses, but that would require that the charms be upgraded at the same time.)
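Under that cleanup, the proxy sketch above would collapse to a straight pass-through (same illustrative names as before):

```python
def proxy_api_endpoints(master_side, worker_side):
    # Forward whatever the master advertises, unmodified; the master
    # is now responsible for sending the LB addresses it got over the
    # loadbalancer relation rather than its internal ones.
    worker_side.set_api_endpoints(master_side.get_api_endpoints())
```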
Again, I could have sworn when I left this that there was a configuration that worked and didn't run into this issue, but I can't recall what that was at this point, and looking at it now, it seems like it needs to be fixed.
[1]: It's also possible to get LB addresses from manual config, or haproxy, but in the end, the effect ought to be the same no matter where the LB addresses come from.
[2]: It's also complicated by the fact that the manual LB or haproxy config options might be passed instead, so that they act as an external front to the LB managed by kubeapi-load-balancer.