Juju k8s controller is not getting configuration parameters correctly

Bug #1847084 reported by David

Affects: Canonical Juju
Status: Fix Released
Importance: High
Assigned to: Ian Booth

Bug Description

Hello,

We have hit a bug when bootstrapping a Juju controller on k8s (microk8s): the values in the configuration file passed at bootstrap time are ignored.

Steps to reproduce it
===========================

1. Create a controller.yaml file with the configuration parameters.

no-proxy: localhost,127.0.0.1,10.48.129.103,10.152.183.0/24
apt-http-proxy: http://squid.internal:3128
apt-https-proxy: http://squid.internal:3128
apt-ftp-proxy: http://squid.internal:3128
juju-http-proxy: http://squid.internal:3128
juju-https-proxy: http://squid.internal:3128
juju-ftp-proxy: http://squid.internal:3128

2. Bootstrap k8s controller

juju bootstrap microk8s osm-on-k8s --config=controller.yaml --debug -v

Full log: https://pastebin.canonical.com/p/fyQX4FrVsq/
Relevant output from that log:

  apt-ftp-proxy: http://squid.internal:3128
  apt-http-proxy: http://squid.internal:3128
  apt-https-proxy: http://squid.internal:3128
  apt-no-proxy: ""
  ftp-proxy: ""
  http-proxy: ""
  https-proxy: ""
  juju-ftp-proxy: http://squid.internal:3128
  juju-http-proxy: http://squid.internal:3128
  juju-https-proxy: http://squid.internal:3128
  juju-no-proxy: 127.0.0.1,localhost,::1
  no-proxy: localhost,127.0.0.1,10.48.129.103,10.152.183.0/24
  proxy-ssh: false
  snap-http-proxy: ""
  snap-https-proxy: ""
  snap-store-proxy-url: ""
  snap-store-proxy: ""

3. Add model

juju add-model osm

I tried with `juju add-model osm --config controller.yaml`, but I got the same result.

4. Show controller and model configuration:

juju controller-config (https://pastebin.canonical.com/p/gpfshsDMFd/)
juju model-config (https://pastebin.canonical.com/p/jtyWR4jFDp/)

5. Enter the controller pod:
kubectl -n controller-osm-on-k8s exec -it controller-0 -c api-server bash
root@controller-0:/var/lib/juju# apt update
root@controller-0:/var/lib/juju# apt install curl -y
root@controller-0:/var/lib/juju# curl https://google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://www.google.com/">here</A>.
</BODY></HTML>

If it were using the https_proxy I specified, I would be getting this output:
davigar15@Canonical:~$ export https_proxy=http://squid.internal:3128
davigar15@Canonical:~$ curl https://google.com
curl: (56) Received HTTP code 403 from proxy after CONNECT

Revision history for this message
David (davigar15) wrote :

In the logs you'll see I was using 2.7-beta1, but I'm getting the same result with juju 2.6/stable

Changed in juju:
status: New → Triaged
importance: Undecided → High
milestone: none → 2.7-beta1
Revision history for this message
Harry Pidcock (hpidcock) wrote :

Can we get some more information about this, please?

What do you expect to happen with the proxy env vars? Should they be applied to just the controller, or do you want them injected into every container on every pod managed by Juju?

Also, in relation to the k8s cluster, is it just microk8s, or is it also intended to work on CDK/GKE/AKS/EKS, etc.?

Where would the proxy server be running? Inside the cluster or outside? If inside, is it running as a service?

Is the cluster using Istio or a CNI like Calico or Flannel?

Changed in juju:
status: Triaged → Incomplete
Revision history for this message
Tim Penhey (thumper) wrote : Re: [Bug 1847084] Re: Juju k8s controller is not getting configuration parameters correctly

I think what we have here is a misunderstanding between --config and
--model-defaults

Configuration that is passed in with --config applies JUST to the
controller model.

If you are wanting particular configuration to apply to new models as
well, then you need to specify --model-defaults.

Any config that is passed in to model-defaults will also apply to the
controller model.
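
For illustration only (reusing the controller.yaml from the bug description; check `juju bootstrap --help` for the exact flag spelling in your Juju version), that would look roughly like:

juju bootstrap microk8s osm-on-k8s --model-default controller.yaml

or, on an already-bootstrapped controller:

juju model-defaults juju-http-proxy=http://squid.internal:3128 juju-https-proxy=http://squid.internal:3128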

Tim

Harry Pidcock (hpidcock)
Changed in juju:
assignee: nobody → Harry Pidcock (hpidcock)
Revision history for this message
David (davigar15) wrote :

Hello Harry,

The proxy env vars should be applied to the controller and also injected into every container on every pod managed by Juju.

It is intended to work on every k8s cluster, but it seems to be a k8s-independent issue.

The proxy server is running outside the cluster.

The cluster is not using Istio or a CNI like Calico or Flannel.

For more information, the issue we're having in the field is that when we do `juju deploy osm` into that model, the charms in the bundle are resolved correctly, but when the first one is about to be uploaded, we get this output:

Get https://api.jujucharms.com/charmstore/v5/~charmed-osm/grafana-k8s-21/meta/any?include=id&include=supported-series&include=published: dial tcp 162.213.33.122:443: i/o timeout

Revision history for this message
Tim Penhey (thumper) wrote :

Hi @David,

In the past I may have agreed with you, but Juju did change for a reason.

Originally Juju just supported the http_proxy type model config, and those values were written to the machine's /etc/environment.d directory to be included in the standard system environment, but there were too many weird edge cases to support around that behaviour.

New model config keys were introduced with a juju- prefix (juju-http-proxy and its ilk). These configuration values are exported as part of the charm environment (with a JUJU_ prefix). It is then up to the charm to decide whether to use the proxy or not. Sometimes the charm knows that no proxies are needed for internal communication, and that the proxy is only needed for access outside the model.

For this reason, Juju does not automatically set the proxy variables for any workload. If a charm should use the proxies when they are configured, it is the charm's responsibility to set the proxy variables in the podspec for its container.
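
As a rough sketch only (podspec schemas vary across Juju versions; the container name and image here are placeholders), a charm that decides it does need the proxy could pass the values through like this:

containers:
  - name: workload
    image: workload-image:latest
    config:
      # these entries become environment variables in the workload container;
      # in practice the charm would fill them in from the JUJU_-prefixed
      # proxy values in its own environment
      HTTP_PROXY: http://squid.internal:3128
      HTTPS_PROXY: http://squid.internal:3128
      NO_PROXY: localhost,127.0.0.1,10.152.183.0/24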

Revision history for this message
Ian Booth (wallyworld) wrote :

Tim's answer above is relevant to charm-deployed workloads, but one thing he didn't realise is that we don't currently run the proxy update worker in a k8s controller. So the controller internals, including the bit that connects to the charm store, do not use a configured proxy.

Whether proxy env vars are the best approach in general for k8s is a point that needs considering, as opposed to possibly more idiomatic solutions. As a workaround for now, you could try:

1. bootstrap the controller
2. edit the controller statefulset to amend the podspec to include the proxy env vars
3. let k8s restart the controller pod
4. check that the controller pod has the proxy env vars set and try a deploy again
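
A rough sketch of steps 2 and 4, reusing the namespace, container name and proxy values from this bug report (check the actual statefulset name with `kubectl -n controller-osm-on-k8s get statefulsets`):

kubectl -n controller-osm-on-k8s edit statefulset controller
# in the api-server container spec, add something like:
#   env:
#   - name: HTTP_PROXY
#     value: http://squid.internal:3128
#   - name: HTTPS_PROXY
#     value: http://squid.internal:3128
#   - name: NO_PROXY
#     value: localhost,127.0.0.1,10.48.129.103,10.152.183.0/24

# once k8s has restarted the pod, confirm the variables are present:
kubectl -n controller-osm-on-k8s exec -it controller-0 -c api-server -- env | grep -i proxy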

Revision history for this message
Ian Booth (wallyworld) wrote :

@david, did the suggested workaround solve the issue? If so, we can look at landing a fix in Juju.

Revision history for this message
David (davigar15) wrote :

Hello Ian! Sorry for the delay. I was able to test it today.

Setting JUJU_HTTPS_PROXY did not work. What worked was setting HTTPS_PROXY.

So the JUJU_-prefixed variables were ignored.

Revision history for this message
Richard Harding (rharding) wrote :

@davigar15

This is intended. The idea is that the user sets what HTTP proxy is available for use, but Juju only populates the JUJU_HTTPS_PROXY env var. The charm can then decide whether it needs the proxy or not, e.g. if the communication is one unit hitting an HTTP API on another unit, it doesn't need the proxy; however, if the unit is trying to reach the public internet, it probably does. This is why we have the split into different proxy variables.

When Juju was only setting the HTTPS_PROXY env var it became a mess, because there was then a ton of NO_PROXY setup needed so that other aspects would operate correctly.
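
For illustration only, a charm hook that knows a given request has to leave the model could opt in along these lines (the URL is a placeholder, and the exact JUJU_-prefixed variable names follow the comments above; check what your Juju version actually exports into the hook environment):

#!/bin/bash
# inside a charm hook: Juju exports the juju-* proxy model config
# into the hook environment with a JUJU_ prefix; the charm decides
# whether to apply it for a particular request
export HTTPS_PROXY="$JUJU_HTTPS_PROXY"
export NO_PROXY="$JUJU_NO_PROXY"
curl https://example.com/external-resource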

Revision history for this message
Ian Booth (wallyworld) wrote :

@rick, JUJU_HTTPS_PROXY is the preferred way to do it, but k8s controllers do not have the proxy update worker wired up yet. Using non-Juju vars is legacy and deprecated.

Revision history for this message
David (davigar15) wrote :

Any update on this? Do you guys need anything more from me?

Revision history for this message
Ian Booth (wallyworld) wrote :

k8s controllers now honour the "juju-" prefixed proxy model config
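
For anyone landing here later, a minimal sketch of applying the now-honoured settings to an existing controller model (values reused from the bug description):

juju model-config -m controller juju-http-proxy=http://squid.internal:3128 juju-https-proxy=http://squid.internal:3128
juju model-config -m controller | grep juju-.*-proxy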

Changed in juju:
assignee: Harry Pidcock (hpidcock) → Ian Booth (wallyworld)
status: Incomplete → Fix Committed
Changed in juju:
status: Fix Committed → Fix Released