Environment variables are not being properly passed to the charm

Bug #1892255 reported by Camille Rodriguez
This bug affects 4 people.

Affects: Canonical Juju
Status: Fix Released
Importance: High
Assigned to: Thomas Miller

Bug Description

In Kubernetes charms, some environment variables are needed to be able to use the k8s Python API. They exist in the operator pod, but charm execution fails because the charm does not detect them.

For example, I ran some commands in Python in the operator pod:
```
root@metallb-controller-operator-0:/var/lib/juju/agents/application-metallb-controller/charm# PYTHONPATH=venv python3
Python 3.8.2 (default, Jul 16 2020, 14:00:26)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from kubernetes import client, config
>>> config.load_incluster_config()
>>> policy_client = client.PolicyV1beta1Api()
>>> v1 = client.CoreV1Api()
>>> v1.list_namespace()
{'api_version': 'v1',
 'items': [{'api_version': None,
            'kind': None,
            'metadata': {'annotations': {'juju.io/controller': '645a3377-72a3-446f-8925-
[...}
```

And this works just fine! But if I try to do the same thing in a charm, it fails on install with this:
Charm code in start hook:
```
    from kubernetes import client, config
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    v1.list_namespace()
```
Errors:

```
application-metallb-controller: 12:41:16 ERROR juju.worker.uniter.operation hook "install" (via hook dispatching script: dispatch) failed: exit status 1
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install Traceback (most recent call last):
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "./src/charm.py", line 20, in <module>
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install class MetallbCharm(CharmBase):
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "./src/charm.py", line 130, in MetallbCharm
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install config.load_incluster_config()
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 93, in load_incluster_config
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install InClusterConfigLoader(token_filename=SERVICE_TOKEN_FILENAME,
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 45, in load_and_set
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install self._load_config()
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 51, in _load_config
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install raise ConfigException("Service host/port is not set.")
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install kubernetes.config.config_exception.ConfigException: Service host/port is not set.
```

A workaround is to explicitly pull the environment variables from the pod:

```
    from kubernetes import client, config
    from pathlib import Path
    import os
    # Copy the KUBERNETES_SERVICE_* variables from the pod's PID 1 environment
    # into the hook environment so load_incluster_config() can find them.
    os.environ.update(
        dict(
            e.split("=")
            for e in Path("/proc/1/environ").read_text().split("\x00")
            if "KUBERNETES_SERVICE" in e
        )
    )

    config.load_incluster_config()
    v1 = client.CoreV1Api()
    v1.list_namespace()
```

Tags: k8s


Ian Booth (wallyworld)
tags: added: k8s
Ian Booth (wallyworld) wrote :

Interestingly, the env vars are correctly picked up when executing on the workload rather than the operator:

$ juju exec --unit mariadb-k8s/0 env | grep KUBERNETES
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.152.183.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.152.183.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.152.183.1:443
KUBERNETES_SERVICE_HOST=10.152.183.1
$

$ juju exec --operator --unit mariadb-k8s/0 env | grep KUBERNETES
$

Ian Booth (wallyworld)
Changed in juju:
milestone: none → 2.8.3
importance: Undecided → High
status: New → Triaged
Ian Booth (wallyworld) wrote :

This PR https://github.com/juju/juju/pull/11924 fixes it, but we're deciding whether to land it since it changes behaviour on non-k8s deploys, and we need to consider whether it breaks anything unintentionally.

Changed in juju:
assignee: nobody → Ian Booth (wallyworld)
status: Triaged → In Progress
Ian Booth (wallyworld) wrote :

The fix has landed.

Changed in juju:
status: In Progress → Fix Committed
milestone: 2.8.3 → 2.8.2
Camille Rodriguez (camille.rodriguez) wrote :

Thank you for the quick fix! I will be testing this shortly.

Camille Rodriguez (camille.rodriguez) wrote :

On the same topic, the env var "JUJU_OPERATOR_NAMESPACE" is not propagated correctly either.

juju debug-log:

application-metallb-controller: 09:45:18 DEBUG unit.metallb-controller/0.install KeyError: 'JUJU_OPERATOR_NAMESPACE'
application-metallb-controller: 09:47:24 DEBUG unit.metallb-controller/0.install Traceback (most recent call last):
application-metallb-controller: 09:47:24 DEBUG unit.metallb-controller/0.install File "./src/charm.py", line 24, in <module>
application-metallb-controller: 09:47:24 DEBUG unit.metallb-controller/0.install class MetallbCharm(CharmBase):
application-metallb-controller: 09:47:24 DEBUG unit.metallb-controller/0.install File "./src/charm.py", line 26, in MetallbCharm
application-metallb-controller: 09:47:24 DEBUG unit.metallb-controller/0.install NAMESPACE = os.environ["JUJU_OPERATOR_NAMESPACE"]
application-metallb-controller: 09:47:24 DEBUG unit.metallb-controller/0.install File "/usr/lib/python3.8/os.py", line 675, in __getitem__
application-metallb-controller: 09:47:24 DEBUG unit.metallb-controller/0.install raise KeyError(key) from None
application-metallb-controller: 09:47:24 DEBUG unit.metallb-controller/0.install KeyError: 'JUJU_OPERATOR_NAMESPACE'

$ microk8s.kubectl exec -it metallb-controller-operator-0 -n metallb -- /bin/bash
root@metallb-controller-operator-0:/var/lib/juju#
root@metallb-controller-operator-0:/var/lib/juju# env | grep metallb
JUJU_APPLICATION=metallb-controller
HOSTNAME=metallb-controller-operator-0
JUJU_OPERATOR_NAMESPACE=metallb

Ian Booth (wallyworld) wrote : Re: [Bug 1892255] Re: Environment variables are not being properly passed to the charm

JUJU_OPERATOR_NAMESPACE is an internal Juju env var used specifically on the operator pod by Juju itself. It is not set up by Juju on the workload pod. It should not be considered something that is used outside of Juju.


Ian Booth (wallyworld) wrote :

Is there a use case for setting this env var on the workload pod?

Camille Rodriguez (camille.rodriguez) wrote :

I was looking for a way to refer to the Kubernetes namespace name. I found "JUJU_MODEL_NAME", which could be used as well.
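
A minimal sketch of that approach (assuming the charm only needs the namespace name, and assuming the namespace matches the model name, which is how Juju names namespaces for k8s models; the NAMESPACE constant mirrors the earlier charm snippet):

```
import os

# JUJU_MODEL_NAME is set in the hook environment; on Kubernetes models the
# namespace Juju creates for the model has the same name, so it can stand in
# for the namespace instead of the internal JUJU_OPERATOR_NAMESPACE variable.
NAMESPACE = os.environ.get("JUJU_MODEL_NAME", "default")
```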

Camille Rodriguez (camille.rodriguez) wrote :

I tested with juju 2.8.2 and the issue is still occurring.

application-metallb-controller: 13:36:38 ERROR unit.metallb-controller/3.juju-log Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 243, in <module>
    main(MetallbCharm)
  File "/var/lib/juju/agents/unit-metallb-controller-3/charm/venv/ops/main.py", line 347, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-metallb-controller-3/charm/venv/ops/main.py", line 123, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-metallb-controller-3/charm/venv/ops/framework.py", line 212, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-metallb-controller-3/charm/venv/ops/framework.py", line 624, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-metallb-controller-3/charm/venv/ops/framework.py", line 667, in _reemit
    custom_handler(event)
  File "./src/charm.py", line 122, in on_start
    self.create_pod_spec_with_k8s_api()
  File "./src/charm.py", line 133, in create_pod_spec_with_k8s_api
    self._load_kube_config()
  File "./src/charm.py", line 240, in _load_kube_config
    config.load_incluster_config()
  File "/var/lib/juju/agents/unit-metallb-controller-3/charm/venv/kubernetes/config/incluster_config.py", line 93, in load_incluster_config
    InClusterConfigLoader(token_filename=SERVICE_TOKEN_FILENAME,
  File "/var/lib/juju/agents/unit-metallb-controller-3/charm/venv/kubernetes/config/incluster_config.py", line 45, in load_and_set
    self._load_config()
  File "/var/lib/juju/agents/unit-metallb-controller-3/charm/venv/kubernetes/config/incluster_config.py", line 51, in _load_config
    raise ConfigException("Service host/port is not set.")
kubernetes.config.config_exception.ConfigException: Service host/port is not set.

Camille Rodriguez (camille.rodriguez) wrote :

$ juju version
2.8.2-focal-amd64

Ian Booth (wallyworld) wrote :

A new 2.8.2 candidate snap was published a couple of days ago which contained the fix.
If the original 2.8.2 candidate snap is used, it will still have the problem.

Dominik Fleischmann (dominik.f) wrote :

I have been testing this with the latest 2.8.2 candidate (14051) and I'm still getting the same error as Camille. Am I missing something? This is on a new VM with everything installed today.

Changed in juju:
status: Fix Committed → Fix Released
Camille Rodriguez (camille.rodriguez) wrote :

I am still seeing the same behavior with 2.8.4. The fix is not working.

application-metallb-controller: 14:41:40 ERROR unit.metallb-controller/1.juju-log Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 189, in <module>
    main(MetallbControllerCharm)
  File "/var/lib/juju/agents/unit-metallb-controller-1/charm/venv/ops/main.py", line 378, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-metallb-controller-1/charm/venv/ops/main.py", line 123, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-metallb-controller-1/charm/venv/ops/framework.py", line 212, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-metallb-controller-1/charm/venv/ops/framework.py", line 624, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-metallb-controller-1/charm/venv/ops/framework.py", line 667, in _reemit
    custom_handler(event)
  File "./src/charm.py", line 61, in on_start
    utils.create_pod_security_policy_with_api(namespace=self._stored.namespace)
  File "/var/lib/juju/agents/unit-metallb-controller-1/charm/src/utils.py", line 19, in create_pod_security_policy_with_api
    _load_kube_config()
  File "/var/lib/juju/agents/unit-metallb-controller-1/charm/src/utils.py", line 207, in _load_kube_config
    config.load_incluster_config()
  File "/var/lib/juju/agents/unit-metallb-controller-1/charm/venv/kubernetes/config/incluster_config.py", line 93, in load_incluster_config
    InClusterConfigLoader(token_filename=SERVICE_TOKEN_FILENAME,
  File "/var/lib/juju/agents/unit-metallb-controller-1/charm/venv/kubernetes/config/incluster_config.py", line 45, in load_and_set
    self._load_config()
  File "/var/lib/juju/agents/unit-metallb-controller-1/charm/venv/kubernetes/config/incluster_config.py", line 51, in _load_config
    raise ConfigException("Service host/port is not set.")
kubernetes.config.config_exception.ConfigException: Service host/port is not set.
application-metallb-controller: 14:41:40 ERROR juju.worker.uniter.operation hook "start" (via hook dispatching script: dispatch) failed: exit status 1

John A Meinel (jameinel) wrote :

I notice that you're loading incluster_config.py from a kubernetes package that you are supplying in your venv, rather than the one that Juju supplies in the charm image. I don't know whether that matters, but it could potentially be a source of confusion and disagreement on what parameters need to be set/available.

Unfortunately the above line numbers clearly don't match the current upstream

https://github.com/kubernetes-client/python-base/blob/fd322f70aa6c33782ab466afd4eaae271a27cbcb/config/incluster_config.py#L51

though it might be line 60:

        if (SERVICE_HOST_ENV_NAME not in self._environ
                or SERVICE_PORT_ENV_NAME not in self._environ):
            raise ConfigException("Service host/port is not set.")

Interestingly, those are SERVICE_HOST_ENV_NAME (not KUBERNETES_*). However, the actual PR:

https://github.com/juju/juju/pull/11924/files

is just passing through all of the Environ settings that the Juju uniter has.

It seems worthwhile to figure out if those are set in the environment and we aren't passing them through, or if there is a reason they aren't getting set at all.
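
One way to narrow that down from inside a hook (a diagnostic sketch only, not part of any fix; the logger setup is assumed) is to log which of the expected variables actually reach the hook environment:

```
import logging
import os

logger = logging.getLogger(__name__)

# If these show up as None here but are present in /proc/1/environ, the
# variables exist in the pod and are simply not being passed through to the
# hook environment by Juju.
for name in ("KUBERNETES_SERVICE_HOST", "KUBERNETES_SERVICE_PORT"):
    logger.debug("%s=%r", name, os.environ.get(name))
```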


Ian Booth (wallyworld) wrote :

With the fix, Juju now exposes all the env vars available when the hook is run. It seems some upstream k8s client libraries alias KUBERNETES_SERVICE_HOST to SERVICE_HOST_ENV_NAME. KUBERNETES_SERVICE_HOST is definitely exposed if available to the container environment.
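
Until a released build with the fix is confirmed, a defensive pattern (a sketch only, mirroring the workaround above rather than anything Juju provides; the function name is hypothetical) is to fall back to the pod's PID 1 environment only when the variable is missing:

```
import os
from pathlib import Path

from kubernetes import config


def load_incluster_config_with_fallback():
    # Only resort to the /proc/1/environ workaround when Juju has not passed
    # the in-cluster service variables through to the hook environment.
    if "KUBERNETES_SERVICE_HOST" not in os.environ:
        os.environ.update(
            dict(
                entry.split("=", 1)
                for entry in Path("/proc/1/environ").read_text().split("\x00")
                if entry.startswith("KUBERNETES_SERVICE")
            )
        )
    config.load_incluster_config()
```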

Camille Rodriguez (camille.rodriguez) wrote :

>> It seems worthwhile to figure out if those are set in the environment and we aren't passing them through, or if there is a reason they aren't getting set at all.

My workaround to be able to use the kubernetes API is to update the env variables with the data found in /proc/1/environ. I am not setting these parameters manually; they already exist in the host.

import os

from kubernetes import config


def _load_kube_config():
    # TODO: Remove this workaround when bug LP:1892255 is fixed
    from pathlib import Path
    os.environ.update(
        dict(
            e.split("=")
            for e in Path("/proc/1/environ").read_text().split("\x00")
            if "KUBERNETES_SERVICE" in e
        )
    )
    # end workaround
    config.load_incluster_config()

>> I notice that you're loading incluster_config.py from a kubernetes package that you are supplying in your venv, rather than the one that Juju supplies in the charm image.

Can you explain how I should load the package supplied by juju instead of installing the kubernetes python package?

>> KUBERNETES_SERVICE_HOST is definitely exposed if available to the container environment.

Did you test with kubernetes.config.load_incluster_config()? It still does not work for me, and I have been testing with the latest juju version.

Camille Rodriguez (camille.rodriguez) wrote :

As for the line number of the _load_config(self) function raised in the error I pasted, @john, it differs because you are looking at the master branch of the kubernetes API, while my code is using version 10.0.1. I had not specified a specific version; I can try with the latest stable API (11.0.0) and see if the error still occurs.

Camille Rodriguez (camille.rodriguez) wrote :

The same bug happens with k8s API 11.0.0.

Tom Haddon (mthaddon) wrote :

I'm still experiencing this with juju 3.0-beta1-focal-amd64 and microk8s v1.19.4. To reproduce, I ran 'charmcraft init' in an empty directory, added "from kubernetes import client, config", and then added this to `_on_config_changed`:

config.load_incluster_config()
v1 = client.CoreV1Api()
logger.debug("Found namespaces: {}".format(v1.list_namespace()))
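
For reference, a minimal sketch of the full reproduction charm after those edits (the class and handler names come from the traceback below; the observe wiring is assumed from the standard 'charmcraft init' template):

```
#!/usr/bin/env python3
import logging

from kubernetes import client, config
from ops.charm import CharmBase
from ops.main import main

logger = logging.getLogger(__name__)


class CharmK8STestCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(self.on.config_changed, self._on_config_changed)

    def _on_config_changed(self, event):
        # load_incluster_config() needs KUBERNETES_SERVICE_HOST/PORT plus the
        # service-account token mounted in the pod; it raises ConfigException
        # when those env vars are missing from the hook environment.
        config.load_incluster_config()
        v1 = client.CoreV1Api()
        logger.debug("Found namespaces: {}".format(v1.list_namespace()))


if __name__ == "__main__":
    main(CharmK8STestCharm)
```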

I then ran `charmcraft build` and deployed that charm. I see the following in debug-logs:

application-charm-k8s-test: 10:09:44 INFO juju.worker.uniter awaiting error resolution for "config-changed" hook
application-charm-k8s-test: 10:09:45 ERROR unit.charm-k8s-test/0.juju-log Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 44, in <module>
    main(CharmK8STestCharm)
  File "/var/lib/juju/agents/unit-charm-k8s-test-0/charm/venv/ops/main.py", line 402, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-charm-k8s-test-0/charm/venv/ops/main.py", line 140, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-charm-k8s-test-0/charm/venv/ops/framework.py", line 278, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-charm-k8s-test-0/charm/venv/ops/framework.py", line 722, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-charm-k8s-test-0/charm/venv/ops/framework.py", line 767, in _reemit
    custom_handler(event)
  File "./src/charm.py", line 30, in _on_config_changed
    config.load_incluster_config()
  File "/usr/local/lib/python3.8/dist-packages/kubernetes/config/incluster_config.py", line 95, in load_incluster_config
    InClusterConfigLoader(token_filename=SERVICE_TOKEN_FILENAME,
  File "/usr/local/lib/python3.8/dist-packages/kubernetes/config/incluster_config.py", line 47, in load_and_set
    self._load_config()
  File "/usr/local/lib/python3.8/dist-packages/kubernetes/config/incluster_config.py", line 53, in _load_config
    raise ConfigException("Service host/port is not set.")
kubernetes.config.config_exception.ConfigException: Service host/port is not set.

However, running manually in the operator pod works fine.

$ microk8s.kubectl exec -ti -n myk8smodel charm-k8s-test-operator-0 -- /bin/bash
root@charm-k8s-test-operator-0:/var/lib/juju# cd agents/unit-charm-k8s-test-0/charm/
root@charm-k8s-test-operator-0:/var/lib/juju/agents/unit-charm-k8s-test-0/charm# PYTHONPATH=venv python3
Python 3.8.5 (default, Jul 28 2020, 12:59:40)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from kubernetes import client, config
>>> config.load_incluster_config()
>>> policy_client = client.PolicyV1beta1Api()
>>> v1 = client.CoreV1Api()
>>> v1.list_namespace()
{'api_version': 'v1',
 'items': [{'api_version': None,
            'kind': None,
            'metadata': {'annotations': None,
                         'cluster_name': None,
[...]

Changed in juju:
status: Fix Released → Confirmed
John A Meinel (jameinel)
Changed in juju:
status: Confirmed → Triaged
milestone: 2.8.2 → 2.8-next
Ian Booth (wallyworld)
Changed in juju:
milestone: 2.8-next → 2.9.1
Thomas Miller (tlmiller) wrote :

Hey Tom & Camille,

Would you mind trying again for me on the latest 2.9 edge release? Would you mind also checking that you are on the latest release of the Kubernetes Python client?

I believe this has been fixed but would like some field verification.

Cheers
tlm

Thomas Miller (tlmiller)
Changed in juju:
assignee: Ian Booth (wallyworld) → Thomas Miller (tlmiller)
Tom Haddon (mthaddon) wrote :

I'm still seeing the same problem with 2.9/edge:

mthaddon@tenaya:~/repos/k8s-charms/charm-test$ juju debug-log
controller-0: 09:24:32 INFO juju.worker.apicaller [15205d] "controller-0" successfully connected to "localhost:17070"
controller-0: 09:24:32 INFO juju.worker.logforwarder config change - log forwarding not enabled
controller-0: 09:24:32 INFO juju.worker.logger logger worker started
controller-0: 09:24:32 INFO juju.worker.pruner.action status history config: max age: 336h0m0s, max collection size 5120M for charm-test (15205d43-ed02-443b-818e-e1f11e722b06)
controller-0: 09:24:32 INFO juju.worker.pruner.statushistory status history config: max age: 336h0m0s, max collection size 5120M for charm-test (15205d43-ed02-443b-818e-e1f11e722b06)
controller-0: 09:25:15 INFO juju.worker.caasapplicationprovisioner.runner start "charm-test"
controller-0: 09:25:15 INFO juju.worker.caasapplicationprovisioner.runner stopped "charm-test", err: scaling application "charm-test" to desired scale 1: setting scale to 1 for "charm-test": scale patching statefulset "charm-test": statefulsets.apps "charm-test" not found
controller-0: 09:25:15 ERROR juju.worker.caasapplicationprovisioner.runner exited "charm-test": scaling application "charm-test" to desired scale 1: setting scale to 1 for "charm-test": scale patching statefulset "charm-test": statefulsets.apps "charm-test" not found
controller-0: 09:25:15 INFO juju.worker.caasapplicationprovisioner.runner restarting "charm-test" in 3s
controller-0: 09:25:18 INFO juju.worker.caasapplicationprovisioner.runner start "charm-test"
unit-charm-test-0: 09:25:53 INFO juju.worker.leadership charm-test/0 promoted to leadership of charm-test
unit-charm-test-0: 09:25:54 INFO juju.agent.tools ensure jujuc symlinks in /var/lib/juju/tools/unit-charm-test-0
unit-charm-test-0: 09:25:54 INFO juju.worker.uniter unit "charm-test/0" started
unit-charm-test-0: 09:25:54 INFO juju.worker.uniter resuming charm install
unit-charm-test-0: 09:25:54 INFO juju.worker.uniter.charm downloading local:focal/charm-test-0 from API server
unit-charm-test-0: 09:25:54 INFO juju.downloader downloading from local:focal/charm-test-0
unit-charm-test-0: 09:25:54 INFO juju.downloader download complete ("local:focal/charm-test-0")
unit-charm-test-0: 09:25:54 INFO juju.downloader download verified ("local:focal/charm-test-0")
unit-charm-test-0: 09:25:57 INFO juju.worker.uniter hooks are retried true
unit-charm-test-0: 09:25:57 INFO juju.worker.uniter found queued "install" hook
unit-charm-test-0: 09:25:57 INFO unit.charm-test/0.juju-log Running legacy hooks/install.
unit-charm-test-0: 09:25:58 INFO juju.worker.uniter.operation ran "install" hook (via hook dispatching script: dispatch)
unit-charm-test-0: 09:25:58 INFO juju.worker.uniter found queued "leader-elected" hook
unit-charm-test-0: 09:25:59 INFO juju.worker.uniter.operation ran "leader-elected" hook (via hook dispatching script: dispatch)
unit-charm-test-0: 09:26:00 INFO juju.worker.uniter.operation ran "placeholder-pebble-ready" hook (via hook dispatching script: dispatch)
unit-charm-test-0: 09:26:00 ERROR unit.charm-test/0.juju-log Uncaught exception while in charm code:
Traceback (most recent call...


John A Meinel (jameinel) wrote :

Thank you for testing this, Tom. Is the 'charm-test' charm just the same as comment
https://bugs.launchpad.net/juju/+bug/1892255/comments/19 ?

e.g.:
config.load_incluster_config()
v1 = client.CoreV1Api()
logger.debug("Found namespaces: {}".format(v1.list_namespace()))

John A Meinel (jameinel) wrote :

To test this I just published 'jam-k8s-api-test' charm to charmhub. If I just do:

juju deploy jam-k8s-api-test

Then I see this in debug-log:
Env: APT_LISTCHANGES_FRONTEND: none
CHARM_DIR: /var/lib/juju/agents/unit-jam-k8s-api-test-1/charm
CLOUD_API_VERSION: 1.20.0
DEBIAN_FRONTEND: noninteractive
JUJU_AGENT_SOCKET_ADDRESS: @/var/lib/juju/agents/unit-jam-k8s-api-test-1/agent.socket
JUJU_AGENT_SOCKET_NETWORK: unix
JUJU_API_ADDRESSES: 10.152.183.124:17070 controller-service.controller-mk8s.svc.cluster.local:17070
JUJU_AVAILABILITY_ZONE:
JUJU_CHARM_DIR: /var/lib/juju/agents/unit-jam-k8s-api-test-1/charm
JUJU_CHARM_FTP_PROXY:
JUJU_CHARM_HTTPS_PROXY:
JUJU_CHARM_HTTP_PROXY:
JUJU_CHARM_NO_PROXY: 127.0.0.1,localhost,::1
JUJU_CONTEXT_ID: jam-k8s-api-test/1-config-changed-3966137603019967362
JUJU_DISPATCH_PATH: hooks/config-changed
JUJU_HOOK_NAME: config-changed
JUJU_MACHINE_ID:
JUJU_METER_INFO: not set
JUJU_METER_STATUS: AMBER
JUJU_MODEL_NAME: test2
JUJU_MODEL_UUID: c4cb10aa-b3dd-44df-8cc0-9ac348b15a69
JUJU_PRINCIPAL_UNIT:
JUJU_SLA: unsupported
JUJU_UNIT_NAME: jam-k8s-api-test/1
JUJU_VERSION: 2.9-rc11
LANG: C.UTF-8
OPERATOR_DISPATCH: 1
PATH: /var/lib/juju/tools/unit-jam-k8s-api-test-1:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD: /var/lib/juju/agents/unit-jam-k8s-api-test-1/charm
PYTHONPATH: lib:venv
TERM: tmux-256color
application-jam-k8s-api-test: 14:08:55 ERROR unit.jam-k8s-api-test/1.juju-log Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 60, in <module>
    main(JamK8SApiTestCharm)
  File "/var/lib/juju/agents/unit-jam-k8s-api-test-1/charm/venv/ops/main.py", line 406, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-jam-k8s-api-test-1/charm/venv/ops/main.py", line 140, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-jam-k8s-api-test-1/charm/venv/ops/framework.py", line 278, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-jam-k8s-api-test-1/charm/venv/ops/framework.py", line 722, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-jam-k8s-api-test-1/charm/venv/ops/framework.py", line 767, in _reemit
    custom_handler(event)
  File "./src/charm.py", line 54, in _on_config_changed
    config.load_incluster_config()
  File "/usr/local/lib/python3.8/dist-packages/kubernetes/config/incluster_config.py", line 95, in load_incluster_config
    InClusterConfigLoader(token_filename=SERVICE_TOKEN_FILENAME,
  File "/usr/local/lib/python3.8/dist-packages/kubernetes/config/incluster_config.py", line 47, in load_and_set
    self._load_config()
  File "/usr/local/lib/python3.8/dist-packages/kubernetes/config/incluster_config.py", line 53, in _load_config
    raise ConfigException("Service host/port is not set.")
kubernetes.config.config_exception.ConfigException: Service host/port is not set.

There are clearly lots of JUJU_ env vars, but I don't see any KUBERNETES_ env vars. If I SSH into the machine I can see:
$ juju ssh jam-k8s-api-test/1
# env
JUJU_OPERATOR_NAMESPACE=test2
KUBERNETES_PORT=tcp://10.152.183.1:443
KUBERNETES_SERVICE_PORT=443
MODELOPERA...


John A Meinel (jameinel) wrote :

This is also true if I set: min-juju-version: "2.8.0"
And it is also true if I do the debugging inside of DISPATCH rather than inside the charm.py code. (So it isn't the operator framework that is stripping the env vars; it's something Juju isn't passing in the first place.)
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_UNIT_NAME=jam-k8s-api-test/2
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_VERSION=2.9-rc11
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_CHARM_HTTP_PROXY=
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed APT_LISTCHANGES_FRONTEND=none
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_CONTEXT_ID=jam-k8s-api-test/2-config-changed-5219041096659453671
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_AGENT_SOCKET_NETWORK=unix
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_API_ADDRESSES=10.152.183.124:17070 controller-service.controller-mk8s.svc.cluster.local:17070
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_CHARM_HTTPS_PROXY=
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_AGENT_SOCKET_ADDRESS=@/var/lib/juju/agents/unit-jam-k8s-api-test-2/agent.socket
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_MODEL_NAME=test2
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_DISPATCH_PATH=hooks/config-changed
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_AVAILABILITY_ZONE=
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_CHARM_DIR=/var/lib/juju/agents/unit-jam-k8s-api-test-2/charm
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed TERM=tmux-256color
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed PATH=/var/lib/juju/tools/unit-jam-k8s-api-test-2:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_METER_STATUS=AMBER
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_HOOK_NAME=config-changed
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed LANG=C.UTF-8
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed CLOUD_API_VERSION=1.20.0
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed DEBIAN_FRONTEND=noninteractive
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_SLA=unsupported
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_MODEL_UUID=c4cb10aa-b3dd-44df-8cc0-9ac348b15a69
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed JUJU_MACHINE_ID=
application-jam-k8s-api-test: 14:31:07 DEBUG unit.jam-k8s-api-test/2.config-changed ...


John A Meinel (jameinel) wrote :

I also created a jaw-aws-api-sidecar-test charm and uploaded it. It suffers from the same problem.
The reason Jon Seager's charm was working is that it does the same '/proc/1/environ' trick:
https://github.com/jnsgruk/charm-kubernetes-dashboard/blob/1f0d31bde941acd4e5fe0b88a6e63bd10386ba07/src/charm.py#L414

This should *definitely* be considered important for 2.9.0 if we want 'juju trust' to actually work.

Changed in juju:
milestone: 2.9.1 → 2.9-rc13
Ian Booth (wallyworld)
Changed in juju:
status: Triaged → In Progress
Ian Booth (wallyworld)
Changed in juju:
status: In Progress → Fix Committed
Ian Booth (wallyworld)
Changed in juju:
milestone: 2.9-rc13 → 2.9.0
Changed in juju:
status: Fix Committed → Fix Released
Thomas Miller (tlmiller) wrote :

Hi All,

If you get a chance, could you please confirm again that the bug has been resolved in the latest Juju 2.9?

Cheers
tlm

Tom Haddon (mthaddon) wrote :

I can confirm this is fixed. I have an MP to update the nginx-ingress-integrator charm to remove the workaround, which I've tested locally.

Thomas Miller (tlmiller) wrote :

Thanks for the update.

