In Kubernetes charms, using the Kubernetes Python API requires certain environment variables. Those variables exist in the operator pod, but the charm fails at execution time because it does not see them.
For example, I ran some commands in Python in the operator pod:
```
root@metallb-controller-operator-0:/var/lib/juju/agents/application-metallb-controller/charm# PYTHONPATH=venv python3
Python 3.8.2 (default, Jul 16 2020, 14:00:26)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from kubernetes import client, config
>>> config.load_incluster_config()
>>> policy_client = client.PolicyV1beta1Api()
>>> v1 = client.CoreV1Api()
>>> v1.list_namespace()
{'api_version': 'v1',
 'items': [{'api_version': None,
            'kind': None,
            'metadata': {'annotations': {'juju.io/controller': '645a3377-72a3-446f-8925-
[...]
```
And this works just fine! But if I try to do the same thing in a charm, it fails on install with the errors below.
Charm code in start hook:
```
from kubernetes import client, config
config.load_incluster_config()
v1 = client.CoreV1Api()
v1.list_namespace()
```
Errors:
```
application-metallb-controller: 12:41:16 ERROR juju.worker.uniter.operation hook "install" (via hook dispatching script: dispatch) failed: exit status 1
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install Traceback (most recent call last):
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "./src/charm.py", line 20, in <module>
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install class MetallbCharm(CharmBase):
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "./src/charm.py", line 130, in MetallbCharm
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install config.load_incluster_config()
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 93, in load_incluster_config
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install InClusterConfigLoader(token_filename=SERVICE_TOKEN_FILENAME,
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 45, in load_and_set
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install self._load_config()
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 51, in _load_config
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install raise ConfigException("Service host/port is not set.")
application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install kubernetes.config.config_exception.ConfigException: Service host/port is not set.
```
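The exception is raised because `load_incluster_config()` looks for the `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` environment variables. A quick way to confirm the charm process is missing them (a minimal sketch, nothing charm-specific assumed):

```python
import os

# load_incluster_config() needs both of these variables; if either is
# absent it raises ConfigException("Service host/port is not set.").
required = ("KUBERNETES_SERVICE_HOST", "KUBERNETES_SERVICE_PORT")
missing = [name for name in required if name not in os.environ]
print("missing:", missing)
```

Run from a charm hook, this prints both names; run in the operator pod's shell, it prints an empty list.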
A workaround is to explicitly pull the environment variables from the pod's init process:
```
from kubernetes import client, config
from pathlib import Path
import os
os.environ.update(
    dict(
        e.split("=")
        for e in Path("/proc/1/environ").read_text().split("\x00")
        if "KUBERNETES_SERVICE" in e
    )
)
config.load_incluster_config()
v1 = client.CoreV1Api()
v1.list_namespace()
```
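The parsing step of the workaround can be factored into a small, testable helper (the function name and sample data here are hypothetical, for illustration only):

```python
def parse_environ(blob: str, prefix: str = "KUBERNETES_SERVICE") -> dict:
    """Parse a NUL-separated environ blob (as read from /proc/<pid>/environ),
    keeping only entries whose name contains the given prefix."""
    return dict(
        entry.split("=", 1)  # split on the first '=' only, values may contain '='
        for entry in blob.split("\x00")
        if prefix in entry and "=" in entry
    )

# Sample blob in the format of /proc/1/environ
sample = (
    "KUBERNETES_SERVICE_HOST=10.152.183.1\x00"
    "KUBERNETES_SERVICE_PORT=443\x00"
    "PATH=/usr/bin\x00"
)
print(parse_environ(sample))
# → {'KUBERNETES_SERVICE_HOST': '10.152.183.1', 'KUBERNETES_SERVICE_PORT': '443'}
```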
Interestingly, the env vars are correctly picked up when executing in the workload container rather than the operator:
```
$ juju exec --unit mariadb-k8s/0 env | grep KUBERNETES
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.152.183.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.152.183.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.152.183.1:443
KUBERNETES_SERVICE_HOST=10.152.183.1
$
$ juju exec --operator --unit mariadb-k8s/0 env | grep KUBERNETES
$
```