Adding this to charm-ovn-chassis:

$ git diff
diff --git a/src/lxd-profile.yaml b/src/lxd-profile.yaml
index 044e653..7114b0f 100644
--- a/src/lxd-profile.yaml
+++ b/src/lxd-profile.yaml
@@ -1,2 +1,5 @@
 config:
   linux.kernel_modules: openvswitch
+  user.vendor-data: |
+    #cloud-config
+    manage_etc_hosts: true
This does create a profile with the correct values. However, Juju applies the profile too late, so it has no effect. My guess is that kernel module loading was made to work ad hoc for subordinate profiles, without taking the other keys into account.
Adding it to the charm-octavia lxd-profile.yaml does get it applied, but unfortunately that does not solve the problem either: LXD only provides the container name in the cloud-init NoCloud seed meta-data.
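For context, the meta-data in the NoCloud seed that LXD generates looks roughly like this (illustrative only; the instance name here is made up and the exact keys can vary by LXD version) — note that there is no FQDN anywhere in it:

```yaml
instance-id: juju-a1b2c3-0
local-hostname: juju-a1b2c3-0
```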
So we are back to square one: Juju needs to provide the FQDN it knows from MAAS when creating the container.
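To make the failure mode concrete, here is a rough sketch (not cloud-init's actual code; the template logic is simplified, and the hostnames are hypothetical) of how a manage_etc_hosts-style 127.0.1.1 entry gets built, and why having only the short container name is not enough:

```python
from typing import Optional


def etc_hosts_line(hostname: str, fqdn: Optional[str] = None) -> str:
    """Build the 127.0.1.1 line the way a manage_etc_hosts-style
    template would: FQDN first, short hostname as an alias."""
    if fqdn and fqdn != hostname:
        return f"127.0.1.1 {fqdn} {hostname}"
    # With only the container name from the NoCloud seed there is no
    # domain part to write, so local lookups of the FQDN still fail.
    return f"127.0.1.1 {hostname}"


# What LXD's seed gives cloud-init today: just the container name.
print(etc_hosts_line("juju-a1b2c3-0"))
# → 127.0.1.1 juju-a1b2c3-0

# What would work if Juju passed through the FQDN it knows from MAAS.
print(etc_hosts_line("juju-a1b2c3-0", "juju-a1b2c3-0.maas"))
# → 127.0.1.1 juju-a1b2c3-0.maas juju-a1b2c3-0
```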
We do not have access to these knobs dynamically from charms.