Update in openstack-integrator charm options do not rollout openstack-cloud-controller-manager pods

Bug #1892164 reported by Hemanth Nakkina
This bug affects 2 people
Affects: Kubernetes Control Plane Charm
Status: Triaged
Importance: Medium
Assigned to: Cory Johns
Milestone: (none)

Bug Description

Modifying openstack-integrator charm options has no effect since the corresponding cdk-addon deployments/daemonsets are not rolled out.

Steps to reproduce:

1. Deploy k8s using Charmed Kubernetes
   By default, the openstack-integrator charm option manage-security-groups is false
2. Update the manage-security-groups option to true
   juju config openstack-integrator manage-security-groups=true
3. Wait for the juju units to return to idle
4. Check the k8s secret cloud-config is updated with the new option
   kubectl -n kube-system get secret cloud-config -o json | jq .data[] | tr -d '"' | base64 -d
5. Check if the openstack-cloud-controller-manager pods got restarted
   kubectl -n kube-system get po | grep openstack-cloud-controller-manager
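
The decoding pipeline in step 4 can be exercised locally without a live cluster. This sketch stands in a sample cloud-config string for the real secret data (the sample contents are illustrative, not the charm's actual rendered config):

```shell
# Mimic step 4's "get secret ... | base64 -d" pipeline with local data.
# The sample config below is an assumption for illustration only.
sample='[Global]
manage-security-groups=true'
encoded="$(printf '%s' "$sample" | base64)"   # what kubectl returns in .data
decoded="$(printf '%s' "$encoded" | base64 -d)"
echo "$decoded"
```

The point of the check is that the secret's decoded contents do reflect the new charm option, even though the pods never pick it up.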

The new option is written to the secret, but because the pods are not restarted, the modified configuration never takes effect.
This was verified by deploying a LoadBalancer service: the security groups that should allow traffic between the LB Amphora VM and the k8s worker nodes are not created.

Manually restarting the openstack-cloud-controller-manager DaemonSet makes the modified configuration take effect:
kubectl -n kube-system rollout restart ds/openstack-cloud-controller-manager
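
One common Kubernetes pattern that would make this rollout automatic (sketched here with assumed file names and an assumed annotation key, not the charm's actual implementation) is to stamp a checksum of the rendered cloud-config onto the DaemonSet's pod template, so any change to the secret changes the template and triggers a rollout:

```shell
# Sketch: derive a checksum annotation from the rendered cloud-config.
# The file name and annotation key are assumptions for illustration.
workdir="$(mktemp -d)"
printf '[Global]\nmanage-security-groups=true\n' > "$workdir/cloud.conf"
checksum="$(sha256sum "$workdir/cloud.conf" | cut -d' ' -f1)"
annotation="checksum/cloud-config=$checksum"
echo "$annotation"
# Applying the annotation forces a rollout whenever the checksum changes, e.g.:
# kubectl -n kube-system patch ds openstack-cloud-controller-manager --type merge \
#   -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/cloud-config\":\"$checksum\"}}}}}"
rm -rf "$workdir"
```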

Tags: seg
George Kraft (cynerva)
Changed in charm-kubernetes-master:
importance: Undecided → Medium
status: New → Triaged
Cory Johns (johnsca)
Changed in charm-kubernetes-master:
assignee: nobody → Cory Johns (johnsca)
Cory Johns (johnsca) wrote :

I should note that the documentation for manage-security-groups [1] states that it is ignored for Octavia, so this doesn't seem to be an issue with the configuration not being applied. In a related bug [2], the thinking was that the integrator charm would specifically need to create SG rules to allow NodePort ingress from within the subnet. However, this report seems to indicate that simply restarting the openstack-cloud-controller-manager services might fix it, and another issue that's been opened [3] suggests that the SG rules ought to be unnecessary regardless.

[1]: https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-openstack-cloud-controller-manager.md#load-balancer
[2]: https://bugs.launchpad.net/charm-kubernetes-master/+bug/1884995
[3]: https://bugs.launchpad.net/charm-openstack-integrator/+bug/1893512

Cory Johns (johnsca) wrote :

After further discussion with Ed, this is not specifically about the manage-security-groups option, but rather about ensuring that changes to the integrator config get propagated properly to K8s and applied to the pods (with restarts if needed).
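
One way a charm hook could implement "restart only if needed" (all names here are hypothetical, sketched for illustration; this is not the charm's actual code) is to compare a checksum of the newly rendered config against the checksum from the previous run and trigger the rollout only on a change:

```shell
# Hypothetical hook-side guard (state file, function names assumed):
# restart the controller-manager only when the rendered config changed.
state_file="$(mktemp)"
render_config() { printf '[Global]\nmanage-security-groups=%s\n' "$1"; }

maybe_restart() {
  new_sum="$(render_config "$1" | sha256sum | cut -d' ' -f1)"
  old_sum="$(cat "$state_file")"
  if [ "$new_sum" != "$old_sum" ]; then
    printf '%s' "$new_sum" > "$state_file"
    echo restart  # a real hook would run:
                  # kubectl -n kube-system rollout restart ds/openstack-cloud-controller-manager
  else
    echo skip
  fi
}

first="$(maybe_restart true)"   # no previous checksum recorded -> restart
second="$(maybe_restart true)"  # same config as last run -> skip
echo "$first $second"
rm -f "$state_file"
```

This keeps config changes idempotent: re-running the hook with an unchanged config does not churn the pods.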

George Kraft (cynerva)
summary: - Update in openstack-integrator charm option manage-security-group does
- not rollout openstack-cloud-controller-manager pods
+ Update in openstack-integrator charm options do not rollout openstack-
+ cloud-controller-manager pods
