2020-08-19 09:22:38 |
Hemanth Nakkina |
bug |
|
|
added bug |
2020-08-19 09:22:55 |
Hemanth Nakkina |
tags |
|
seg |
|
2020-08-19 10:40:47 |
Edward Hope-Morley |
description |
Updation of openstack-integrator charm options have no effect since the corresponding cdk-addon deployments/daemonsets are not rolled out.
Steps to reproduce:
1. Deploy k8s using Charmed Kubernetes.
By default, the openstack-integrator charm option manage-security-groups is false.
2. Update the manage-security-groups option to true:
juju config openstack-integrator manage-security-groups=true
3. Wait for the juju units to return to idle.
4. Check that the k8s secret cloud-config is updated with the new option:
kubectl -n kube-system get secret cloud-config -o json | jq .data[] | tr -d '"' | base64 -d
5. Check whether the openstack-cloud-controller-manager pods were restarted:
kubectl -n kube-system get po | grep openstack-cloud-controller-manager
So the configuration is written to the necessary secret, but because the pods are not restarted, the modified configuration does not take effect.
Verified this by deploying a LoadBalancer service: the security groups that allow traffic between the LB Amphora VM and the k8s worker nodes are not created.
Manually rolling out the openstack-cloud-controller-manager daemonset makes the modified configuration effective:
kubectl -n kube-system rollout restart ds/openstack-cloud-controller-manager |
Modifying openstack-integrator charm options has no effect since the corresponding cdk-addon deployments/daemonsets are not rolled out.
Steps to reproduce:
1. Deploy k8s using Charmed Kubernetes.
By default, the openstack-integrator charm option manage-security-groups is false.
2. Update the manage-security-groups option to true:
juju config openstack-integrator manage-security-groups=true
3. Wait for the juju units to return to idle.
4. Check that the k8s secret cloud-config is updated with the new option:
kubectl -n kube-system get secret cloud-config -o json | jq .data[] | tr -d '"' | base64 -d
5. Check whether the openstack-cloud-controller-manager pods were restarted:
kubectl -n kube-system get po | grep openstack-cloud-controller-manager
So the configuration is written to the necessary secret, but because the pods are not restarted, the modified configuration does not take effect.
Verified this by deploying a LoadBalancer service: the security groups that allow traffic between the LB Amphora VM and the k8s worker nodes are not created.
Manually rolling out the openstack-cloud-controller-manager daemonset makes the modified configuration effective:
kubectl -n kube-system rollout restart ds/openstack-cloud-controller-manager |
|
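A common Kubernetes pattern for the problem described above (config updated, pods not restarted) is to stamp a checksum of the config into the pod template's annotations: any change to the annotation changes the pod template, so the DaemonSet controller rolls out new pods automatically, with the same effect as a manual `kubectl rollout restart`. A minimal sketch of that idea, with illustrative helper names and the hypothetical `checksum/cloud-config` annotation key (not the charm's actual implementation):

```python
import hashlib
import json


def config_checksum(secret_data: dict) -> str:
    """Deterministic hash of the cloud-config secret contents."""
    canonical = json.dumps(secret_data, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()


def annotate_pod_template(pod_template: dict, secret_data: dict) -> dict:
    """Stamp the config checksum into the pod template annotations.

    A changed annotation changes the pod template, which makes the
    DaemonSet controller roll out fresh pods that pick up the new
    cloud-config.  (Key name is illustrative.)
    """
    annotations = (pod_template.setdefault("metadata", {})
                               .setdefault("annotations", {}))
    annotations["checksum/cloud-config"] = config_checksum(secret_data)
    return pod_template


# Simulate the bug's scenario: flipping manage-security-groups.
old_cfg = {"cloud.conf": "manage-security-groups=false"}
new_cfg = {"cloud.conf": "manage-security-groups=true"}

template = {"metadata": {"annotations": {}}}
annotate_pod_template(template, old_cfg)
before = template["metadata"]["annotations"]["checksum/cloud-config"]
annotate_pod_template(template, new_cfg)
after = template["metadata"]["annotations"]["checksum/cloud-config"]
print(before != after)  # True: a changed config yields a changed annotation
```

With this pattern in place, updating the charm option would change the annotation and trigger the rollout without the manual restart step described in the bug.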
2020-08-19 14:29:10 |
George Kraft |
charm-kubernetes-master: importance |
Undecided |
Medium |
|
2020-08-19 14:29:13 |
George Kraft |
charm-kubernetes-master: status |
New |
Triaged |
|
2020-08-20 15:46:11 |
Cory Johns |
charm-kubernetes-master: assignee |
|
Cory Johns (johnsca) |
|
2021-03-23 16:37:01 |
George Kraft |
summary |
Update to openstack-integrator charm option manage-security-group does not roll out openstack-cloud-controller-manager pods |
Updates to openstack-integrator charm options do not roll out openstack-cloud-controller-manager pods |
|
2021-07-22 05:40:27 |
Nobuto Murata |
bug |
|
|
added subscriber Nobuto Murata |