I have just hit this problem with OpenStack Yoga on Ubuntu Focal and Kubernetes 1.25.4. Troubleshooting led to the conclusion that the image for openstack-cloud-controller-manager is a bit outdated and probably either does not support the `manage-security-groups` config option or has a bug in its security-group logic.
Here's how to reproduce the problem and then update the image for openstack-cloud-controller-manager to confirm the assumption.
STEPS TO REPRODUCE
1. Deploy service with a Load Balancer:
```
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: cdk-cats
  name: cdk-cats
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cdk-cats
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: cdk-cats
    spec:
      containers:
      - image: calvinhartwell/cdk-cats:latest
        imagePullPolicy: ""
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 30
        name: cdk-cats
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: cdk-cats
spec:
  type: LoadBalancer
  selector:
    app: cdk-cats
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
```
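Before moving on to the Load Balancer, it may help to confirm the Deployment itself rolled out cleanly. A small sketch (names match the manifest above, the timeout value is arbitrary):

```shell
# Wait (up to 2 minutes) for the cdk-cats Deployment to become ready.
kubectl rollout status deployment/cdk-cats --timeout=120s

# Optionally confirm all 3 replicas are running.
kubectl get pods -l app=cdk-cats
```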
2. Wait until EXTERNAL-IP is populated:
```
$ kubectl get svc cdk-cats
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cdk-cats LoadBalancer 10.152.183.148 172.27.82.77 80:30060/TCP 3m33s
```
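Step 2 can also be scripted instead of watched by hand. A minimal sketch (the function names are mine, not from any tooling) that polls the Service until the LoadBalancer IP is populated:

```shell
# get_lb_ip prints the first LoadBalancer ingress IP of a Service, if any.
get_lb_ip() {
  kubectl get svc "$1" -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null
}

# wait_for_lb_ip polls every 5 seconds until the IP shows up, then prints it.
wait_for_lb_ip() {
  while ip=$(get_lb_ip "$1"); [ -z "$ip" ]; do
    sleep 5
  done
  echo "$ip"
}

# Usage: wait_for_lb_ip cdk-cats
```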
3. Try to access the service using the Floating IP of the Load Balancer:
```
$ curl http://172.27.82.77/
curl: (52) Empty reply from server
```
^this is incorrect, we should be able to access the service already.
4. Verify that Load Balancer in OpenStack is OK:
```
$ openstack loadbalancer list -f yaml
- id: 467a4d7c-5f96-4084-bfd3-1da70068fa83
  name: kube_service_kubernetes-jlpmnz587dqhnvezivi9crnyt9rtk0cf_default_cdk-cats
  operating_status: ONLINE
  project_id: e54528bf42fd43df90d0990147e617c2
  provider: amphora
  provisioning_status: ACTIVE
  vip_address: 192.168.0.118
```
OK, looks good, it is active and online.
5. Check if the security group rule allowing access to the kubernetes-worker nodes is present:
```
$ openstack security group rule list | grep 30060
```
This is incorrect: the command returns nothing, but a security group rule for NodePort 30060 should already have been created.
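With `manage-security-groups = True`, the controller is expected to add NodePort rules to the security group attached to the worker nodes. To double-check which group that is and what it actually contains, something like the following can be used (the group name `kubernetes-worker` and the `<worker-node>` placeholder are assumptions; substitute your own):

```shell
# Show which security groups are attached to a worker instance.
openstack server show <worker-node> -c security_groups -f yaml

# Dump the ingress rules of the suspected worker group to confirm
# the NodePort rule is really missing.
openstack security group rule list kubernetes-worker --ingress
```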
TROUBLESHOOTING
1. Check the cloud-config secret and make sure `manage-security-groups` is configured
```
$ kubectl get secret -o yaml -n kube-system cloud-config
apiVersion: v1
data:
  cloud.conf: W0dsb2JhbF... [REDACTED]
  endpoint-ca.cert: LS0tLS1CRUdJTi... [REDACTED]
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"cloud.conf":"W0dsb2JhbF... [REDACTED]"},"kind":"Secret","metadata":{"annotations":{},"labels":{"cdk-addons":"true"},"name":"cloud-config","namespace":"kube-system"}}
  creationTimestamp: "2022-11-17T19:32:13Z"
  labels:
    cdk-addons: "true"
  name: cloud-config
  namespace: kube-system
  resourceVersion: "148790"
  uid: 7d84ccc7-c976-4add-b37c-04a55d3e9ef5
type: Opaque

$ echo 'W0dsb2JhbF... [REDACTED]' | base64 -d
[Global]
auth-url = https://keystone.orange.box:5000/v3
region = RegionOne
username = admin
password = [REDACTED]
tenant-name = admin
domain-name = admin_domain
tenant-domain-name = admin_domain
ca-file = /etc/config/endpoint-ca.cert

[LoadBalancer]
use-octavia = true
subnet-id = dd344c91-d5dd-464d-b1cb-f39e4366db9f
floating-network-id = f52e5d35-65bb-4b0b-b5c4-9037cdb50536
lb-method = ROUND_ROBIN
manage-security-groups = True
```
All good, `manage-security-groups = True` is defined.
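As a side note, the manual copy-paste of the base64 string can be avoided by extracting the key directly with jsonpath; a small sketch (the `cloud.conf` key name matches the Secret above):

```shell
# Print the decoded cloud.conf straight from the Secret; note the dot in
# the key name must be escaped inside the jsonpath expression.
kubectl get secret -n kube-system cloud-config \
  -o jsonpath='{.data.cloud\.conf}' | base64 -d
```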
2. Check the image for openstack-cloud-controller-manager
```
$ kubectl get -o yaml ds openstack-cloud-controller-manager -n kube-system | grep image:
image: rocks.canonical.com:443/cdk/k8scloudprovider/openstack-cloud-controller-manager:v1.23.0
```
3. Update the image to a more recent version
```
$ kubectl edit ds openstack-cloud-controller-manager -n kube-system
```
...and update the `image` key to `k8scloudprovider/openstack-cloud-controller-manager:v1.25.3`.
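The same change can be made non-interactively with `kubectl set image`. A sketch, where the container name `openstack-cloud-controller-manager` is an assumption; verify it first with the jsonpath query in the comment:

```shell
# Find the container name(s) in the DaemonSet pod template first:
#   kubectl -n kube-system get ds openstack-cloud-controller-manager \
#     -o jsonpath='{.spec.template.spec.containers[*].name}'
# Then swap the image in one command (no editor session needed):
kubectl -n kube-system set image ds/openstack-cloud-controller-manager \
  openstack-cloud-controller-manager=k8scloudprovider/openstack-cloud-controller-manager:v1.25.3
```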
4. Recreate the deployment of the service with the Load Balancer.
When done, check if the LB works.
-> yes, now it works
Verify the presence of the security group rule:
-> yes, the security group rule is now present, allowing access to the port that the service is listening on:
```
$ openstack security group rule list | grep 30088
| d78d9120-722a-4519-b67a-07cc15fd8343 | tcp | IPv4 | 192.168.0.0/24 | 30088:30088 | ingress | None | None | fa940337-7103-472e-83b9-86791cc326b9 |
```
This leads to the conclusion that the image for openstack-cloud-controller-manager should most likely be updated.