That has the same issue. I used the failure-domain label (failure-domain.beta.kubernetes.io/region=RegionOne) as the node selector for the daemonsets:
---
$ kubectl get all -n kube-system
NAME                                                 READY   STATUS    RESTARTS   AGE
pod/coredns-6bf76f8dc5-z22gc                         1/1     Running   0          21h
pod/csi-cinder-controllerplugin-0                    4/4     Running   1          21h
pod/csi-cinder-nodeplugin-2nd24                      2/2     Running   0          23m
pod/csi-cinder-nodeplugin-kkv6d                      2/2     Running   0          23m
pod/heapster-v1.6.0-beta.1-6747db6947-mq8v8          4/4     Running   0          20h
pod/kube-state-metrics-7c765f4c5c-tvnhs              1/1     Running   0          21h
pod/metrics-server-v0.3.6-75cd4549f8-84tbv           2/2     Running   0          21h
pod/monitoring-influxdb-grafana-v4-7f879555b-qw4fb   2/2     Running   0          21h
pod/openstack-cloud-controller-manager-q5hcn         1/1     Running   0          23m
pod/openstack-cloud-controller-manager-rfwzm         1/1     Running   0          23m

NAME                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
service/csi-cinder-controller-service   ClusterIP   10.152.183.93    <none>        12345/TCP                21h
service/heapster                        ClusterIP   10.152.183.40    <none>        80/TCP                   21h
service/kube-dns                        ClusterIP   10.152.183.227   <none>        53/UDP,53/TCP,9153/TCP   21h
service/kube-state-metrics              ClusterIP   10.152.183.52    <none>        8080/TCP,8081/TCP        21h
service/metrics-server                  ClusterIP   10.152.183.129   <none>        443/TCP                  21h
service/monitoring-grafana              ClusterIP   10.152.183.44    <none>        80/TCP                   21h
service/monitoring-influxdb             ClusterIP   10.152.183.231   <none>        8083/TCP,8086/TCP        21h

NAME                                                DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                        AGE
daemonset.apps/csi-cinder-nodeplugin                2         2         2       2            2           failure-domain.beta.kubernetes.io/region=RegionOne   21h
daemonset.apps/openstack-cloud-controller-manager   2         2         2       2            2           failure-domain.beta.kubernetes.io/region=RegionOne   21h

NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns                          1/1     1            1           21h
deployment.apps/heapster-v1.6.0-beta.1           1/1     1            1           21h
deployment.apps/kube-state-metrics               1/1     1            1           21h
deployment.apps/metrics-server-v0.3.6            1/1     1            1           21h
deployment.apps/monitoring-influxdb-grafana-v4   1/1     1            1           21h

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-6bf76f8dc5                         1         1         1       21h
replicaset.apps/heapster-v1.6.0-beta.1-5cff8964b7          0         0         0       21h
replicaset.apps/heapster-v1.6.0-beta.1-6747db6947          1         1         1       20h
replicaset.apps/heapster-v1.6.0-beta.1-695cb4cb75          0         0         0       21h
replicaset.apps/kube-state-metrics-7c765f4c5c              1         1         1       21h
replicaset.apps/metrics-server-v0.3.6-75cd4549f8           1         1         1       21h
replicaset.apps/monitoring-influxdb-grafana-v4-7f879555b   1         1         1       21h

NAME                                           READY   AGE
statefulset.apps/csi-cinder-controllerplugin   1/1     21h
---
But the logs are still the same: the controller manager is complaining that the ProviderID is not of the form openstack://InstanceID.
The ProviderID is most likely being set by the k8s-worker charm, and I don't believe I can configure the controller manager to ignore it without some under-the-hood modifications.
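For anyone looking at this: the value being rejected lives in each Node's spec, so it can be inspected directly with `kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID` (needs the live cluster, so I can only paste the command). A minimal sketch of the prefix check the error message implies, with a made-up placeholder UUID (not from my cluster):

```shell
# Placeholder ProviderID in the form the error message asks for (UUID is invented):
provider_id="openstack://8a7b2f44-0000-0000-0000-000000000000"

# Strip the expected scheme prefix; whatever remains is taken as the instance ID.
# A ProviderID written by another source (e.g. the charm) would not match this pattern.
case "$provider_id" in
  openstack://*) echo "instance ID: ${provider_id#openstack://}" ;;
  *)             echo "unexpected ProviderID: $provider_id" ;;
esac
```
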
Please advise.