Comment 5 for bug 1878097

Jeff Hillman (jhillman) wrote:

That has the same issue. I used the failure-domain label as the node selector for the daemonsets:

---

$ kubectl get all -n kube-system
NAME                                                 READY   STATUS    RESTARTS   AGE
pod/coredns-6bf76f8dc5-z22gc                         1/1     Running   0          21h
pod/csi-cinder-controllerplugin-0                    4/4     Running   1          21h
pod/csi-cinder-nodeplugin-2nd24                      2/2     Running   0          23m
pod/csi-cinder-nodeplugin-kkv6d                      2/2     Running   0          23m
pod/heapster-v1.6.0-beta.1-6747db6947-mq8v8          4/4     Running   0          20h
pod/kube-state-metrics-7c765f4c5c-tvnhs              1/1     Running   0          21h
pod/metrics-server-v0.3.6-75cd4549f8-84tbv           2/2     Running   0          21h
pod/monitoring-influxdb-grafana-v4-7f879555b-qw4fb   2/2     Running   0          21h
pod/openstack-cloud-controller-manager-q5hcn         1/1     Running   0          23m
pod/openstack-cloud-controller-manager-rfwzm         1/1     Running   0          23m

NAME                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
service/csi-cinder-controller-service   ClusterIP   10.152.183.93    <none>        12345/TCP                21h
service/heapster                        ClusterIP   10.152.183.40    <none>        80/TCP                   21h
service/kube-dns                        ClusterIP   10.152.183.227   <none>        53/UDP,53/TCP,9153/TCP   21h
service/kube-state-metrics              ClusterIP   10.152.183.52    <none>        8080/TCP,8081/TCP        21h
service/metrics-server                  ClusterIP   10.152.183.129   <none>        443/TCP                  21h
service/monitoring-grafana              ClusterIP   10.152.183.44    <none>        80/TCP                   21h
service/monitoring-influxdb             ClusterIP   10.152.183.231   <none>        8083/TCP,8086/TCP        21h

NAME                                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                         AGE
daemonset.apps/csi-cinder-nodeplugin                 2         2         2       2            2           failure-domain.beta.kubernetes.io/region=RegionOne   21h
daemonset.apps/openstack-cloud-controller-manager    2         2         2       2            2           failure-domain.beta.kubernetes.io/region=RegionOne   21h

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns                           1/1     1            1           21h
deployment.apps/heapster-v1.6.0-beta.1            1/1     1            1           21h
deployment.apps/kube-state-metrics                1/1     1            1           21h
deployment.apps/metrics-server-v0.3.6             1/1     1            1           21h
deployment.apps/monitoring-influxdb-grafana-v4    1/1     1            1           21h

NAME                                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-6bf76f8dc5                          1         1         1       21h
replicaset.apps/heapster-v1.6.0-beta.1-5cff8964b7           0         0         0       21h
replicaset.apps/heapster-v1.6.0-beta.1-6747db6947           1         1         1       20h
replicaset.apps/heapster-v1.6.0-beta.1-695cb4cb75           0         0         0       21h
replicaset.apps/kube-state-metrics-7c765f4c5c               1         1         1       21h
replicaset.apps/metrics-server-v0.3.6-75cd4549f8            1         1         1       21h
replicaset.apps/monitoring-influxdb-grafana-v4-7f879555b    1         1         1       21h

NAME                                           READY   AGE
statefulset.apps/csi-cinder-controllerplugin   1/1     21h

---
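
For reference, the selector shown in the NODE SELECTOR column above corresponds to a patch along these lines (a sketch only, not necessarily the exact commands I ran; the same change was made to both daemonsets):

$ kubectl -n kube-system patch daemonset openstack-cloud-controller-manager --type merge \
    -p '{"spec":{"template":{"spec":{"nodeSelector":{"failure-domain.beta.kubernetes.io/region":"RegionOne"}}}}}'
$ # same patch applied to daemonset/csi-cinder-nodeplugin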

But the logs are still the same: it is complaining that the ProviderID is not in the openstack://InstanceID format.
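To pin down exactly what it is objecting to, I can dump the ProviderID each node currently reports (the field is spec.providerID on the Node objects):

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDERID:.spec.providerID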

This is likely being set up by the k8s-worker charm, and I don't believe I can configure the controller manager to ignore it without some under-the-hood modifications.
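If it helps narrow down where the value is coming from, I could run something like this on a worker (the unit name is a guess, and the charm may pass the setting through a kubelet config file rather than a command-line flag, in which case this will print nothing):

$ juju ssh kubernetes-worker/0 'ps -o args= -C kubelet | tr " " "\n" | grep -- --provider-id'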

Please advise.