Deploying k8s 1.30/beta with the 1.30 aws-k8s-storage charm, the charm becomes blocked with the message "aws-ebs-csi-driver: Deployment/kube-system/ebs-csi-controller is not Progressing, aws-ebs-csi-driver: PodDisruptionBudget/kube-system/ebs-csi-controller is not DisruptionAllowed", and both k8s-cp units report that they are waiting for cloud integration. Juju status shows we are running `aws-ebs-csi-driver=v1.12.0`.
Looking at the logs on machine 10, I can see that the node-driver-registrar container is failing to connect to its socket, repeating "Still connecting to unix:///csi/csi.sock" for over an hour,
and the ebs-plugin logs show that it's failing to reach the in-cluster API server:
2024-04-29T19:24:36.661352769Z stderr F panic: error getting Node ip-172-31-32-97.ec2.internal: Get "https://10.152.183.1:443/api/v1/nodes/ip-172-31-32-97.ec2.internal": dial tcp 10.152.183.1:443: i/o timeout
2024-04-29T19:24:36.661383986Z stderr F
2024-04-29T19:24:36.661386809Z stderr F goroutine 1 [running]:
2024-04-29T19:24:36.661388842Z stderr F github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver.newNodeService(0xc00003e7d0)
2024-04-29T19:24:36.661390978Z stderr F /go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver/node.go:101 +0x345
2024-04-29T19:24:36.66139397Z stderr F github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver.NewDriver({0xc000569f30, 0x8, 0x31c5b50?})
2024-04-29T19:24:36.66140506Z stderr F /go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver/driver.go:95 +0x393
2024-04-29T19:24:36.661407205Z stderr F main.main()
2024-04-29T19:24:36.661409056Z stderr F /go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/cmd/main.go:46 +0x37d
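The panic above is a plain TCP i/o timeout to 10.152.183.1:443, which looks like the `kubernetes` service ClusterIP. A minimal sketch of a connectivity probe that can be run from the affected node to confirm the endpoint is unreachable (the helper function and its output format are illustrative, not from the attached logs):

```shell
#!/usr/bin/env bash
# Probe a host:port the same way the ebs-plugin's API call would,
# using bash's /dev/tcp redirection with a short timeout.
probe() {
  local host=$1 port=$2
  if timeout 3 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open: ${host}:${port}"
  else
    echo "unreachable: ${host}:${port}"
  fi
}

# The ClusterIP:port from the panic message.
probe 10.152.183.1 443
```

If this reports unreachable from the node while the API server itself is healthy, it points at the service/overlay networking between the host network namespace and the cluster service range rather than at the CSI driver.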
Logs are attached; the test runs can be found at:
https://solutions.qa.canonical.com/testruns/b135cf2f-31ca-436a-8ca3-c88229c6bbb4/
and
https://solutions.qa.canonical.com/testruns/5681ff1d-d775-4c7c-b263-d4361e0fc08d/
for a focal and a jammy run, respectively.