Comment 7 for bug 2060943

Ian Booth (wallyworld) wrote :

Thanks for the logging. We do have some known issues in 3.x due to the change to asynchronous downloads for charms sourced from charmhub. What might be happening is that the unit agent comes up and tries to download the charm it needs from the controller, but at that moment the controller has not yet finished downloading the charm blob from charmhub (the controller acts as a man in the middle, caching downloaded charms). This causes the agent to see an error. The agent bounces, and by the time it comes back up the charm is in the controller, so the next download attempt succeeds. But...
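A minimal sketch of that race, assuming nothing about Juju's actual internals (the types and names below are purely illustrative):

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// charmCache stands in for the controller's cache of charm blobs.
type charmCache struct {
	mu    sync.Mutex
	blobs map[string][]byte
}

// fetch fails if the background download from charmhub hasn't finished yet.
func (c *charmCache) fetch(url string) ([]byte, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if blob, ok := c.blobs[url]; ok {
		return blob, nil
	}
	return nil, errors.New("charm not yet downloaded from charmhub")
}

func main() {
	cache := &charmCache{blobs: make(map[string][]byte)}

	// The controller downloads the charm from charmhub asynchronously.
	go func() {
		time.Sleep(100 * time.Millisecond)
		cache.mu.Lock()
		cache.blobs["ch:kubeflow-volumes"] = []byte("blob")
		cache.mu.Unlock()
	}()

	// The unit agent asks too early, sees an error, bounces and retries.
	for {
		if _, err := cache.fetch("ch:kubeflow-volumes"); err != nil {
			fmt.Println("agent sees:", err)
			time.Sleep(50 * time.Millisecond)
			continue
		}
		fmt.Println("retry succeeds once the controller has the charm")
		return
	}
}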

The above might explain the log entries below, which indicate an issue that will break things (perhaps the aborted download triggered the panic):

controller-0: 12:24:18 INFO juju.apiserver.connection agent login: application-kubeflow-volumes for 4fde2d04-61f5-4c0d-8806-e7a17ec28548
controller-0: 12:24:30 ERROR juju.worker.dependency "caas-unit-provisioner" manifold worker returned unexpected error: panic resulted in: assignment to entry in nil map
controller-0: 12:24:36 ERROR juju.worker.dependency "caas-unit-provisioner" manifold worker returned unexpected error: panic resulted in: assignment to entry in nil map
controller-0: 12:24:41 ERROR juju.worker.dependency "caas-unit-provisioner" manifold worker returned unexpected error: panic resulted in: assignment to entry in nil map
controller-0: 12:24:42 INFO juju.state LogTailer starting oplog tailing: recent id count=10, lastTime=2024-04-11 10:24:33.097989553 +0000 UTC, minOplogTs=2024-04-11 10:23:33.097989553 +0000 UTC
controller-0: 12:24:48 ERROR juju.worker.dependency "caas-unit-provisioner" manifold worker returned unexpected error: panic resulted in: assignment to entry in nil map
controller-0: 12:24:48 INFO juju.state LogTailer starting oplog tailing: recent id count=106, lastTime=2024-04-11 10:24:33.097989553 +0000 UTC, minOplogTs=2024-04-11 10:23:33.097989553 +0000 UTC
controller-0: 12:24:56 ERROR juju.worker.dependency "caas-unit-provisioner" manifold worker returned unexpected error: panic resulted in: assignment to entry in nil map
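For reference, "assignment to entry in nil map" is the Go runtime panic raised when code writes to a map that was declared but never initialised with make(). A minimal standalone reproduction (not Juju code, names are illustrative only):

package main

func main() {
	// Declaring a map without make() leaves it nil; writing to it panics
	// with "assignment to entry in nil map", the same message seen in the
	// caas-unit-provisioner errors above.
	var scale map[string]int
	scale["kubeflow-volumes"] = 1 // panic: assignment to entry in nil map
}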

Is it possible to use kubectl logs to get the stdout/stderr from the api-server container? We need to get to the source of the panic, which means we need a log file with the stack trace in it. If kubectl logs comes up empty, maybe you could ssh into the container and poke around /var/log/juju.
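For a typical Kubernetes-hosted controller the commands would look something like the following; the namespace, pod and container names here are assumptions based on a default deployment, so adjust them to whatever "kubectl get pods -A" shows:

kubectl -n controller-<controller-name> get pods
kubectl -n controller-<controller-name> logs controller-0 -c api-server
kubectl -n controller-<controller-name> logs controller-0 -c api-server --previous
kubectl -n controller-<controller-name> exec -it controller-0 -c api-server -- ls /var/log/juju

The --previous variant is worth trying because a panic that restarts the container will often leave its stack trace in the previous container's output rather than the current one's.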