I'm running into this with Juju 2.8.0. I added a check to all of the Kubeflow charms that looks like this:
if not hookenv.is_leader():
    layer.status.blocked("this unit is not a leader")
    return False
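For context, here is a minimal sketch of where a check like this typically sits in a reactive charm. The start_charm handler name and charm.started flag are illustrative placeholders rather than the actual Kubeflow charm code, and it assumes the status layer is available:

from charms import layer
from charms.reactive import set_flag, when_not
from charmhelpers.core import hookenv

@when_not('charm.started')
def start_charm():
    # Only the leader configures the workload; follower units stop here.
    if not hookenv.is_leader():
        layer.status.blocked("this unit is not a leader")
        return False

    layer.status.maintenance("configuring workload")
    # ... workload setup would go here ...
    layer.status.active("ready")
    set_flag('charm.started')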
That left me with many extra units of some charms:
argo-controller/0* active idle 10.1.48.187
argo-controller/1 blocked idle 10.1.90.124 this unit is not a leader
argo-controller/2 blocked idle 10.1.90.127 this unit is not a leader
argo-controller/3 blocked idle 10.1.90.128 this unit is not a leader
argo-controller/4 error idle 10.1.90.126 Started container tensorflow-serve
argo-controller/5 blocked idle 10.1.90.125 this unit is not a leader
argo-controller/6 blocked idle 10.1.90.132 this unit is not a leader
argo-controller/7 blocked idle 10.1.90.130 this unit is not a leader
argo-controller/8 blocked idle 10.1.90.133 this unit is not a leader
Other charms worked fine, though. I'm not really sure what is triggering this behavior.