Comment 4 for bug 1871388

Narinder Gupta (narindergupta) wrote : Re: [Bug 1871388] Re: change in podspec causing removing resource and juju units

Ian,
I can confirm that this issue is not seen with the 2.7.6 candidate: all
units come back up with the same unit numbers, and the persistent
volumes are simply reattached.

Thanks and Regards,
Narinder Gupta
Canonical, Ltd.
+1.281.736.5150

Ubuntu- Linux for human beings | www.ubuntu.com | www.canonical.com

On Mon, Apr 13, 2020 at 10:50 PM Ian Booth <email address hidden> wrote:

> When a charm updates the pod spec, that flows through to the StatefulSet
> used to manage the workload pods. Updating a StatefulSet's pod template
> will cause k8s to do a rolling update of its pods, and thus each pod
> will be stopped and restarted. But each pod's PV is reattached.
>
> There was a bug in Juju where this would result in duplicate units
> appearing in the model. It may be the cause of your problem.
>
> Are you able to try again with either the 2.7.6 candidate snap or 2.8
> edge snap?
>
> --
> You received this bug notification because you are subscribed to the bug
> report.
> https://bugs.launchpad.net/bugs/1871388
>
> Title:
> change in podspec causing removing resource and juju units
>
> Status in juju:
> New
>
> Bug description:
> hi,
> I am working on k8s charms for ZooKeeper and Kafka and am stuck on a
> problem where a change in the pod spec causes pods to stop and start,
> which ends up deleting the Juju units and their resources, including
> persistent volumes, and creating new ones. Since I am using persistent
> volumes, each volume also gets deleted and added again when a pod stops
> and starts.
>
> To reproduce the problem:
> deploy zookeeper with 3 units:
> juju deploy cs:~narindergupta/charm-k8s-zookeeper-1 -n3
> wait for the units to be ready (the status update may take 5-10 minutes)
>
> juju status
> Model  Controller          Cloud/Region        Version  SLA          Timestamp
> look   microk8s-localhost  microk8s/localhost  2.7.5    unsupported  17:15:45Z
>
> App            Version                         Status  Scale  Charm                Store       Rev  OS          Address         Notes
> zookeeper-k8s  rocks.canonical.com:443/k8s...  active  3      charm-k8s-zookeeper  jujucharms  1    kubernetes  10.152.183.137
>
> Unit              Workload     Agent  Address     Ports                       Message
> zookeeper-k8s/0   maintenance  idle   10.1.31.14  2888/TCP,2181/TCP,3888/TCP  config changing
> zookeeper-k8s/1   active       idle   10.1.31.15  2888/TCP,2181/TCP,3888/TCP  ready Not a Leader
> zookeeper-k8s/2*  active       idle   10.1.31.16  2888/TCP,2181/TCP,3888/TCP  ready
>
>
> Enable ha-mode:
> juju config zookeeper-k8s ha-mode=true
> The above command causes a pod spec change, which exhibits the behavior.
>
> juju status
> Model  Controller          Cloud/Region        Version  SLA          Timestamp
> look   microk8s-localhost  microk8s/localhost  2.7.5    unsupported  17:24:48Z
>
> App            Version                         Status  Scale  Charm                Store       Rev  OS          Address         Notes
> zookeeper-k8s  rocks.canonical.com:443/k8s...  active  3      charm-k8s-zookeeper  jujucharms  1    kubernetes  10.152.183.137
>
> Unit              Workload  Agent  Address     Ports                       Message
> zookeeper-k8s/3*  active    idle   10.1.31.17  2888/TCP,2181/TCP,3888/TCP  ready
> zookeeper-k8s/4   active    idle   10.1.31.18  2888/TCP,2181/TCP,3888/TCP  ready Not a Leader
> zookeeper-k8s/5   active    idle   10.1.31.19  2888/TCP,2181/TCP,3888/TCP  ready Not a Leader
>
>
>
> Thanks and Regards,
> Narinder Gupta
>
> To manage notifications about this bug go to:
> https://bugs.launchpad.net/juju/+bug/1871388/+subscriptions
>