leftover ReplicaSets when using scalePolicy: serial on k8s charm
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Canonical Juju | Fix Released | High | Yang Kelvin Liu |
Bug Description
When upgrading a charm with files and a scalePolicy of "serial" in a way
that causes a new unit to be started, Juju will leave behind an unused
ReplicaSet with replicas: 0.
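To make the symptom concrete, here is a minimal sketch (not Juju's code) of how you could spot the leftovers: given the JSON that `kubectl -n traefik get rs -o json` would print, it lists the ReplicaSets whose desired replica count is 0. The sample data and names below are invented for illustration.

```python
import json

# Invented sample of `kubectl get rs -o json` output: one live
# ReplicaSet and one leftover that was scaled down to 0 replicas.
sample = json.loads("""
{
  "items": [
    {"metadata": {"name": "foo-5d4f9c8b7"}, "spec": {"replicas": 1}},
    {"metadata": {"name": "foo-7b6d5f4a2"}, "spec": {"replicas": 0}}
  ]
}
""")

def leftover_replicasets(rs_list):
    """Return the names of ReplicaSets whose desired replica count is 0."""
    return [item["metadata"]["name"]
            for item in rs_list["items"]
            if item["spec"].get("replicas") == 0]

print(leftover_replicasets(sample))  # -> ['foo-7b6d5f4a2']
```

After each serial upgrade described below, one more entry joins the zero-replica list instead of being deleted.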
$ cat config/podspec.yaml
version: 2
service:
containers:
  - name: foo
    ports:
      - containerPort: 80
    files:
      - name: foo
$ juju deploy .
Deploying charm "local:
$ kubectl -n traefik get all
NAME READY STATUS RESTARTS AGE
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
NAME READY UP-TO-DATE AVAILABLE AGE
NAME DESIRED CURRENT READY AGE
NAME READY AGE
If you make a change to the podspec so that a new unit is deployed (I
just changed the name and mountPath of the file, which seems to work
reliably), then upgrade the charm and wait a moment, you get one new
ReplicaSet and one left over. Note this doesn't occur if the worker is
updated in place (i.e. if it doesn't start a new unit), presumably
because the scalePolicy doesn't come into play in that case.
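As an illustration, a podspec edit of the following kind renames the file entry and its mountPath, which forces a new unit to roll out (the names and path here are hypothetical, not the exact ones used in the report):

```yaml
version: 2
containers:
  - name: foo
    ports:
      - containerPort: 80
    files:
      - name: foo-v2          # renamed from "foo"
        mountPath: /etc/foo-v2
```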
$ juju upgrade-charm --path . traefik
Added charm "local:
$ kubectl -n traefik get rs
NAME DESIRED CURRENT READY AGE
Do this a few more times and you get a new one for each unit:
$ kubectl -n traefik get rs
NAME DESIRED CURRENT READY AGE
These old ReplicaSets have replicas set to 0 (rather than being
deleted, as we would expect).
$ kubectl -n traefik get replicaset.
# https:/
Thanks for looking, and sorry for the delay getting these filed.
Evan
Changed in juju:
assignee: nobody → Ian Booth (wallyworld)
status: New → Triaged
assignee: Ian Booth (wallyworld) → nobody
importance: Undecided → High
Changed in juju:
status: In Progress → Fix Committed
Changed in juju:
status: Fix Committed → Fix Released
https://github.com/juju/juju/pull/11269 will land in 2.8 to fix it;