Juju Persistent Volume Claim retention inconsistent with K8s's when not using --destroy-storage flag

Bug #1995466 reported by Mehdi B.
This bug affects 6 people
Affects: Canonical Juju
Status: Fix Released
Importance: High
Assigned to: Harry Pidcock

Bug Description

-----
*Bug description:*

We noticed that Juju's management of persistent volume claims is inconsistent with the default Kubernetes behavior.

When scaling down, either with "juju scale-application" or "juju remove-unit" (without having set the --destroy-storage flag), PVCs and PVs are retained as expected.
However, as soon as a new scale-up event occurs (either via "juju scale-application" or "juju add-unit"), the previously retained Persistent Volume Claim is immediately terminated.

This prevents the PVC from being re-attached to the new unit, which is the default Kubernetes behavior for StatefulSets.
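
For context, the default StatefulSet behavior referred to above can be illustrated with a small sketch, assuming the standard Kubernetes PVC naming convention `<volumeClaimTemplate>-<statefulset>-<ordinal>` (the `database` and `mysql` names below are placeholders, not taken from the report):

```python
def pvc_names(claim_template: str, statefulset: str, replicas: int) -> list:
    """PVC names a StatefulSet expects, per the <claim>-<set>-<ordinal> convention."""
    return [f"{claim_template}-{statefulset}-{i}" for i in range(replicas)]

# Scale down from 3 to 2 replicas: the claim for ordinal 2 is retained, not deleted.
before = set(pvc_names("database", "mysql", 3))
after = set(pvc_names("database", "mysql", 2))
retained = before - after  # {'database-mysql-2'}

# Scale back up to 3: the StatefulSet requests the same name again,
# so Kubernetes binds the recreated pod to the retained claim.
assert pvc_names("database", "mysql", 3)[2] in retained
```

Because the recreated pod requests a claim with exactly the same name, Kubernetes binds it to the retained PVC rather than provisioning a new one; that re-attachment is what the report expects Juju to preserve.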

-----
*To Reproduce:*

- juju deploy mysql -n 3
- juju scale-application mysql 2 (or: juju remove-unit mysql --num-units 1)
- (wait a few seconds or minutes)
- juju scale-application mysql 3 (or: juju add-unit mysql --num-units 1)

-----
*Environment:*

- Juju: 2.9.35
- MicroK8s: 1.21.13

-----
*Relevant Attachments:*

Please find attached the screenshots with timestamps detailing this behavior: https://imgur.com/a/FrBiME3

-----
*Additional Context:*

Fixing this bug would likely also resolve the issue detailed here: https://discourse.charmhub.io/t/feature-request-allow-reuse-of-detached-persistent-volumes-by-new-units-in-juju-k8s/6849

-----
Thank you,

Revision history for this message
Thomas Miller (tlmiller) wrote :

Thanks for the bug information. I will take a look at this.

Changed in juju:
importance: Undecided → High
assignee: nobody → Thomas Miller (tlmiller)
milestone: none → 2.9.37
Changed in juju:
milestone: 2.9.37 → 2.9.38
Ian Booth (wallyworld)
Changed in juju:
status: New → In Progress
Revision history for this message
Thomas Miller (tlmiller) wrote :

Just an update on this bug. I have had a look and there does seem to be an inconsistency between how Juju handles this and how Kubernetes does. We should probably not be interfering with the operation of the StatefulSet controller in Kubernetes.

Will start working on a fix for this after the tear down issue lands.

Changed in juju:
status: In Progress → Triaged
Changed in juju:
milestone: 2.9.38 → 2.9.39
Changed in juju:
milestone: 2.9.39 → 2.9.40
Changed in juju:
milestone: 2.9.40 → 2.9.41
Changed in juju:
milestone: 2.9.41 → 2.9.42
Changed in juju:
milestone: 2.9.42 → 2.9.43
Revision history for this message
Alex Lutay (taurus) wrote :

Dear Juju team,

JFYI: after testing the 2.9.43 tear-down PoC, this is the most visible issue preventing the mysql-k8s and postgresql-k8s charms from scaling the database up and down.

Looking forward to the fix; happy to help with PoC testing.

Revision history for this message
Alex Lutay (taurus) wrote :

Just to share with teammates:

Thank you, Alex,

Earlier this week I was able to reproduce and resolve the storage issue with your helpful report. This should land with the sidecar teardown fix sometime next week and be available in 2.9/edge.

I've also added a roadmap item for the next cycle to allow us to optionally reuse storage by reattaching (e.g. something like deploy pg, remove pg without --destroy-storage, deploy pg again with --attach-storage old-pg-storage). It would be valuable to discuss this with you next week.

Kind regards,
Harry Pidcock

Revision history for this message
Harry Pidcock (hpidcock) wrote :
Changed in juju:
assignee: Thomas Miller (tlmiller) → Harry Pidcock (hpidcock)
status: Triaged → In Progress
Revision history for this message
Alex Lutay (taurus) wrote :

Dear Harry, re:

> I've also added a roadmap item for the next cycle to allow us to optionally reuse storage by reattaching (e.g. something like deploy pg, remove pg without --destroy-storage, deploy pg again with --attach-storage old-pg-storage).

Could you please share the ticket number for internal referencing? Thanks!

P.S. The lack of `juju deploy postgresql-k8s --attach-storage <old-pg-storage>` is the next most visible gap in Juju's K8s functionality.

Changed in juju:
milestone: 2.9.43 → 2.9.44
Harry Pidcock (hpidcock)
Changed in juju:
status: In Progress → Fix Committed
milestone: 2.9.44 → 2.9.43
Changed in juju:
status: Fix Committed → Fix Released