InfluxDB/Grafana not persistent
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
CDK Addons | Fix Released | High | George Kraft |
Kubernetes Control Plane Charm | Fix Released | High | George Kraft |
Bug Description
It appears that the monitoring-
    spec:
      containers:
      - volumeMounts:
        - mountPath: /data
          name: influxdb-
      - env:
        volumeMounts:
        - mountPath: /var
          name: grafana-
        - mountPath: /etc/grafana
          name: grafana-
      volumes:
      - emptyDir: {}
        name: influxdb-
      - emptyDir: {}
        name: grafana-
      - emptyDir: {}
        name: grafana-
As configured, if the pod is recreated, all data is lost, since emptyDir volumes only live as long as the pod. The customer in question would like this data to be persistent.
Note: This seems like it may be related to https:/
Desired: the ability either to have this be persistent by default or some way of controlling persistence via the charm.
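For illustration, one way this could be made persistent (a sketch only, not the actual fix that landed; the claim name, size, and namespace are assumptions) is to back the data directory with a PersistentVolumeClaim instead of an emptyDir:

```yaml
# Hypothetical sketch: a PVC to replace the emptyDir backing /data.
# The claim name and requested size are illustrative assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb-data        # hypothetical claim name
  namespace: kube-system
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Gi
---
# The corresponding volume entry in the deployment's pod spec would
# then reference the claim rather than declaring an emptyDir:
# volumes:
# - name: influxdb-...
#   persistentVolumeClaim:
#     claimName: influxdb-data
```

With a PVC, the data survives pod deletion and rescheduling, provided the cluster has a default StorageClass (or one is named explicitly in the claim).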
tags: added: sts
Changed in charm-kubernetes-master:
  status: New → Triaged
  importance: Undecided → Medium
Changed in charm-kubernetes-master:
  assignee: nobody → Joseph Borg (joeborg)
  status: Triaged → In Progress
Changed in charm-kubernetes-master:
  status: Incomplete → In Progress
  importance: Medium → High
Changed in charm-kubernetes-master:
  milestone: none → 1.16
Changed in cdk-addons:
  status: New → In Progress
  importance: Undecided → High
  milestone: none → 1.16
Changed in cdk-addons:
  assignee: nobody → George Kraft (cynerva)
Changed in charm-kubernetes-master:
  assignee: Seyeong Kim (xtrusia) → George Kraft (cynerva)
Changed in charm-kubernetes-master:
  status: In Progress → Fix Committed
Changed in cdk-addons:
  status: In Progress → Won't Fix
  status: Won't Fix → Fix Committed
Changed in cdk-addons:
  status: Fix Committed → Fix Released
Changed in charm-kubernetes-master:
  status: Fix Committed → Fix Released
Hi Paul,
I've tried to replicate this by bringing up the cluster (1.15 plus the steps from https://ubuntu.com/kubernetes/docs/monitoring and juju config grafana install_method=snap), followed by
$ kubectl scale deployment monitoring-influxdb-grafana-v4 --replicas=0 -n kube-system
...wait for removal...
$ kubectl scale deployment monitoring-influxdb-grafana-v4 --replicas=1 -n kube-system
...wait for ready...
This seems to retain the data for me. Can you advise how else to trigger it?
Many thanks,
Joe