InfluxDB/Grafana not persistent

Bug #1830118 reported by Paul Goins
Affects                         Status        Importance  Assigned to   Milestone
CDK Addons                      Fix Released  High        George Kraft  1.16
Kubernetes Control Plane Charm  Fix Released  High        George Kraft  1.16

Bug Description

It appears that the monitoring-influxdb-grafana-v4 pod is not persisting InfluxDB/Grafana data. The following was extracted from the pod's YAML on a customer's deployment with the irrelevant bits taken out:

spec:
  containers:
  - volumeMounts:
    - mountPath: /data
      name: influxdb-persistent-storage
  - env:
    volumeMounts:
    - mountPath: /var
      name: grafana-persistent-storage
    - mountPath: /etc/grafana
      name: grafana-persistent-config
  volumes:
  - emptyDir: {}
    name: influxdb-persistent-storage
  - emptyDir: {}
    name: grafana-persistent-storage
  - emptyDir: {}
    name: grafana-persistent-config

As configured, all three volumes are emptyDir, so if the pod is deleted or rescheduled, all data is lost. The customer in question would like this data to be persistent.

Note: This seems like it may be related to https://github.com/kubernetes-retired/heapster/issues/768 to some extent as well.

Desired: either have this be persistent by default, or provide some way of controlling persistence via the charm.
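
For illustration only, a minimal sketch of what the volumes section could look like if backed by PersistentVolumeClaims instead of emptyDir (the claim names are hypothetical, not taken from the charm):

volumes:
- name: influxdb-persistent-storage
  persistentVolumeClaim:
    claimName: influxdb-data        # hypothetical claim name
- name: grafana-persistent-storage
  persistentVolumeClaim:
    claimName: grafana-data         # hypothetical claim name
- name: grafana-persistent-config
  persistentVolumeClaim:
    claimName: grafana-config       # hypothetical claim name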

Tags: sts
Changed in charm-kubernetes-master:
status: New → Triaged
importance: Undecided → Medium
Joseph Borg (joeborg)
Changed in charm-kubernetes-master:
assignee: nobody → Joseph Borg (joeborg)
status: Triaged → In Progress
Joseph Borg (joeborg) wrote:

Hi Paul,

I've tried to replicate this by bringing up a cluster (1.15, plus the steps from https://ubuntu.com/kubernetes/docs/monitoring and juju config grafana install_method=snap), followed by

$ kubectl scale deployment monitoring-influxdb-grafana-v4 --replicas=0 -n kube-system

...wait for removal...

$ kubectl scale deployment monitoring-influxdb-grafana-v4 --replicas=1 -n kube-system

...wait for ready...

This seems to retain the data for me. Can you advise how else to trigger it?

Many thanks,
Joe
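
One way to check whether the data actually survives the pod being recreated is to drop a marker file into the volume before scaling down. This is a sketch; the label selector (k8s-app=influxGrafana) and container name (influxdb) are assumptions, since neither appears in the pod spec above:

$ POD=$(kubectl -n kube-system get pod -l k8s-app=influxGrafana -o jsonpath='{.items[0].metadata.name}')
$ kubectl -n kube-system exec $POD -c influxdb -- touch /data/marker
$ kubectl -n kube-system scale deployment monitoring-influxdb-grafana-v4 --replicas=0
$ kubectl -n kube-system scale deployment monitoring-influxdb-grafana-v4 --replicas=1
$ POD=$(kubectl -n kube-system get pod -l k8s-app=influxGrafana -o jsonpath='{.items[0].metadata.name}')
$ kubectl -n kube-system exec $POD -c influxdb -- ls /data/marker    # fails if the emptyDir was recreated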

Changed in charm-kubernetes-master:
status: In Progress → Incomplete
Seyeong Kim (seyeongkim) wrote:

Hello Joseph,

Because these are emptyDir volumes rather than persistent volumes, the data can be lost in theory.

I tested the code below personally and was able to set up a proper persistent volume, but I think you need to review whether or not it is acceptable (or should I just make a PR?).

https://pastebin.ubuntu.com/p/3ZBXvrPjrD/

Thanks.
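
The pastebin above is not reproduced here; for illustration only, a PersistentVolumeClaim of the kind such a change would need might look like the following, where the name, namespace placement, and size are assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb-data        # hypothetical name, matching the sketch in the description
  namespace: kube-system     # assumed, since the pod runs in kube-system
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi          # size is an assumption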

Chris Sanders (chris.sanders) wrote:

Subscribing Field High. This issue continues, and the affected site has once again lost all of its data.

Joseph Borg (joeborg)
Changed in charm-kubernetes-master:
status: Incomplete → In Progress
importance: Medium → High
Tim Van Steenburgh (tvansteenburgh) wrote:

Per chat with Billy Olsen on Aug 22, I was told to expect a PR from Seyeong Kim for this.

Changed in charm-kubernetes-master:
assignee: Joseph Borg (joeborg) → Seyeong Kim (xtrusia)
Changed in charm-kubernetes-master:
milestone: none → 1.16
Changed in cdk-addons:
status: New → In Progress
importance: Undecided → High
milestone: none → 1.16
Changed in cdk-addons:
assignee: nobody → George Kraft (cynerva)
Changed in charm-kubernetes-master:
assignee: Seyeong Kim (xtrusia) → George Kraft (cynerva)
George Kraft (cynerva)
Changed in charm-kubernetes-master:
status: In Progress → Fix Committed
George Kraft (cynerva)
Changed in cdk-addons:
status: In Progress → Won't Fix
status: Won't Fix → Fix Committed
Changed in cdk-addons:
status: Fix Committed → Fix Released
Changed in charm-kubernetes-master:
status: Fix Committed → Fix Released