I somewhat reproduced this. Deployed this bundle:
## Variables
loglevel: &loglevel 1
source: &source distro
num_mon_units: &num_mon_units 3

series: jammy
applications:
  ceph-mon:
    charm: ch:ceph-mon
    channel: quincy/edge
    series: jammy
    num_units: *num_mon_units
    constraints: mem=2G
    options:
      source: *source
      loglevel: *loglevel
      monitor-count: *num_mon_units
      monitor-secret: ...
      expected-osd-count: 3
  ceph-osd:
    charm: ch:ceph-osd
    channel: quincy/edge
    series: jammy
    num_units: 1
    constraints: mem=1G
    options:
      source: *source
      loglevel: *loglevel
      osd-devices: ''  # must be empty string when using juju storage
    storage:
      osd-devices: cinder,10G,1
relations:
- [ ceph-mon, ceph-osd ]
I then set one mon's clock 15 seconds into the future and got the corresponding health warning:
clock skew detected on mon.juju-1e1b6a-lp2055143-14
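For reference, the skew can be induced along these lines. The unit name ceph-mon/2 is only an example, and the commands assume GNU date and systemd's timedatectl on the mon machine:

```shell
# Compute a target time 15 seconds ahead, in UTC:
TARGET="$(date -u -d '+15 seconds' '+%Y-%m-%d %H:%M:%S')"
echo "will set mon clock to: ${TARGET}"

# On one mon machine, stop NTP sync so the jump sticks, then set the clock
# (requires the deployed model; unit name is illustrative):
#   juju ssh ceph-mon/2 -- sudo timedatectl set-ntp false
#   juju ssh ceph-mon/2 -- sudo date -u -s "${TARGET}"
```

The warning appears once the offset exceeds mon_clock_drift_allowed (0.05 s by default), so 15 s is comfortably over the threshold.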
The initial memory condition was:
ubuntu@fandanbango-bastion:~$ juju run -a ceph-mon -- free -h
- Stdout: |2
                 total        used        free      shared  buff/cache   available
    Mem:         1.9Gi       473Mi       123Mi       4.0Mi       1.3Gi       1.3Gi
    Swap:           0B          0B          0B
  UnitId: ceph-mon/6
- Stdout: |2
                 total        used        free      shared  buff/cache   available
    Mem:         1.9Gi       477Mi       130Mi       4.0Mi       1.3Gi       1.3Gi
    Swap:           0B          0B          0B
  UnitId: ceph-mon/7
- Stdout: |2
                 total        used        free      shared  buff/cache   available
    Mem:         1.9Gi       501Mi        95Mi       4.0Mi       1.3Gi       1.3Gi
    Swap:           0B          0B          0B
  UnitId: ceph-mon/8
Five hours later the situation was:
ubuntu@fandanbango-bastion:~/lp2055143$ juju run -a ceph-mon -- free -h
- Stdout: |2
                 total        used        free      shared  buff/cache   available
    Mem:         1.9Gi       554Mi        68Mi       4.0Mi       1.3Gi       1.2Gi
    Swap:           0B          0B          0B
  UnitId: ceph-mon/6
- Stdout: |2
                 total        used        free      shared  buff/cache   available
    Mem:         1.9Gi       558Mi        67Mi       4.0Mi       1.3Gi       1.2Gi
    Swap:           0B          0B          0B
  UnitId: ceph-mon/7
- Stdout: |2
                 total        used        free      shared  buff/cache   available
    Mem:         1.9Gi       859Mi        64Mi       4.0Mi       1.0Gi       928Mi
    Swap:           0B          0B          0B
  UnitId: ceph-mon/8
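The notable data point is ceph-mon/8 (the skewed mon), whose used memory grew from 501Mi to 859Mi while the other two grew by only ~80Mi. A quick awk sketch for pulling the "used" column out of the samples above, to put a number on that growth:

```shell
# Extract the 'used' column from a `free -h` Mem: line; the sample lines
# are the ceph-mon/8 readings quoted above.
used() { echo "$1" | awk '/^Mem:/ {print $3}'; }

before="$(used 'Mem: 1.9Gi 501Mi 95Mi 4.0Mi 1.3Gi 1.3Gi')"
after="$(used  'Mem: 1.9Gi 859Mi 64Mi 4.0Mi 1.0Gi 928Mi')"
echo "ceph-mon/8 used: ${before} -> ${after}"

# Strip the Mi suffix and compute the delta (both samples are in MiB here):
echo "growth: $(( ${after%Mi} - ${before%Mi} )) MiB over ~5 h"
```

That works out to 358 MiB of growth over roughly five hours on the skewed unit.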
I've re-enabled clock synchronization via NTP and will keep monitoring to see whether the memory trend continues or flattens out.
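The re-sync step looks roughly like the following; again, the unit name is a placeholder for whichever mon was skewed, not a name from this report:

```shell
SKEWED_UNIT="ceph-mon/2"   # hypothetical skewed unit

# Re-enable NTP-driven synchronization on that machine, then confirm the
# skew warning clears (requires the deployed model):
#   juju ssh "${SKEWED_UNIT}" -- sudo timedatectl set-ntp true
#   juju ssh ceph-mon/leader -- sudo ceph health detail
echo "re-sync sketch for ${SKEWED_UNIT}"
```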