Comment 3 for bug 2071780

Nobuto Murata (nobuto) wrote:

[un-patched charm from quincy/stable rev. 65, with only some log statements added]

-> The config_changed function is called in every update-status hook.

diff --git a/charm.orig/reactive/ceph_fs.py b/charm/reactive/ceph_fs.py
index 8dc9898..b820b36 100644
--- a/charm.orig/reactive/ceph_fs.py
+++ b/charm/reactive/ceph_fs.py
@@ -68,6 +68,8 @@ def config_changed():
                 subprocess.check_call(['sudo', 'systemctl',
                                        'reset-failed', svc])
                 subprocess.check_call(['sudo', 'systemctl', 'restart', svc])
+                ch_core.hookenv.log(f'⚠️ systemctl restart {svc}',
+                                    ch_core.hookenv.WARNING)
             except subprocess.CalledProcessError as exc:
                 # The service can be temporarily masked when booting, so
                 # skip that class of errors.

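For context on why this can fire during update-status at all: charms.reactive re-queues any handler whose gating flag is cleared and then set again within the same dispatch. A minimal sketch of that mechanism, assuming config_changed is gated on ceph-mds.pools.available as the tracer output below suggests (the decorator and the flag toggling here are illustrative, not the charm's exact source):

from charms import reactive

@reactive.when('ceph-mds.pools.available')
def config_changed():
    # Re-queued whenever the flag above transitions to set, no matter
    # which hook (including update-status) is currently running.
    ...

# The ceph-mds interface layer effectively does this on each hook run
# (compare "cleared flag" / "set flag" in the tracer output below):
reactive.clear_flag('ceph-mds.pools.available')
reactive.set_flag('ceph-mds.pools.available')
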
unit-ceph-fs-0: 11:57:15 DEBUG unit.ceph-fs/0.juju-log tracer>
tracer: hooks phase, 1 handlers queued
tracer: ++ queue handler reactive/layer_openstack.py:64:default_update_status
unit-ceph-fs-0: 11:57:15 INFO unit.ceph-fs/0.juju-log Invoking reactive handler: reactive/layer_openstack.py:64:default_update_status
unit-ceph-fs-0: 11:57:15 DEBUG unit.ceph-fs/0.juju-log tracer: set flag run-default-update-status
unit-ceph-fs-0: 11:57:15 DEBUG unit.ceph-fs/0.juju-log tracer: set flag is-update-status-hook
unit-ceph-fs-0: 11:57:15 DEBUG unit.ceph-fs/0.juju-log tracer>
tracer: main dispatch loop, 7 handlers queued
tracer: ++ queue handler hooks/relations/ceph-mds/requires.py:31:joined:ceph-mds
tracer: ++ queue handler hooks/relations/ceph-mds/requires.py:35:changed:ceph-mds
tracer: ++ queue handler hooks/relations/tls-certificates/requires.py:117:broken:certificates
tracer: ++ queue handler reactive/ceph_fs.py:80:storage_ceph_connected
tracer: ++ queue handler reactive/layer_openstack.py:170:default_config_rendered
tracer: ++ queue handler reactive/layer_openstack.py:82:check_really_is_update_status
tracer: ++ queue handler reactive/layer_openstack.py:93:run_default_update_status
unit-ceph-fs-0: 11:57:15 INFO unit.ceph-fs/0.juju-log Invoking reactive handler: reactive/ceph_fs.py:80:storage_ceph_connected
unit-ceph-fs-0: 11:57:15 INFO unit.ceph-fs/0.juju-log Request already sent but not complete, not sending new request
unit-ceph-fs-0: 11:57:15 DEBUG unit.ceph-fs/0.juju-log tracer: cleared flag ceph-mds.pools.available
unit-ceph-fs-0: 11:57:15 INFO unit.ceph-fs/0.juju-log Request already sent but not complete, not sending new request
unit-ceph-fs-0: 11:57:15 INFO unit.ceph-fs/0.juju-log Request already sent but not complete, not sending new request
unit-ceph-fs-0: 11:57:15 INFO unit.ceph-fs/0.juju-log Invoking reactive handler: reactive/layer_openstack.py:82:check_really_is_update_status
unit-ceph-fs-0: 11:57:15 INFO unit.ceph-fs/0.juju-log Invoking reactive handler: reactive/layer_openstack.py:93:run_default_update_status
unit-ceph-fs-0: 11:57:15 DEBUG unit.ceph-fs/0.juju-log tracer>
tracer: cleared flag run-default-update-status
tracer: -- dequeue handler reactive/layer_openstack.py:93:run_default_update_status
unit-ceph-fs-0: 11:57:15 INFO unit.ceph-fs/0.juju-log Invoking reactive handler: reactive/layer_openstack.py:170:default_config_rendered
unit-ceph-fs-0: 11:57:15 DEBUG unit.ceph-fs/0.update-status enabled
unit-ceph-fs-0: 11:57:15 DEBUG unit.ceph-fs/0.juju-log service ceph-mds@juju-795de1-0-lxd-0 already enabled
unit-ceph-fs-0: 11:57:15 DEBUG unit.ceph-fs/0.update-status active
unit-ceph-fs-0: 11:57:15 INFO unit.ceph-fs/0.juju-log Invoking reactive handler: hooks/relations/tls-certificates/requires.py:117:broken:certificates
unit-ceph-fs-0: 11:57:15 INFO unit.ceph-fs/0.juju-log Invoking reactive handler: hooks/relations/ceph-mds/requires.py:31:joined:ceph-mds
unit-ceph-fs-0: 11:57:15 INFO unit.ceph-fs/0.juju-log Invoking reactive handler: hooks/relations/ceph-mds/requires.py:35:changed:ceph-mds
unit-ceph-fs-0: 11:57:15 INFO unit.ceph-fs/0.juju-log changed broker_req: [{'app-name': 'cephfs', 'compression-algorithm': None, 'compression-max-blob-size': None, 'compression-max-blob-size-hdd': None, 'compression-max-blob-size-ssd': None, 'compression-min-blob-size': None, 'compression-min-blob-size-hdd': None, 'compression-min-blob-size-ssd': None, 'compression-mode': None, 'compression-required-ratio': None, 'crush-profile': None, 'group': None, 'group-namespace': None, 'max-bytes': None, 'max-objects': None, 'name': 'ceph-fs_data', 'op': 'create-pool', 'pg_num': None, 'rbd-mirroring-mode': 'pool', 'replicas': 3, 'weight': 4.0}, {'app-name': 'cephfs', 'compression-algorithm': None, 'compression-max-blob-size': None, 'compression-max-blob-size-hdd': None, 'compression-max-blob-size-ssd': None, 'compression-min-blob-size': None, 'compression-min-blob-size-hdd': None, 'compression-min-blob-size-ssd': None, 'compression-mode': None, 'compression-required-ratio': None, 'crush-profile': None, 'group': None, 'group-namespace': None, 'max-bytes': None, 'max-objects': None, 'name': 'ceph-fs_metadata', 'op': 'create-pool', 'pg_num': None, 'rbd-mirroring-mode': 'pool', 'replicas': 3, 'weight': 1.0}, {'data_pool': 'ceph-fs_data', 'extra_pools': [], 'mds_name': 'ceph-fs', 'metadata_pool': 'ceph-fs_metadata', 'op': 'create-cephfs'}]
unit-ceph-fs-0: 11:57:15 INFO unit.ceph-fs/0.juju-log Setting ceph-client.pools.available
unit-ceph-fs-0: 11:57:15 DEBUG unit.ceph-fs/0.juju-log tracer>
tracer: set flag ceph-mds.pools.available
tracer: ++ queue handler reactive/ceph_fs.py:42:config_changed
unit-ceph-fs-0: 11:57:15 INFO unit.ceph-fs/0.juju-log Invoking reactive handler: reactive/ceph_fs.py:42:config_changed
unit-ceph-fs-0: 11:57:15 DEBUG unit.ceph-fs/0.update-status creating /var/lib/ceph/mds/ceph-juju-795de1-0-lxd-0/keyring
unit-ceph-fs-0: 11:57:15 DEBUG unit.ceph-fs/0.update-status added entity mds.juju-795de1-0-lxd-0 auth(key=AQAwZqdmsQVCBxAAq4AI6aPhqFoe8K9/XwcaMQ==)
unit-ceph-fs-0: 11:57:15 DEBUG unit.ceph-fs/0.juju-log Changing permissions on existing content: 33184 -> 416
unit-ceph-fs-0: 11:57:22 WARNING unit.ceph-fs/0.juju-log ⚠️ systemctl restart ceph-mds@juju-795de1-0-lxd-0
unit-ceph-fs-0: 11:57:22 DEBUG unit.ceph-fs/0.juju-log Running _assess_status()
unit-ceph-fs-0: 11:57:22 DEBUG unit.ceph-fs/0.update-status active
unit-ceph-fs-0: 11:57:22 DEBUG unit.ceph-fs/0.juju-log tracer>
tracer: cleared flag is-update-status-hook
tracer: ++ queue handler hooks/relations/ceph-mds/requires.py:31:joined:ceph-mds
tracer: ++ queue handler hooks/relations/ceph-mds/requires.py:35:changed:ceph-mds
tracer: ++ queue handler hooks/relations/tls-certificates/requires.py:117:broken:certificates
tracer: ++ queue handler reactive/ceph_fs.py:42:config_changed
tracer: ++ queue handler reactive/ceph_fs.py:80:storage_ceph_connected
tracer: ++ queue handler reactive/layer_openstack.py:170:default_config_rendered
unit-ceph-fs-0: 11:57:22 INFO juju.worker.uniter.operation ran "update-status" hook (via explicit, bespoke hook script)
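
One possible mitigation (a sketch only, not a tested patch): gate the handler on the is-update-status-hook flag that layer-openstack already sets, as seen in the tracer output above, so the restart path is skipped during update-status:

from charms import reactive

@reactive.when('ceph-mds.pools.available')
@reactive.when_not('is-update-status-hook')  # flag set by layer-openstack
def config_changed():
    # With the extra guard, the handler no longer runs (and no longer
    # restarts ceph-mds) from within an update-status hook.
    ...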