Initialization of realm, zonegroup, zones, etc. may also be necessary in a single-site scenario

Bug #1941715 reported by Frode Nordahl
Affects: Ceph RADOS Gateway Charm
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

In a focal-ussuri deployment using the -next charms with a single ceph-radosgw unit, I got into a situation where no realm, zonegroup, or zones existed.

The charm currently assumes that the realm and zonegroup will always exist in a single-site scenario, and as such it was not able to recover.
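
For reference, the state the charm fails to detect can be probed with radosgw-admin, whose list subcommands emit JSON. A minimal sketch, assuming radosgw-admin is on the unit's PATH (the helper name is illustrative, not the charm's actual code):

    import json
    import subprocess

    def entity_names(kind):
        # 'radosgw-admin realm list' (likewise zonegroup/zone) prints JSON
        # with a pluralised key, e.g. {"default_info": "", "realms": []}.
        out = subprocess.check_output(['radosgw-admin', kind, 'list'])
        return json.loads(out).get(kind + 's', [])

    # The situation hit here: a single-site deploy where none of these exist.
    needs_init = not entity_names('realm') or not entity_names('zonegroup')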

It may be that we need to move the logic in the master-relation-joined hook into the regular mon-relation-changed hook so that the charm can deal with the cluster regardless of its state.
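
If that logic does move into mon-relation-changed, it will have to be idempotent, since the hook fires repeatedly. A hedged sketch of single-site recovery (the radosgw-admin subcommands are real; the helper and the "default" entity names are assumptions, not the charm's implementation):

    import subprocess

    def ensure_single_site_defaults():
        # Create a default realm, zonegroup and zone, then commit the period.
        # 'create' fails if the entity already exists, so a real implementation
        # would guard each call with the existence check sketched above.
        for cmd in (
            ['radosgw-admin', 'realm', 'create',
             '--rgw-realm=default', '--default'],
            ['radosgw-admin', 'zonegroup', 'create',
             '--rgw-zonegroup=default', '--master', '--default'],
            ['radosgw-admin', 'zone', 'create',
             '--rgw-zonegroup=default', '--rgw-zone=default',
             '--master', '--default'],
            ['radosgw-admin', 'period', 'update', '--commit'],
        ):
            subprocess.check_call(cmd)

Guarded that way, the hook could converge the cluster from the empty state described above instead of assuming the entities already exist.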

Frode Nordahl (fnordahl) wrote:
Unit log (truncated; the full text is 80.3 KiB):
2021-08-26 05:47:26 INFO juju unit_agent.go:253 Starting unit workers for "ceph-radosgw/0"
2021-08-26 05:47:26 INFO juju.worker.apicaller connect.go:158 [bb06d3] "unit-ceph-radosgw-0" successfully connected to "10.246.115.255:17070"
2021-08-26 05:47:26 INFO juju.worker.apicaller connect.go:255 [bb06d3] password changed for "unit-ceph-radosgw-0"
2021-08-26 05:47:27 INFO juju.worker.apicaller connect.go:158 [bb06d3] "unit-ceph-radosgw-0" successfully connected to "10.246.115.255:17070"
2021-08-26 05:47:27 INFO juju.worker.migrationminion worker.go:140 migration phase is now: NONE
2021-08-26 05:47:27 INFO juju.worker.logger logger.go:120 logger worker started
2021-08-26 05:47:27 INFO juju.worker.upgrader upgrader.go:219 no waiter, upgrader is done
2021-08-26 05:47:27 INFO juju.worker.meterstatus runner.go:89 skipped "meter-status-changed" hook (missing)
2021-08-26 05:47:27 INFO juju.worker.uniter uniter.go:328 unit "ceph-radosgw/0" started
2021-08-26 05:47:27 INFO juju.worker.uniter uniter.go:629 resuming charm install
2021-08-26 05:47:27 INFO juju.worker.uniter.charm bundles.go:79 downloading cs:~openstack-charmers-next/ceph-radosgw-407 from API server
2021-08-26 05:47:27 INFO juju.worker.uniter uniter.go:346 hooks are retried true
2021-08-26 05:47:27 INFO juju.worker.uniter.storage resolver.go:127 initial storage attachments ready
2021-08-26 05:47:27 INFO juju.worker.uniter resolver.go:148 found queued "install" hook
2021-08-26 05:47:38 INFO unit.ceph-radosgw/0.juju-log server.go:314 Installing python3-psutil with options: ['--option=Dpkg::Options::=--force-confold']
2021-08-26 05:47:42 INFO unit.ceph-radosgw/0.juju-log server.go:314 Registered config file: /etc/haproxy/haproxy.cfg
2021-08-26 05:47:42 INFO unit.ceph-radosgw/0.juju-log server.go:314 Registered config file: /etc/ceph/ceph.conf
2021-08-26 05:47:44 INFO unit.ceph-radosgw/0.juju-log server.go:314 Installing Apache
2021-08-26 05:47:44 INFO unit.ceph-radosgw/0.juju-log server.go:314 Installing ['apache2'] with options: ['--option=Dpkg::Options::=--force-confold']
2021-08-26 05:47:54 INFO unit.ceph-radosgw/0.juju-log server.go:314 Disabling unused Apache sites
2021-08-26 05:47:54 INFO unit.ceph-radosgw/0.juju-log server.go:314 Restarting Apache
2021-08-26 05:47:54 WARNING unit.ceph-radosgw/0.install logger.go:60 Job for apache2.service failed because the control process exited with error code.
2021-08-26 05:47:54 WARNING unit.ceph-radosgw/0.install logger.go:60 See "systemctl status apache2.service" and "journalctl -xe" for details.
2021-08-26 05:47:54 INFO unit.ceph-radosgw/0.juju-log server.go:314 Installing ['haproxy', 'radosgw', 'apache2'] with options: ['--option=Dpkg::Options::=--force-confold']
2021-08-26 05:48:12 WARNING unit.ceph-radosgw/0.juju-log server.go:314 Package libapache2-mod-fastcgi has no installation candidate.
2021-08-26 05:48:12 WARNING unit.ceph-radosgw/0.install logger.go:60 radosgw.service is not a native service, redirecting to systemd-sysv-install.
2021-08-26 05:48:12 WARNING unit.ceph-radosgw/0.install logger.go:60 Executing: /lib/systemd/systemd-sysv-install disable radosgw
2021-08-26 05:48:13 WARNING unit.ceph-radosgw/0.install logger.go:60 Cre...
