2020-03-21 14:50:19 |
Frode Nordahl |
bug |
|
|
added bug |
2020-03-21 14:50:43 |
Frode Nordahl |
summary |
Intermittent deploy failure |
[Ussuri] Intermittent deploy failure |
|
2020-03-21 15:09:30 |
Frode Nordahl |
description |
$ juju status ceph-radosgw --relations
Model Controller Cloud/Region Version SLA Timestamp
zaza-fc6306dea031 fnordahl-serverstack serverstack/serverstack 2.7.4 unsupported 14:41:04Z
App Version Status Scale Charm Store Rev OS Notes
ceph-radosgw 15.1.0 blocked 1 ceph-radosgw jujucharms 356 ubuntu
Unit Workload Agent Machine Public address Ports Message
ceph-radosgw/0* blocked idle 6 10.5.0.3 80/tcp Services not running that should be: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6
Machine State DNS Inst id Series AZ Message
6 started 10.5.0.3 4bb9dfd8-17ac-49a3-a322-8f790444ecd2 bionic nova ACTIVE
Relation provider Requirer Interface Type Message
ceph-mon:radosgw ceph-radosgw:mon ceph-radosgw regular
ceph-radosgw:cluster ceph-radosgw:cluster swift-ha peer
keystone:identity-service ceph-radosgw:identity-service keystone regular
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/haproxy/haproxy.cfg
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/ceph/ceph.conf
2020-03-21 11:26:55 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:26:55 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:26:57 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:00 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:00 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:01 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:01 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:03 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:03 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:04 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:04 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:04 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:06 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed ERROR: Site openstack_https_frontend does not exist!
2020-03-21 11:27:06 DEBUG identity-service-relation-changed apache2.service is not active, cannot reload.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed Job for apache2.service failed because the control process exited with error code.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed See "systemctl status apache2.service" and "journalctl -xe" for details.
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 INFO juju-log identity-service:37: Unit is ready
# systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:06 UTC; 3h 15min ago
Process: 6895 ExecReload=/usr/sbin/apachectl graceful (code=exited, status=0/SUCCESS)
Process: 14108 ExecStart=/usr/sbin/apachectl start (code=exited, status=1/FAILURE)
Main PID: 5564 (code=exited, status=1/FAILURE)
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Starting The Apache HTTP Server...
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: no listening sockets available, shutting down
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: AH00015: Unable to open logs
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: Action 'start' failed.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: The Apache error log may have more information.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Control process exited, code=exited status=1
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Failed with result 'exit-code'.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start The Apache HTTP Server.
# netstat -nepa |grep LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 0 38964 7263/haproxy
tcp 0 0 252.0.3.1:53 0.0.0.0:* LISTEN 0 23121 2374/dnsmasq
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 101 15544 611/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 18826 910/sshd
tcp 0 0 127.0.0.1:8888 0.0.0.0:* LISTEN 0 38962 7263/haproxy
tcp6 0 0 :::80 :::* LISTEN 0 38965 7263/haproxy
tcp6 0 0 :::22 :::* LISTEN 0 18837 910/sshd
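The netstat output above shows haproxy already bound to port 80 on all interfaces, which lines up with apache2's "no listening sockets available" failure. As a minimal sketch (a hypothetical helper, not part of the charm), the LISTEN lines can be parsed to map each port to its owning process:

```python
# Hypothetical helper (not part of the charm): map listening ports to the
# owning process by parsing `netstat -nepa` style LISTEN lines, to show
# why apache2 could not bind :80 (haproxy already owns it).
def listen_owners(netstat_lines):
    owners = {}
    for line in netstat_lines:
        fields = line.split()
        # tcp/tcp6 LISTEN lines have 9 fields; state is field 6.
        if len(fields) < 9 or fields[5] != "LISTEN":
            continue
        # Local address is field 4, e.g. "0.0.0.0:80" or ":::80".
        port = int(fields[3].rsplit(":", 1)[1])
        # Last field is "PID/Program name", e.g. "7263/haproxy";
        # setdefault keeps the first (IPv4) owner seen for a port.
        owners.setdefault(port, fields[-1].split("/", 1)[1])
    return owners
```

With the output above, `listen_owners(...)[80]` resolves to `haproxy`, not `apache2`.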
# systemctl status ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service
● ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service - Ceph rados gateway
Loaded: loaded (/lib/systemd/system/ceph-radosgw@.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:07 UTC; 3h 17min ago
Process: 14228 ExecStart=/usr/bin/radosgw -f --cluster ${CLUSTER} --name client.rgw.juju-ddb957-zaza-fc6306dea031-6 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
Main PID: 14228 (code=exited, status=1/FAILURE)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Stopped Ceph rados gateway.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start Ceph rados gateway.
# journalctl -b |grep radosgw
[ ... ]
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 AuthRegistry(0x5600fa991198) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 AuthRegistry(0x7fffa48ac2d0) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: failed to fetch mon config (--no-mon-config to skip)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'. |
The ceph-radosgw charm appears to never pick up the broker request response from ceph-mon:
2020-03-21 11:25:14 DEBUG juju-log mon:36: Request already sent but not complete, not sending new request
The response is only present on one of the unit-to-unit relations, which may or may not be expected:
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/0'
auth: cephx
ceph-public-address: 10.5.0.38
egress-subnets: 10.5.0.38/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.38
private-address: 10.5.0.38
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/1'
auth: cephx
broker-rsp-ceph-radosgw-0: '{"exit-code": 1, "stderr": "Unexpected error occurred
while processing requests: {''api-version'': 1, ''ops'': [{''op'': ''create-pool'',
''name'': ''default.rgw.buckets.data'', ''replicas'': 3, ''pg_num'': None, ''weight'':
20, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.control'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.data.root'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.gc'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.log'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.intent-log'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.meta'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.usage'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.keys'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.users.email'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.users.swift'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.uid'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.buckets.extra'', ''replicas'': 3, ''pg_num'': None, ''weight'':
1.0, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.buckets.index'',
''replicas'': 3, ''pg_num'': None, ''weight'': 3.0, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''.rgw.root'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}], ''request-id'': ''f41b0e16-6b65-11ea-a7e5-fa163e452a2c''}"}'
ceph-public-address: 10.5.0.18
egress-subnets: 10.5.0.18/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.18
private-address: 10.5.0.18
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/2'
auth: cephx
ceph-public-address: 10.5.0.4
egress-subnets: 10.5.0.4/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.4
private-address: 10.5.0.4
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g== |
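The charm holds an outstanding broker request and will not re-send until a matching response shows up ("Request already sent but not complete"). A minimal sketch (hypothetical names, not the charm's actual code) of the lookup that has to succeed: scan each ceph-mon unit's relation data for a response addressed to this unit and check whether it echoes the outstanding request-id.

```python
import json

# Hypothetical sketch (not the charm's actual code) of matching a broker
# response to an outstanding request across per-unit relation data.
def find_broker_response(units_data, local_unit, request_id):
    # Responses are keyed per requesting unit, e.g. "broker-rsp-ceph-radosgw-0".
    key = "broker-rsp-{}".format(local_unit.replace("/", "-"))
    for unit, data in sorted(units_data.items()):
        raw = data.get(key)
        if raw is None:
            continue  # most mon units never publish a response
        rsp = json.loads(raw)
        if rsp.get("request-id") == request_id:
            return unit, rsp
    return None, None
```

In the dumps above only ceph-mon/1 carries a `broker-rsp-ceph-radosgw-0` key, and its payload is an error (`exit-code: 1`) whose request-id appears only inside the stderr text, not as a top-level field, so a strict request-id match like the sketch's would stay unsatisfied indefinitely.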
|
2020-03-22 08:52:32 |
Frode Nordahl |
summary |
[Ussuri] Intermittent deploy failure |
Intermittent deploy failure |
|
2020-03-22 08:55:35 |
Frode Nordahl |
description |
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
The ceph-radosgw charm appears never to pick up the broker request response from ceph-mon:
2020-03-21 11:25:14 DEBUG juju-log mon:36: Request already sent but not complete, not sending new request
The response is only present on one of the unit-to-unit relations, but that may or may not be OK:
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/0'
auth: cephx
ceph-public-address: 10.5.0.38
egress-subnets: 10.5.0.38/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.38
private-address: 10.5.0.38
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/1'
auth: cephx
broker-rsp-ceph-radosgw-0: '{"exit-code": 1, "stderr": "Unexpected error occurred
while processing requests: {''api-version'': 1, ''ops'': [{''op'': ''create-pool'',
''name'': ''default.rgw.buckets.data'', ''replicas'': 3, ''pg_num'': None, ''weight'':
20, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.control'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.data.root'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.gc'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.log'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.intent-log'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.meta'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.usage'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.keys'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.users.email'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.users.swift'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.uid'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.buckets.extra'', ''replicas'': 3, ''pg_num'': None, ''weight'':
1.0, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.buckets.index'',
''replicas'': 3, ''pg_num'': None, ''weight'': 3.0, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''.rgw.root'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}], ''request-id'': ''f41b0e16-6b65-11ea-a7e5-fa163e452a2c''}"}'
ceph-public-address: 10.5.0.18
egress-subnets: 10.5.0.18/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.18
private-address: 10.5.0.18
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/2'
auth: cephx
ceph-public-address: 10.5.0.4
egress-subnets: 10.5.0.4/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.4
private-address: 10.5.0.4
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g== |
$ juju status ceph-radosgw --relations
Model Controller Cloud/Region Version SLA Timestamp
zaza-fc6306dea031 fnordahl-serverstack serverstack/serverstack 2.7.4 unsupported 14:41:04Z
App Version Status Scale Charm Store Rev OS Notes
ceph-radosgw 15.1.0 blocked 1 ceph-radosgw jujucharms 356 ubuntu
Unit Workload Agent Machine Public address Ports Message
ceph-radosgw/0* blocked idle 6 10.5.0.3 80/tcp Services not running that should be: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6
Machine State DNS Inst id Series AZ Message
6 started 10.5.0.3 4bb9dfd8-17ac-49a3-a322-8f790444ecd2 bionic nova ACTIVE
Relation provider Requirer Interface Type Message
ceph-mon:radosgw ceph-radosgw:mon ceph-radosgw regular
ceph-radosgw:cluster ceph-radosgw:cluster swift-ha peer
keystone:identity-service ceph-radosgw:identity-service keystone regular
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/haproxy/haproxy.cfg
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/ceph/ceph.conf
2020-03-21 11:26:55 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:26:55 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:26:57 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:00 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:00 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:01 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:01 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:03 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:03 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:04 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:04 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:04 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:06 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed ERROR: Site openstack_https_frontend does not exist!
2020-03-21 11:27:06 DEBUG identity-service-relation-changed apache2.service is not active, cannot reload.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed Job for apache2.service failed because the control process exited with error code.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed See "systemctl status apache2.service" and "journalctl -xe" for details.
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 INFO juju-log identity-service:37: Unit is ready
# systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:06 UTC; 3h 15min ago
Process: 6895 ExecReload=/usr/sbin/apachectl graceful (code=exited, status=0/SUCCESS)
Process: 14108 ExecStart=/usr/sbin/apachectl start (code=exited, status=1/FAILURE)
Main PID: 5564 (code=exited, status=1/FAILURE)
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Starting The Apache HTTP Server...
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: no listening sockets available, shutting down
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: AH00015: Unable to open logs
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: Action 'start' failed.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: The Apache error log may have more information.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Control process exited, code=exited status=1
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Failed with result 'exit-code'.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start The Apache HTTP Server.
# netstat -nepa |grep LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 0 38964 7263/haproxy
tcp 0 0 252.0.3.1:53 0.0.0.0:* LISTEN 0 23121 2374/dnsmasq
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 101 15544 611/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 18826 910/sshd
tcp 0 0 127.0.0.1:8888 0.0.0.0:* LISTEN 0 38962 7263/haproxy
tcp6 0 0 :::80 :::* LISTEN 0 38965 7263/haproxy
tcp6 0 0 :::22 :::* LISTEN 0 18837 910/sshd
# systemctl status ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service
● ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service - Ceph rados gateway
Loaded: loaded (/lib/systemd/system/ceph-radosgw@.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:07 UTC; 3h 17min ago
Process: 14228 ExecStart=/usr/bin/radosgw -f --cluster ${CLUSTER} --name client.rgw.juju-ddb957-zaza-fc6306dea031-6 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
Main PID: 14228 (code=exited, status=1/FAILURE)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Stopped Ceph rados gateway.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start Ceph rados gateway.
# journalctl -b |grep radosgw
[ ... ]
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 AuthRegistry(0x5600fa991198) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 AuthRegistry(0x7fffa48ac2d0) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: failed to fetch mon config (--no-mon-config to skip)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
The ceph-radosgw charm appears never to pick up the broker request response from ceph-mon:
2020-03-21 11:25:14 DEBUG juju-log mon:36: Request already sent but not complete, not sending new request
The response is only present on one of the unit-to-unit relations, but that may or may not be OK:
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/0'
auth: cephx
ceph-public-address: 10.5.0.38
egress-subnets: 10.5.0.38/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.38
private-address: 10.5.0.38
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/1'
auth: cephx
broker-rsp-ceph-radosgw-0: '{"exit-code": 1, "stderr": "Unexpected error occurred
while processing requests: {''api-version'': 1, ''ops'': [{''op'': ''create-pool'',
''name'': ''default.rgw.buckets.data'', ''replicas'': 3, ''pg_num'': None, ''weight'':
20, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.control'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.data.root'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.gc'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.log'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.intent-log'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.meta'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.usage'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.keys'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.users.email'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.users.swift'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.uid'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.buckets.extra'', ''replicas'': 3, ''pg_num'': None, ''weight'':
1.0, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.buckets.index'',
''replicas'': 3, ''pg_num'': None, ''weight'': 3.0, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''.rgw.root'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}], ''request-id'': ''f41b0e16-6b65-11ea-a7e5-fa163e452a2c''}"}'
ceph-public-address: 10.5.0.18
egress-subnets: 10.5.0.18/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.18
private-address: 10.5.0.18
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/2'
auth: cephx
ceph-public-address: 10.5.0.4
egress-subnets: 10.5.0.4/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.4
private-address: 10.5.0.4
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
The charm-ceph-radosgw side processing of the mon relation is gated on the presence of a completed broker request, and the response will only be present on one of the unit relations, so I suggest we make the client process all unit relations on each run of the mon-relation-changed hook. |
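The suggested fix could be sketched roughly as below. This is a simplified, hypothetical illustration, not the charm's actual code: relation data is modelled as a plain dict standing in for repeated `relation-get` calls, and the `broker-rsp-<unit>` key naming follows the relation output shown above.

```python
import json


def find_broker_response(unit_name, relation_data):
    """Return the parsed broker response for unit_name, or None.

    Scans every remote unit's relation data rather than gating on a
    single unit, mirroring the proposal to process all unit relations
    on each mon-relation-changed run.

    relation_data maps remote unit name -> dict of relation settings
    (a stand-in for relation-get calls in a real hook).  The response
    key is namespaced per requesting unit, e.g.
    'broker-rsp-ceph-radosgw-0' for ceph-radosgw/0.
    """
    rsp_key = 'broker-rsp-{}'.format(unit_name.replace('/', '-'))
    for unit, settings in relation_data.items():
        rsp = settings.get(rsp_key)
        if rsp is not None:
            return json.loads(rsp)
    return None


# Only ceph-mon/1 carries the response, as in the relation-get
# output above; the other mon units must still be scanned.
data = {
    'ceph-mon/0': {'auth': 'cephx'},
    'ceph-mon/1': {'auth': 'cephx',
                   'broker-rsp-ceph-radosgw-0': '{"exit-code": 1}'},
    'ceph-mon/2': {'auth': 'cephx'},
}
print(find_broker_response('ceph-radosgw/0', data))
```

With this approach the charm would find the response regardless of which mon unit happened to service the broker request.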
|
2020-03-22 08:55:41 |
Frode Nordahl |
charm-ceph-radosgw: status |
New |
Triaged |
|
2020-03-22 08:55:44 |
Frode Nordahl |
charm-ceph-radosgw: importance |
Undecided |
High |
|
2020-03-23 06:59:44 |
Frode Nordahl |
summary |
Intermittent deploy failure |
Intermittent deploy failure with certificates relation |
|
2020-03-23 10:04:26 |
Frode Nordahl |
charm-ceph-radosgw: assignee |
|
Frode Nordahl (fnordahl) |
|
2020-03-23 10:04:30 |
Frode Nordahl |
charm-ceph-radosgw: milestone |
|
20.05 |
|
2020-03-23 10:17:13 |
Frode Nordahl |
description |
$ juju status ceph-radosgw --relations
Model Controller Cloud/Region Version SLA Timestamp
zaza-fc6306dea031 fnordahl-serverstack serverstack/serverstack 2.7.4 unsupported 14:41:04Z
App Version Status Scale Charm Store Rev OS Notes
ceph-radosgw 15.1.0 blocked 1 ceph-radosgw jujucharms 356 ubuntu
Unit Workload Agent Machine Public address Ports Message
ceph-radosgw/0* blocked idle 6 10.5.0.3 80/tcp Services not running that should be: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6
Machine State DNS Inst id Series AZ Message
6 started 10.5.0.3 4bb9dfd8-17ac-49a3-a322-8f790444ecd2 bionic nova ACTIVE
Relation provider Requirer Interface Type Message
ceph-mon:radosgw ceph-radosgw:mon ceph-radosgw regular
ceph-radosgw:cluster ceph-radosgw:cluster swift-ha peer
keystone:identity-service ceph-radosgw:identity-service keystone regular
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/haproxy/haproxy.cfg
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/ceph/ceph.conf
2020-03-21 11:26:55 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:26:55 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:26:57 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:00 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:00 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:01 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:01 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:03 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:03 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:04 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:04 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:04 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:06 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed ERROR: Site openstack_https_frontend does not exist!
2020-03-21 11:27:06 DEBUG identity-service-relation-changed apache2.service is not active, cannot reload.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed Job for apache2.service failed because the control process exited with error code.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed See "systemctl status apache2.service" and "journalctl -xe" for details.
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 INFO juju-log identity-service:37: Unit is ready
# systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:06 UTC; 3h 15min ago
Process: 6895 ExecReload=/usr/sbin/apachectl graceful (code=exited, status=0/SUCCESS)
Process: 14108 ExecStart=/usr/sbin/apachectl start (code=exited, status=1/FAILURE)
Main PID: 5564 (code=exited, status=1/FAILURE)
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Starting The Apache HTTP Server...
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: no listening sockets available, shutting down
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: AH00015: Unable to open logs
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: Action 'start' failed.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: The Apache error log may have more information.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Control process exited, code=exited status=1
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Failed with result 'exit-code'.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start The Apache HTTP Server.
# netstat -nepa |grep LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 0 38964 7263/haproxy
tcp 0 0 252.0.3.1:53 0.0.0.0:* LISTEN 0 23121 2374/dnsmasq
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 101 15544 611/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 18826 910/sshd
tcp 0 0 127.0.0.1:8888 0.0.0.0:* LISTEN 0 38962 7263/haproxy
tcp6 0 0 :::80 :::* LISTEN 0 38965 7263/haproxy
tcp6 0 0 :::22 :::* LISTEN 0 18837 910/sshd
# systemctl status ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service
● ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service - Ceph rados gateway
Loaded: loaded (/lib/systemd/system/ceph-radosgw@.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:07 UTC; 3h 17min ago
Process: 14228 ExecStart=/usr/bin/radosgw -f --cluster ${CLUSTER} --name client.rgw.juju-ddb957-zaza-fc6306dea031-6 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
Main PID: 14228 (code=exited, status=1/FAILURE)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Stopped Ceph rados gateway.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start Ceph rados gateway.
# journalctl -b |grep radosgw
[ ... ]
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 AuthRegistry(0x5600fa991198) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 AuthRegistry(0x7fffa48ac2d0) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: failed to fetch mon config (--no-mon-config to skip)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
The ceph-radosgw charm appears never to pick up the broker request response from ceph-mon:
2020-03-21 11:25:14 DEBUG juju-log mon:36: Request already sent but not complete, not sending new request
The response is present on only one of the unit-to-unit relations, which may or may not be expected:
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/0'
auth: cephx
ceph-public-address: 10.5.0.38
egress-subnets: 10.5.0.38/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.38
private-address: 10.5.0.38
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/1'
auth: cephx
broker-rsp-ceph-radosgw-0: '{"exit-code": 1, "stderr": "Unexpected error occurred
while processing requests: {''api-version'': 1, ''ops'': [{''op'': ''create-pool'',
''name'': ''default.rgw.buckets.data'', ''replicas'': 3, ''pg_num'': None, ''weight'':
20, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.control'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.data.root'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.gc'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.log'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.intent-log'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.meta'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.usage'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.keys'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.users.email'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.users.swift'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.uid'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.buckets.extra'', ''replicas'': 3, ''pg_num'': None, ''weight'':
1.0, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.buckets.index'',
''replicas'': 3, ''pg_num'': None, ''weight'': 3.0, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''.rgw.root'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}], ''request-id'': ''f41b0e16-6b65-11ea-a7e5-fa163e452a2c''}"}'
ceph-public-address: 10.5.0.18
egress-subnets: 10.5.0.18/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.18
private-address: 10.5.0.18
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/2'
auth: cephx
ceph-public-address: 10.5.0.4
egress-subnets: 10.5.0.4/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.4
private-address: 10.5.0.4
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
Processing of the mon relation on the charm-ceph-radosgw side is gated on the presence of a completed broker request, and the response will only be complete on one of the unit relations, so I suggest making the client process all unit relations on each run of the mon-relation-changed hook. |
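As a sketch of that suggestion: instead of checking only the remote unit that triggered the hook, scan every ceph-mon unit on the relation for a response addressed to this unit. The helper and data layout below are illustrative stand-ins (the real charm goes through juju's relation-get and charmhelpers' broker code), not the charm's actual API:

```python
import json

def find_broker_response(relation_data, local_unit="ceph-radosgw/0"):
    """Return the first broker response addressed to local_unit, or None.

    relation_data maps remote unit name -> that unit's relation settings,
    i.e. what 'relation-get -r mon:36 - <unit>' would return.
    """
    # ceph-mon publishes the reply under a key derived from the requester's
    # unit name, e.g. 'broker-rsp-ceph-radosgw-0'.
    key = "broker-rsp-{}".format(local_unit.replace("/", "-"))
    for unit, settings in relation_data.items():
        rsp = settings.get(key)
        if rsp is not None:
            return json.loads(rsp)
    return None

# Only ceph-mon/1 carries the response, mirroring the relation data above.
data = {
    "ceph-mon/0": {"auth": "cephx"},
    "ceph-mon/1": {"auth": "cephx",
                   "broker-rsp-ceph-radosgw-0": '{"exit-code": 1}'},
    "ceph-mon/2": {"auth": "cephx"},
}
rsp = find_broker_response(data)
```

Run on each mon-relation-changed, this finds the response regardless of which ceph-mon unit's change fired the hook.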
$ juju status ceph-radosgw --relations
Model Controller Cloud/Region Version SLA Timestamp
zaza-fc6306dea031 fnordahl-serverstack serverstack/serverstack 2.7.4 unsupported 14:41:04Z
App Version Status Scale Charm Store Rev OS Notes
ceph-radosgw 15.1.0 blocked 1 ceph-radosgw jujucharms 356 ubuntu
Unit Workload Agent Machine Public address Ports Message
ceph-radosgw/0* blocked idle 6 10.5.0.3 80/tcp Services not running that should be: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6
Machine State DNS Inst id Series AZ Message
6 started 10.5.0.3 4bb9dfd8-17ac-49a3-a322-8f790444ecd2 bionic nova ACTIVE
Relation provider Requirer Interface Type Message
ceph-mon:radosgw ceph-radosgw:mon ceph-radosgw regular
ceph-radosgw:cluster ceph-radosgw:cluster swift-ha peer
keystone:identity-service ceph-radosgw:identity-service keystone regular
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/haproxy/haproxy.cfg
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/ceph/ceph.conf
2020-03-21 11:26:55 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:26:55 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:26:57 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:00 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:00 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:01 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:01 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:03 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:03 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:04 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:04 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:04 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:06 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed ERROR: Site openstack_https_frontend does not exist!
2020-03-21 11:27:06 DEBUG identity-service-relation-changed apache2.service is not active, cannot reload.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed Job for apache2.service failed because the control process exited with error code.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed See "systemctl status apache2.service" and "journalctl -xe" for details.
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 INFO juju-log identity-service:37: Unit is ready
# systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:06 UTC; 3h 15min ago
Process: 6895 ExecReload=/usr/sbin/apachectl graceful (code=exited, status=0/SUCCESS)
Process: 14108 ExecStart=/usr/sbin/apachectl start (code=exited, status=1/FAILURE)
Main PID: 5564 (code=exited, status=1/FAILURE)
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Starting The Apache HTTP Server...
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: no listening sockets available, shutting down
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: AH00015: Unable to open logs
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: Action 'start' failed.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: The Apache error log may have more information.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Control process exited, code=exited status=1
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Failed with result 'exit-code'.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start The Apache HTTP Server.
# netstat -nepa |grep LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 0 38964 7263/haproxy
tcp 0 0 252.0.3.1:53 0.0.0.0:* LISTEN 0 23121 2374/dnsmasq
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 101 15544 611/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 18826 910/sshd
tcp 0 0 127.0.0.1:8888 0.0.0.0:* LISTEN 0 38962 7263/haproxy
tcp6 0 0 :::80 :::* LISTEN 0 38965 7263/haproxy
tcp6 0 0 :::22 :::* LISTEN 0 18837 910/sshd
# systemctl status ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service
● ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service - Ceph rados gateway
Loaded: loaded (/lib/systemd/system/ceph-radosgw@.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:07 UTC; 3h 17min ago
Process: 14228 ExecStart=/usr/bin/radosgw -f --cluster ${CLUSTER} --name client.rgw.juju-ddb957-zaza-fc6306dea031-6 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
Main PID: 14228 (code=exited, status=1/FAILURE)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Stopped Ceph rados gateway.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start Ceph rados gateway.
# journalctl -b |grep radosgw
[ ... ]
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 AuthRegistry(0x5600fa991198) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 AuthRegistry(0x7fffa48ac2d0) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: failed to fetch mon config (--no-mon-config to skip)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
The ceph-radosgw charm appears never to pick up the broker request response from ceph-mon:
2020-03-21 11:25:14 DEBUG juju-log mon:36: Request already sent but not complete, not sending new request
The response is present on only one of the unit-to-unit relations, which may or may not be expected:
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/0'
auth: cephx
ceph-public-address: 10.5.0.38
egress-subnets: 10.5.0.38/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.38
private-address: 10.5.0.38
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/1'
auth: cephx
broker-rsp-ceph-radosgw-0: '{"exit-code": 1, "stderr": "Unexpected error occurred
while processing requests: {''api-version'': 1, ''ops'': [{''op'': ''create-pool'',
''name'': ''default.rgw.buckets.data'', ''replicas'': 3, ''pg_num'': None, ''weight'':
20, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.control'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.data.root'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.gc'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.log'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.intent-log'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.meta'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.usage'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.keys'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.users.email'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.users.swift'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.uid'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.buckets.extra'', ''replicas'': 3, ''pg_num'': None, ''weight'':
1.0, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.buckets.index'',
''replicas'': 3, ''pg_num'': None, ''weight'': 3.0, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''.rgw.root'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}], ''request-id'': ''f41b0e16-6b65-11ea-a7e5-fa163e452a2c''}"}'
ceph-public-address: 10.5.0.18
egress-subnets: 10.5.0.18/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.18
private-address: 10.5.0.18
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/2'
auth: cephx
ceph-public-address: 10.5.0.4
egress-subnets: 10.5.0.4/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.4
private-address: 10.5.0.4
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g== |
|
2020-03-23 10:29:11 |
Frode Nordahl |
summary |
Intermittent deploy failure with certificates relation |
Intermittent deploy failure |
|
2020-03-23 11:29:41 |
Frode Nordahl |
description |
$ juju status ceph-radosgw --relations
Model Controller Cloud/Region Version SLA Timestamp
zaza-fc6306dea031 fnordahl-serverstack serverstack/serverstack 2.7.4 unsupported 14:41:04Z
App Version Status Scale Charm Store Rev OS Notes
ceph-radosgw 15.1.0 blocked 1 ceph-radosgw jujucharms 356 ubuntu
Unit Workload Agent Machine Public address Ports Message
ceph-radosgw/0* blocked idle 6 10.5.0.3 80/tcp Services not running that should be: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6
Machine State DNS Inst id Series AZ Message
6 started 10.5.0.3 4bb9dfd8-17ac-49a3-a322-8f790444ecd2 bionic nova ACTIVE
Relation provider Requirer Interface Type Message
ceph-mon:radosgw ceph-radosgw:mon ceph-radosgw regular
ceph-radosgw:cluster ceph-radosgw:cluster swift-ha peer
keystone:identity-service ceph-radosgw:identity-service keystone regular
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/haproxy/haproxy.cfg
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/ceph/ceph.conf
2020-03-21 11:26:55 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:26:55 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:26:57 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:00 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:00 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:01 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:01 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:03 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:03 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:04 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:04 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:04 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:06 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed ERROR: Site openstack_https_frontend does not exist!
2020-03-21 11:27:06 DEBUG identity-service-relation-changed apache2.service is not active, cannot reload.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed Job for apache2.service failed because the control process exited with error code.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed See "systemctl status apache2.service" and "journalctl -xe" for details.
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 INFO juju-log identity-service:37: Unit is ready
# systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:06 UTC; 3h 15min ago
Process: 6895 ExecReload=/usr/sbin/apachectl graceful (code=exited, status=0/SUCCESS)
Process: 14108 ExecStart=/usr/sbin/apachectl start (code=exited, status=1/FAILURE)
Main PID: 5564 (code=exited, status=1/FAILURE)
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Starting The Apache HTTP Server...
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: no listening sockets available, shutting down
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: AH00015: Unable to open logs
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: Action 'start' failed.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: The Apache error log may have more information.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Control process exited, code=exited status=1
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Failed with result 'exit-code'.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start The Apache HTTP Server.
# netstat -nepa |grep LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 0 38964 7263/haproxy
tcp 0 0 252.0.3.1:53 0.0.0.0:* LISTEN 0 23121 2374/dnsmasq
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 101 15544 611/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 18826 910/sshd
tcp 0 0 127.0.0.1:8888 0.0.0.0:* LISTEN 0 38962 7263/haproxy
tcp6 0 0 :::80 :::* LISTEN 0 38965 7263/haproxy
tcp6 0 0 :::22 :::* LISTEN 0 18837 910/sshd
# systemctl status ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service
● ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service - Ceph rados gateway
Loaded: loaded (/lib/systemd/system/ceph-radosgw@.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:07 UTC; 3h 17min ago
Process: 14228 ExecStart=/usr/bin/radosgw -f --cluster ${CLUSTER} --name client.rgw.juju-ddb957-zaza-fc6306dea031-6 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
Main PID: 14228 (code=exited, status=1/FAILURE)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Stopped Ceph rados gateway.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start Ceph rados gateway.
# journalctl -b |grep radosgw
[ ... ]
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 AuthRegistry(0x5600fa991198) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 AuthRegistry(0x7fffa48ac2d0) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: failed to fetch mon config (--no-mon-config to skip)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
The ceph-radosgw charm appears never to pick up the broker request response from ceph-mon:
2020-03-21 11:25:14 DEBUG juju-log mon:36: Request already sent but not complete, not sending new request
The response is present on only one of the unit-to-unit relations, which may or may not be expected:
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/0'
auth: cephx
ceph-public-address: 10.5.0.38
egress-subnets: 10.5.0.38/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.38
private-address: 10.5.0.38
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/1'
auth: cephx
broker-rsp-ceph-radosgw-0: '{"exit-code": 1, "stderr": "Unexpected error occurred
while processing requests: {''api-version'': 1, ''ops'': [{''op'': ''create-pool'',
''name'': ''default.rgw.buckets.data'', ''replicas'': 3, ''pg_num'': None, ''weight'':
20, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.control'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.data.root'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.gc'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.log'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.intent-log'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.meta'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.usage'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.keys'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.users.email'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.users.swift'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.uid'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.buckets.extra'', ''replicas'': 3, ''pg_num'': None, ''weight'':
1.0, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.buckets.index'',
''replicas'': 3, ''pg_num'': None, ''weight'': 3.0, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''.rgw.root'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}], ''request-id'': ''f41b0e16-6b65-11ea-a7e5-fa163e452a2c''}"}'
ceph-public-address: 10.5.0.18
egress-subnets: 10.5.0.18/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.18
private-address: 10.5.0.18
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/2'
auth: cephx
ceph-public-address: 10.5.0.4
egress-subnets: 10.5.0.4/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.4
private-address: 10.5.0.4
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g== |
$ juju status ceph-radosgw --relations
Model Controller Cloud/Region Version SLA Timestamp
zaza-fc6306dea031 fnordahl-serverstack serverstack/serverstack 2.7.4 unsupported 14:41:04Z
App Version Status Scale Charm Store Rev OS Notes
ceph-radosgw 15.1.0 blocked 1 ceph-radosgw jujucharms 356 ubuntu
Unit Workload Agent Machine Public address Ports Message
ceph-radosgw/0* blocked idle 6 10.5.0.3 80/tcp Services not running that should be: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6
Machine State DNS Inst id Series AZ Message
6 started 10.5.0.3 4bb9dfd8-17ac-49a3-a322-8f790444ecd2 bionic nova ACTIVE
Relation provider Requirer Interface Type Message
ceph-mon:radosgw ceph-radosgw:mon ceph-radosgw regular
ceph-radosgw:cluster ceph-radosgw:cluster swift-ha peer
keystone:identity-service ceph-radosgw:identity-service keystone regular
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/haproxy/haproxy.cfg
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/ceph/ceph.conf
2020-03-21 11:26:55 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:26:55 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:26:57 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:00 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:00 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:01 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:01 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:03 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:03 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:04 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:04 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:04 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:06 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed ERROR: Site openstack_https_frontend does not exist!
2020-03-21 11:27:06 DEBUG identity-service-relation-changed apache2.service is not active, cannot reload.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed Job for apache2.service failed because the control process exited with error code.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed See "systemctl status apache2.service" and "journalctl -xe" for details.
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 INFO juju-log identity-service:37: Unit is ready
# systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:06 UTC; 3h 15min ago
Process: 6895 ExecReload=/usr/sbin/apachectl graceful (code=exited, status=0/SUCCESS)
Process: 14108 ExecStart=/usr/sbin/apachectl start (code=exited, status=1/FAILURE)
Main PID: 5564 (code=exited, status=1/FAILURE)
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Starting The Apache HTTP Server...
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: no listening sockets available, shutting down
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: AH00015: Unable to open logs
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: Action 'start' failed.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: The Apache error log may have more information.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Control process exited, code=exited status=1
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Failed with result 'exit-code'.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start The Apache HTTP Server.
# netstat -nepa |grep LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 0 38964 7263/haproxy
tcp 0 0 252.0.3.1:53 0.0.0.0:* LISTEN 0 23121 2374/dnsmasq
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 101 15544 611/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 18826 910/sshd
tcp 0 0 127.0.0.1:8888 0.0.0.0:* LISTEN 0 38962 7263/haproxy
tcp6 0 0 :::80 :::* LISTEN 0 38965 7263/haproxy
tcp6 0 0 :::22 :::* LISTEN 0 18837 910/sshd
# systemctl status ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service
● ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service - Ceph rados gateway
Loaded: loaded (/lib/systemd/system/ceph-radosgw@.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:07 UTC; 3h 17min ago
Process: 14228 ExecStart=/usr/bin/radosgw -f --cluster ${CLUSTER} --name client.rgw.juju-ddb957-zaza-fc6306dea031-6 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
Main PID: 14228 (code=exited, status=1/FAILURE)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Stopped Ceph rados gateway.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start Ceph rados gateway.
# journalctl -b |grep radosgw
[ ... ]
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 AuthRegistry(0x5600fa991198) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 AuthRegistry(0x7fffa48ac2d0) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: failed to fetch mon config (--no-mon-config to skip)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
The ceph-radosgw charm appears to never pick up the broker request response from ceph-mon:
2020-03-21 11:25:14 DEBUG juju-log mon:36: Request already sent but not complete, not sending new request
The response is only present on one of the unit-to-unit relations, but that may or may not be OK:
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/0'
auth: cephx
ceph-public-address: 10.5.0.38
egress-subnets: 10.5.0.38/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.38
private-address: 10.5.0.38
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/1'
auth: cephx
broker-rsp-ceph-radosgw-0: '{"exit-code": 1, "stderr": "Unexpected error occurred
while processing requests: {''api-version'': 1, ''ops'': [{''op'': ''create-pool'',
''name'': ''default.rgw.buckets.data'', ''replicas'': 3, ''pg_num'': None, ''weight'':
20, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.control'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.data.root'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.gc'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.log'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.intent-log'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.meta'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.usage'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.keys'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.users.email'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.users.swift'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.uid'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.buckets.extra'', ''replicas'': 3, ''pg_num'': None, ''weight'':
1.0, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.buckets.index'',
''replicas'': 3, ''pg_num'': None, ''weight'': 3.0, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''.rgw.root'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}], ''request-id'': ''f41b0e16-6b65-11ea-a7e5-fa163e452a2c''}"}'
ceph-public-address: 10.5.0.18
egress-subnets: 10.5.0.18/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.18
private-address: 10.5.0.18
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/2'
auth: cephx
ceph-public-address: 10.5.0.4
egress-subnets: 10.5.0.4/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.4
private-address: 10.5.0.4
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
Note that the broker-rsp-ceph-radosgw-0 response ('{"exit-code": 1, "stderr": "Unexpected error occurred while processing requests: ...') is caused by the number of PGs requested being too large to fit on the OSD topology in the test; there appears to be a new check for this in Octopus.
In addition to fixing the TLS-specific Apache errors during deployment, we may also need to adjust the PG calculation: either what the radosgw charm requests, or how ceph-mon turns that request into the resulting pool creation in Ceph. |
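For context on why a small test topology trips such a check: weight-based PG sizing roughly follows the classic placement-group formula, so a pool's pg_num shrinks with the OSD count. The sketch below is an illustrative approximation only — the function name, the target of 100 PGs per OSD, and the power-of-two rounding are assumptions, not the charm's or ceph-mon's actual code:

```python
def estimate_pg_num(num_osds, weight_percent, replicas=3,
                    target_pgs_per_osd=100):
    """Rough weight-based PG estimate, rounded down to a power of two.

    Illustrative approximation of the kind of calculation behind
    'create-pool' broker requests; NOT the actual charm implementation.
    """
    raw = (num_osds * target_pgs_per_osd * (weight_percent / 100.0)) / replicas
    # Round down to the nearest power of two, with a floor of 2 PGs.
    pg_num = 2
    while pg_num * 2 <= raw:
        pg_num *= 2
    return pg_num

# The 20%-weight default.rgw.buckets.data pool from the broker request:
print(estimate_pg_num(num_osds=3, weight_percent=20))    # → 16 on a 3-OSD test topology
print(estimate_pg_num(num_osds=100, weight_percent=20))  # → 512 on a larger cluster
```

With many pools requested at once (the broker request above creates 15), even modest per-pool pg_num values can exceed the per-OSD PG limit on a three-OSD test cloud, which would explain Octopus rejecting the request.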
|
2020-04-14 06:01:31 |
Frode Nordahl |
description |
$ juju status ceph-radosgw --relations
Model Controller Cloud/Region Version SLA Timestamp
zaza-fc6306dea031 fnordahl-serverstack serverstack/serverstack 2.7.4 unsupported 14:41:04Z
App Version Status Scale Charm Store Rev OS Notes
ceph-radosgw 15.1.0 blocked 1 ceph-radosgw jujucharms 356 ubuntu
Unit Workload Agent Machine Public address Ports Message
ceph-radosgw/0* blocked idle 6 10.5.0.3 80/tcp Services not running that should be: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6
Machine State DNS Inst id Series AZ Message
6 started 10.5.0.3 4bb9dfd8-17ac-49a3-a322-8f790444ecd2 bionic nova ACTIVE
Relation provider Requirer Interface Type Message
ceph-mon:radosgw ceph-radosgw:mon ceph-radosgw regular
ceph-radosgw:cluster ceph-radosgw:cluster swift-ha peer
keystone:identity-service ceph-radosgw:identity-service keystone regular
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/haproxy/haproxy.cfg
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/ceph/ceph.conf
2020-03-21 11:26:55 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:26:55 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:26:57 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:00 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:00 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:01 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:01 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:03 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:03 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:04 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:04 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:04 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:06 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed ERROR: Site openstack_https_frontend does not exist!
2020-03-21 11:27:06 DEBUG identity-service-relation-changed apache2.service is not active, cannot reload.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed Job for apache2.service failed because the control process exited with error code.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed See "systemctl status apache2.service" and "journalctl -xe" for details.
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 INFO juju-log identity-service:37: Unit is ready
# systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:06 UTC; 3h 15min ago
Process: 6895 ExecReload=/usr/sbin/apachectl graceful (code=exited, status=0/SUCCESS)
Process: 14108 ExecStart=/usr/sbin/apachectl start (code=exited, status=1/FAILURE)
Main PID: 5564 (code=exited, status=1/FAILURE)
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Starting The Apache HTTP Server...
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: no listening sockets available, shutting down
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: AH00015: Unable to open logs
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: Action 'start' failed.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: The Apache error log may have more information.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Control process exited, code=exited status=1
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Failed with result 'exit-code'.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start The Apache HTTP Server.
# netstat -nepa |grep LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 0 38964 7263/haproxy
tcp 0 0 252.0.3.1:53 0.0.0.0:* LISTEN 0 23121 2374/dnsmasq
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 101 15544 611/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 18826 910/sshd
tcp 0 0 127.0.0.1:8888 0.0.0.0:* LISTEN 0 38962 7263/haproxy
tcp6 0 0 :::80 :::* LISTEN 0 38965 7263/haproxy
tcp6 0 0 :::22 :::* LISTEN 0 18837 910/sshd
# systemctl status ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service
● ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service - Ceph rados gateway
Loaded: loaded (/lib/systemd/system/ceph-radosgw@.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:07 UTC; 3h 17min ago
Process: 14228 ExecStart=/usr/bin/radosgw -f --cluster ${CLUSTER} --name client.rgw.juju-ddb957-zaza-fc6306dea031-6 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
Main PID: 14228 (code=exited, status=1/FAILURE)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Stopped Ceph rados gateway.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start Ceph rados gateway.
# journalctl -b |grep radosgw
[ ... ]
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 AuthRegistry(0x5600fa991198) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 AuthRegistry(0x7fffa48ac2d0) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: failed to fetch mon config (--no-mon-config to skip)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
The ceph-radosgw charm appear to never pick up the broker request response from ceph-mon:
2020-03-21 11:25:14 DEBUG juju-log mon:36: Request already sent but not complete, not sending new request
The response is only present on one of the unit to unit relations, but that may or may not be ok:
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/0'
auth: cephx
ceph-public-address: 10.5.0.38
egress-subnets: 10.5.0.38/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.38
private-address: 10.5.0.38
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/1'
auth: cephx
broker-rsp-ceph-radosgw-0: '{"exit-code": 1, "stderr": "Unexpected error occurred
while processing requests: {''api-version'': 1, ''ops'': [{''op'': ''create-pool'',
''name'': ''default.rgw.buckets.data'', ''replicas'': 3, ''pg_num'': None, ''weight'':
20, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.control'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.data.root'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.gc'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.log'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.intent-log'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.meta'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.usage'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.keys'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.users.email'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.users.swift'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.uid'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.buckets.extra'', ''replicas'': 3, ''pg_num'': None, ''weight'':
1.0, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.buckets.index'',
''replicas'': 3, ''pg_num'': None, ''weight'': 3.0, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''.rgw.root'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}], ''request-id'': ''f41b0e16-6b65-11ea-a7e5-fa163e452a2c''}"}'
ceph-public-address: 10.5.0.18
egress-subnets: 10.5.0.18/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.18
private-address: 10.5.0.18
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/2'
auth: cephx
ceph-public-address: 10.5.0.4
egress-subnets: 10.5.0.4/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.4
private-address: 10.5.0.4
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
Note that the 'broker-rsp-ceph-radosgw-0: '{"exit-code": 1, "stderr": "Unexpected error occurred
while processing requests:' is caused by the number of PGs requested being too large to fit on the OSD topology in the test, and there appears to be a new check for this in Octopus.
It may be that, in addition to fixing the TLS-specific errors from Apache during deployment, we need to adjust the PG calculation: either change what the radosgw charm requests, or how ceph-mon translates that request into the resulting call to Ceph. |
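The per-pool weight values visible in the broker request above (20 for default.rgw.buckets.data, 3.0 for the bucket index, 0.1 for most metadata pools) drive the PG calculation in question. A minimal sketch of the common Ceph sizing heuristic — roughly 100 PGs per OSD, scaled by the pool's weight and divided by the replica count, rounded up to a power of two — just as an illustration of how a small OSD topology can be overshot; this is not the charm's exact code:

```python
def estimate_pg_num(num_osds, weight_percent, replicas=3, pgs_per_osd=100):
    """Illustrative PG estimate (not the charm's implementation):
    target ~pgs_per_osd PGs per OSD, scaled by the pool's share of the
    data (weight_percent) and spread across replicas, rounded up to a
    power of two."""
    raw = num_osds * pgs_per_osd * (weight_percent / 100.0) / replicas
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

# A small 3-OSD test cloud with the 20%-weight default.rgw.buckets.data pool:
print(estimate_pg_num(3, 20))  # -> 32
```

Summed over the ~15 pools in the request, even modest per-pool estimates like this can exceed what a three-OSD test deployment allows, which would trip a mon-side limit check.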
$ juju status ceph-radosgw --relations
Model Controller Cloud/Region Version SLA Timestamp
zaza-fc6306dea031 fnordahl-serverstack serverstack/serverstack 2.7.4 unsupported 14:41:04Z
App Version Status Scale Charm Store Rev OS Notes
ceph-radosgw 15.1.0 blocked 1 ceph-radosgw jujucharms 356 ubuntu
Unit Workload Agent Machine Public address Ports Message
ceph-radosgw/0* blocked idle 6 10.5.0.3 80/tcp Services not running that should be: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6
Machine State DNS Inst id Series AZ Message
6 started 10.5.0.3 4bb9dfd8-17ac-49a3-a322-8f790444ecd2 bionic nova ACTIVE
Relation provider Requirer Interface Type Message
ceph-mon:radosgw ceph-radosgw:mon ceph-radosgw regular
ceph-radosgw:cluster ceph-radosgw:cluster swift-ha peer
keystone:identity-service ceph-radosgw:identity-service keystone regular
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/haproxy/haproxy.cfg
2020-03-21 11:26:53 INFO juju-log identity-service:37: Registered config file: /etc/ceph/ceph.conf
2020-03-21 11:26:55 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:26:55 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:26:57 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:00 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:00 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:01 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:01 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:01 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:03 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:03 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:03 DEBUG juju-log identity-service:37: Ensuring haproxy enabled in /etc/default/haproxy.
2020-03-21 11:27:04 INFO juju-log identity-service:37: HAProxy context is incomplete, this unit has no peers.
2020-03-21 11:27:04 INFO juju-log identity-service:37: Loaded template from /var/lib/juju/agents/unit-ceph-radosgw-0/charm/hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Rendering from template: /etc/haproxy/haproxy.cfg
2020-03-21 11:27:04 INFO juju-log identity-service:37: Wrote template /etc/haproxy/haproxy.cfg.
2020-03-21 11:27:04 DEBUG juju-log identity-service:37: Generating template context for identity-service
2020-03-21 11:27:06 INFO juju-log identity-service:37: Loaded template from templates/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Rendering from template: /etc/ceph/ceph.conf
2020-03-21 11:27:06 INFO juju-log identity-service:37: Wrote template /etc/ceph/ceph.conf.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed ERROR: Site openstack_https_frontend does not exist!
2020-03-21 11:27:06 DEBUG identity-service-relation-changed apache2.service is not active, cannot reload.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed Job for apache2.service failed because the control process exited with error code.
2020-03-21 11:27:06 DEBUG identity-service-relation-changed See "systemctl status apache2.service" and "journalctl -xe" for details.
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 DEBUG identity-service-relation-changed active
2020-03-21 11:27:07 INFO juju-log identity-service:37: Unit is ready
# systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:06 UTC; 3h 15min ago
Process: 6895 ExecReload=/usr/sbin/apachectl graceful (code=exited, status=0/SUCCESS)
Process: 14108 ExecStart=/usr/sbin/apachectl start (code=exited, status=1/FAILURE)
Main PID: 5564 (code=exited, status=1/FAILURE)
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Starting The Apache HTTP Server...
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: no listening sockets available, shutting down
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: AH00015: Unable to open logs
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: Action 'start' failed.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 apachectl[14108]: The Apache error log may have more information.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Control process exited, code=exited status=1
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: apache2.service: Failed with result 'exit-code'.
Mar 21 11:27:06 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start The Apache HTTP Server.
# netstat -nepa |grep LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 0 38964 7263/haproxy
tcp 0 0 252.0.3.1:53 0.0.0.0:* LISTEN 0 23121 2374/dnsmasq
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 101 15544 611/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 18826 910/sshd
tcp 0 0 127.0.0.1:8888 0.0.0.0:* LISTEN 0 38962 7263/haproxy
tcp6 0 0 :::80 :::* LISTEN 0 38965 7263/haproxy
tcp6 0 0 :::22 :::* LISTEN 0 18837 910/sshd
# systemctl status ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service
● ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service - Ceph rados gateway
Loaded: loaded (/lib/systemd/system/ceph-radosgw@.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2020-03-21 11:27:07 UTC; 3h 17min ago
Process: 14228 ExecStart=/usr/bin/radosgw -f --cluster ${CLUSTER} --name client.rgw.juju-ddb957-zaza-fc6306dea031-6 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
Main PID: 14228 (code=exited, status=1/FAILURE)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Stopped Ceph rados gateway.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: Failed to start Ceph rados gateway.
# journalctl -b |grep radosgw
[ ... ]
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.610+0000 7f2016e2c980 -1 AuthRegistry(0x5600fa991198) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 auth: unable to find a keyring on /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring: (2) No such file or directory
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: 2020-03-21T11:27:07.618+0000 7f2016e2c980 -1 AuthRegistry(0x7fffa48ac2d0) no keyring found at /var/lib/ceph/radosgw/ceph-rgw.juju-ddb957-zaza-fc6306dea031-6/keyring, disabling cephx
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 radosgw[14228]: failed to fetch mon config (--no-mon-config to skip)
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Service hold-off time over, scheduling restart.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Scheduled restart job, restart counter is at 5.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Start request repeated too quickly.
Mar 21 11:27:07 juju-ddb957-zaza-fc6306dea031-6 systemd[1]: ceph-radosgw@rgw.juju-ddb957-zaza-fc6306dea031-6.service: Failed with result 'exit-code'.
The ceph-radosgw charm appears to never pick up the broker request response from ceph-mon:
2020-03-21 11:25:14 DEBUG juju-log mon:36: Request already sent but not complete, not sending new request
The response is only present on one of the unit-to-unit relations, but that may or may not be expected:
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/0'
auth: cephx
ceph-public-address: 10.5.0.38
egress-subnets: 10.5.0.38/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.38
private-address: 10.5.0.38
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/1'
auth: cephx
broker-rsp-ceph-radosgw-0: '{"exit-code": 1, "stderr": "Unexpected error occurred
while processing requests: {''api-version'': 1, ''ops'': [{''op'': ''create-pool'',
''name'': ''default.rgw.buckets.data'', ''replicas'': 3, ''pg_num'': None, ''weight'':
20, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.control'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.data.root'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.gc'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.log'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.intent-log'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.meta'', ''replicas'': 3, ''pg_num'': None, ''weight'': 0.1,
''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.usage'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.keys'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.users.email'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.users.swift'',
''replicas'': 3, ''pg_num'': None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''default.rgw.users.uid'', ''replicas'': 3, ''pg_num'':
None, ''weight'': 0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'':
''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'': ''create-pool'',
''name'': ''default.rgw.buckets.extra'', ''replicas'': 3, ''pg_num'': None, ''weight'':
1.0, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}, {''op'': ''create-pool'', ''name'': ''default.rgw.buckets.index'',
''replicas'': 3, ''pg_num'': None, ''weight'': 3.0, ''group'': ''objects'', ''group-namespace'':
None, ''app-name'': ''rgw'', ''max-bytes'': None, ''max-objects'': None}, {''op'':
''create-pool'', ''name'': ''.rgw.root'', ''replicas'': 3, ''pg_num'': None, ''weight'':
0.1, ''group'': ''objects'', ''group-namespace'': None, ''app-name'': ''rgw'', ''max-bytes'':
None, ''max-objects'': None}], ''request-id'': ''f41b0e16-6b65-11ea-a7e5-fa163e452a2c''}"}'
ceph-public-address: 10.5.0.18
egress-subnets: 10.5.0.18/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.18
private-address: 10.5.0.18
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
ubuntu@test:~$ juju run --unit ceph-radosgw/0 'relation-get -r mon:36 - ceph-mon/2'
auth: cephx
ceph-public-address: 10.5.0.4
egress-subnets: 10.5.0.4/32
fsid: f82f86bc-6b65-11ea-bf83-fa163e6453d2
ingress-address: 10.5.0.4
private-address: 10.5.0.4
rgw.juju-ddb957-zaza-fc6306dea031-6_key: AQAf+XVeUildAxAARIUZUmzoh4/zIfwBTQ5m1g==
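The lookup this transcript demonstrates can be sketched as follows. This is an illustration of why the response living on only one mon unit's relation data matters — the requester has to scan every related unit for its per-requester key — and not the charm's actual implementation; the key name broker-rsp-ceph-radosgw-0 and the payload shape are taken from the relation data above:

```python
import json

def find_broker_response(relation_data, local_unit="ceph-radosgw/0"):
    """Scan each related mon unit's relation data for the broker response
    addressed to local_unit (published under 'broker-rsp-<unit-with-dashes>')."""
    key = "broker-rsp-" + local_unit.replace("/", "-")
    for unit, data in relation_data.items():
        if key in data:
            return unit, json.loads(data[key])
    return None, None

# Relation data reduced from the juju run output above: only ceph-mon/1
# carries the response, and its exit-code flags the failed request.
relation_data = {
    "ceph-mon/0": {"auth": "cephx"},
    "ceph-mon/1": {"auth": "cephx",
                   "broker-rsp-ceph-radosgw-0":
                       '{"exit-code": 1, "stderr": "Unexpected error occurred"}'},
    "ceph-mon/2": {"auth": "cephx"},
}
unit, rsp = find_broker_response(relation_data)
print(unit, rsp["exit-code"])  # -> ceph-mon/1 1
```

If the charm only inspects the relation data of whichever mon unit fired the hook, it would miss a response like this that ceph-mon/1 alone has published.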
Note that the 'broker-rsp-ceph-radosgw-0: '{"exit-code": 1, "stderr": "Unexpected error occurred
while processing requests:' was caused by a bug in the Ceph Octopus PG autoscaling code; see bug 1868587 |
|
2020-05-21 20:17:09 |
David Ames |
charm-ceph-radosgw: milestone |
20.05 |
20.08 |
|
2020-05-28 12:39:11 |
OpenStack Infra |
charm-ceph-radosgw: status |
In Progress |
Fix Committed |
|
2020-06-01 17:48:20 |
David Coronel |
bug |
|
|
added subscriber David Coronel |
2020-06-16 12:13:43 |
Nobuto Murata |
bug |
|
|
added subscriber Nobuto Murata |
2020-08-14 15:39:08 |
Alex Kavanagh |
charm-ceph-radosgw: status |
Fix Committed |
Fix Released |
|