Comment 1 for bug 1915280

David Coronel (davecore) wrote: Re: docker login failed when relating to docker-registry with http-host configured

After investigating, I don't think there is any real problem with the docker registry. I think the problem is just that the charm does not do a status-set after things are OK again. When I had the wrong IP (the OAM private IP), it did do a status-set:

unit-docker-0: 17:02:19 INFO unit.docker/0.juju-log docker-registry:105: Logging into docker registry: 172.31.222.132:5000.
unit-docker-0: 17:02:49 INFO unit.docker/0.juju-log docker-registry:105: Invoking reactive handler: reactive/docker.py:679:docker_restart
unit-docker-0: 17:02:49 WARNING unit.docker/0.juju-log docker-registry:105: Passing NO_PROXY string that includes a cidr. This may not be compatible with software you are running in your shell.
unit-docker-0: 17:03:19 INFO unit.docker/0.juju-log docker-registry:105: Restarting docker service.
unit-docker-0: 17:03:19 INFO unit.docker/0.juju-log docker-registry:105: Setting runtime to {'apt': ['docker.io'], 'upstream': ['docker-ce'], 'nvidia': ['docker-ce', 'nvidia-docker2', 'nvidia-container-runtime', 'nvidia-container-runtime-hook']}
unit-docker-0: 17:03:19 INFO unit.docker/0.juju-log docker-registry:105: Reloading system daemons.
unit-docker-0: 17:03:31 WARNING unit.docker/0.docker-registry-relation-changed WARNING: No swap limit support
unit-docker-0: 17:03:31 INFO unit.docker/0.juju-log docker-registry:105: status-set: blocked: docker login failed, see juju debug-log

I think we just never re-set the status, which would also explain why my docker subordinates all stay in a blocked state after I remove the docker-registry relation.
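
Just to spell out why that matters (a rough sketch using the charmhelpers status_set wrapper, not the charm's actual code): Juju keeps whatever workload status was set last, so a single blocked status-set sticks until something sets a new one:

# Sketch only, not the docker charm source.
from charmhelpers.core.hookenv import status_set

# This is effectively what the log above shows happening once:
status_set('blocked', 'docker login failed, see juju debug-log')

# Unless a later hook runs something like the line below, the unit keeps
# showing "blocked" even after docker login starts working again:
status_set('active', 'Container runtime available.')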

To push the investigation further, I ran "juju debug-hooks -m kubernetes docker/0", then ran "juju run -m kubernetes --unit docker/0 'hooks/update-status'" in another window, which triggers the update-status hook in my debug-hooks session.

Then I ran "charms.reactive -p get_flags" and saw the following:

['docker.available',
 'docker.ready',
 'docker.registry.configured',
 'endpoint.docker-registry.changed.basic_password',
 'endpoint.docker-registry.changed.basic_user',
 'endpoint.docker-registry.changed.egress-subnets',
 'endpoint.docker-registry.changed.ingress-address',
 'endpoint.docker-registry.changed.private-address',
 'endpoint.docker-registry.changed.registry_netloc',
 'endpoint.docker-registry.changed.registry_url',
 'endpoint.docker-registry.changed.tls_ca',
 'endpoint.docker-registry.joined',
 'endpoint.docker-registry.ready',
 'endpoint.docker.available',
 'endpoint.docker.changed',
 'endpoint.docker.changed.egress-subnets',
 'endpoint.docker.changed.ingress-address',
 'endpoint.docker.changed.private-address',
 'endpoint.docker.changed.sandbox_image',
 'endpoint.docker.joined',
 'endpoint.docker.reconfigure']
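
The important part for me is that 'docker.available' is in that list. As I understand charms.reactive, a set flag means any handler decorated with @when_not('docker.available') gets skipped; a quick way to confirm from the same session (assuming the is_flag_set helper):

from charms.reactive import is_flag_set

# 'docker.available' appears in the flag list above, so this returns True,
# and any handler gated with @when_not('docker.available') will be skipped.
is_flag_set('docker.available')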

The only place in the docker charm code where status.active('Container runtime available.') and set_state('docker.available') are set is the signal_workloads_start function:

@when('docker.ready')
@when_not('docker.available')
def signal_workloads_start():

But right now, in my debug-hooks session during update-status, docker.available is already set, so this handler won't run.
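
If I'm reading that right, one possible direction for a fix (only a sketch, with a made-up handler name and my guess at a reasonable flag combination taken from the flag list above, not a patch) would be a handler that re-asserts the active status even when docker.available is already set:

from charms.reactive import when
from charms.layer import status   # assuming the layer-status helper the charm's status.active() call suggests

@when('docker.available', 'docker.registry.configured',
      'endpoint.docker-registry.ready')
def reset_status_after_registry_ok():
    # Hypothetical handler: once the registry relation is configured and
    # ready again, put the workload status back instead of leaving the
    # stale "docker login failed" blocked message in place.
    status.active('Container runtime available.')

That's obviously naive (it would fire on every hook while those flags are set), but it's the kind of path that seems to be missing today.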

Am I understanding this right?