Unable to find implementation for relation: peers of vault-ha

Bug #1845933 reported by Vuk Vasic
This bug affects 1 person
Affects: Vault KV Charm Layer
Status: Incomplete
Importance: Undecided
Assigned to: Unassigned

Bug Description

When installing a cluster with an HA Vault of 3 instances and a Percona cluster of 3 instances, I get the following error in the log in a loop:

ERROR unit.vault/1.juju-log certificates:11: Unable to find implementation for relation: peers of vault-ha

Cluster installation does not continue afterwards.
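
For context, the deployment is essentially three Vault units backed by a three-unit Percona cluster. A minimal sketch of such a deployment (illustrative only; my actual setup uses a bundle with an overlay, and the relation endpoint names are assumed):

juju deploy -n 3 percona-cluster
juju deploy -n 3 vault
juju add-relation vault:shared-db percona-cluster:shared-db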

Revision history for this message
Cory Johns (johnsca) wrote :

The Vault charm defines a peer relation in its metadata.yaml but never uses it. Presumably, this was either an oversight or was put in place for some future use case that has not been implemented.

The "Unable to find implementation for relation" message is just warning that the Vault charm author may have forgotten to add 'interface:vault-ha' to the layer.yaml's includes section and that the charm may be expecting functionality provided by that layer which isn't available. However, since as far as I can tell, the vault-ha interface layer simply does not exist, and there is no code in the Vault charm which would use it even if it did, this message is just spurious.

I think it is, in fact, a red herring for the real issue you're encountering. Is there any chance you could provide a more complete set of debugging logs, such as via juju-crashdump (snap install juju-crashdump)?

Revision history for this message
Tim Van Steenburgh (tvansteenburgh) wrote :

Yeah, a juju-crashdump tarball would help. Here's how to get one:

sudo snap install juju-crashdump --channel edge
juju-crashdump -a debug-layer -a config

Changed in charm-layer-vault-kv:
status: New → Incomplete
Revision history for this message
Vuk Vasic (vasicvuk) wrote :

When I run these commands I get the following error:

juju-crashdump -a debug-layer -a config
Traceback (most recent call last):
  File "/snap/juju-crashdump/137/bin/juju-crashdump", line 11, in <module>
    load_entry_point('jujucrashdump==0.0.0', 'console_scripts', 'juju-crashdump')()
  File "/snap/juju-crashdump/137/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 542, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/snap/juju-crashdump/137/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2569, in load_entry_point
    return ep.load()
  File "/snap/juju-crashdump/137/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2229, in load
    return self.resolve()
  File "/snap/juju-crashdump/137/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2235, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/snap/juju-crashdump/137/lib/python2.7/site-packages/jujucrashdump/crashdump.py", line 16, in <module>
    import concurrent.futures
ImportError: No module named concurrent.futures

Revision history for this message
Vuk Vasic (vasicvuk) wrote :

I installed Python, pip, and the futures module. The command output gave me a lot of errors, but in the end it produced an output tar.xz (it is attached).
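
Roughly, the workaround was along these lines (exact commands assumed; 'futures' is the Python 2 backport of the concurrent.futures module that the traceback complains about):

sudo apt install python-pip    # assumed: pip for the system Python 2
pip install futures            # provides concurrent.futures on Python 2.7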

Revision history for this message
Cory Johns (johnsca) wrote :

I think you're hitting this bug in crashdump: https://github.com/juju/juju-crashdump/pull/46/files

Revision history for this message
Vuk Vasic (vasicvuk) wrote :

Here are the errors I got from juju-crashdump:

command juju ssh {} "mkdir -p /tmp/f8509c67-373e-4945-8026-922fcfc8914b/addons" failed
command juju ssh {} "mkdir -p /tmp/f8509c67-373e-4945-8026-922fcfc8914b/addons" failed
command juju ssh {} "mkdir -p /tmp/f8509c67-373e-4945-8026-922fcfc8914b/addons" failed
command juju ssh {} "mkdir -p /tmp/f8509c67-373e-4945-8026-922fcfc8914b/addons" failed
command juju ssh {} "mkdir -p /tmp/f8509c67-373e-4945-8026-922fcfc8914b/addons" failed
command juju ssh {} "mkdir -p /tmp/f8509c67-373e-4945-8026-922fcfc8914b/addons" failed
command juju ssh {} "mkdir -p /tmp/f8509c67-373e-4945-8026-922fcfc8914b/addon_output" failed
command juju ssh {} "mkdir -p /tmp/f8509c67-373e-4945-8026-922fcfc8914b/addon_output" failed
command juju ssh {} "mkdir -p /tmp/f8509c67-373e-4945-8026-922fcfc8914b/addon_output" failed
command juju ssh {} "mkdir -p /tmp/f8509c67-373e-4945-8026-922fcfc8914b/addon_output" failed
command juju ssh {} "mkdir -p /tmp/f8509c67-373e-4945-8026-922fcfc8914b/addon_output" failed
command juju ssh {} "mkdir -p /tmp/f8509c67-373e-4945-8026-922fcfc8914b/addon_output" failed
command juju scp -- -r {}:/tmp/f8509c67-373e-4945-8026-922fcfc8914b/addons failed
command juju scp -- -r {}:/tmp/f8509c67-373e-4945-8026-922fcfc8914b/addons failed
command juju scp -- -r {}:/tmp/f8509c67-373e-4945-8026-922fcfc8914b/addons failed
command juju scp -- -r {}:/tmp/f8509c67-373e-4945-8026-922fcfc8914b/addons failed
command juju scp -- -r {}:/tmp/f8509c67-373e-4945-8026-922fcfc8914b/addons failed
command juju scp -- -r {}:/tmp/f8509c67-373e-4945-8026-922fcfc8914b/addons failed

Revision history for this message
Vuk Vasic (vasicvuk) wrote :

I can reinstall the snap when a new version with Python 3 is released, but I am not sure that I will keep this cluster in this state for a long time.

Revision history for this message
Tim Van Steenburgh (tvansteenburgh) wrote :

@Vuk, a new snap is available on the edge channel now.
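
It can be pulled in with something like the following (assuming the snap is already installed from the earlier instructions; otherwise the install command above works too):

sudo snap refresh juju-crashdump --channel=edge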

Revision history for this message
Vuk Vasic (vasicvuk) wrote :

Hi Tim,

I installed the new one with Python 3 but the errors are the same:

juju-crashdump -a debug-layer -a config
/snap/juju-crashdump/143/lib/python3.5/site-packages/jujucrashdump/addons.py:48: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  addon_specs = yaml.load(addons_file)
juju-crashdump -a debug-layer -a config
command juju scp -- -r {}:/tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons failed
command juju scp -- -r {}:/tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons failed
command juju scp -- -r {}:/tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons failed
command juju scp -- -r {}:/tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons failed
command juju scp -- -r {}:/tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons failed
command juju scp -- -r {}:/tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons failed
command juju scp -- -r {}:/tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons failed
command juju scp -- -r {}:/tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons failed
command juju scp -- -r {}:/tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons failed
command juju scp -- -r {}:/tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons failed
command juju scp -- -r {}:/tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons failed
command juju scp -- -r {}:/tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons failed
command juju ssh {} -- "cd /tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons; mkdir /tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addon_output/config; . /etc/profile.d/juju-introspection.sh; for app in $(juju_machine_lock | grep -E '^\w' |sed 's/-[0-9]\+:$//g' | sed 's/unit-//g'); do cp $app-config.yaml /tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addon_output/config; done;" failed
command juju ssh {} -- "cd /tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons; mkdir /tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addon_output/config; . /etc/profile.d/juju-introspection.sh; for app in $(juju_machine_lock | grep -E '^\w' |sed 's/-[0-9]\+:$//g' | sed 's/unit-//g'); do cp $app-config.yaml /tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addon_output/config; done;" failed
command juju ssh {} -- "cd /tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons; mkdir /tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addon_output/config; . /etc/profile.d/juju-introspection.sh; for app in $(juju_machine_lock | grep -E '^\w' |sed 's/-[0-9]\+:$//g' | sed 's/unit-//g'); do cp $app-config.yaml /tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addon_output/config; done;" failed
command juju ssh {} -- "cd /tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons; mkdir /tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addon_output/config; . /etc/profile.d/juju-introspection.sh; for app in $(juju_machine_lock | grep -E '^\w' |sed 's/-[0-9]\+:$//g' | sed 's/unit-//g'); do cp $app-config.yaml /tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addon_output/config; done;" failed
command juju ssh {} -- "cd /tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addons; mkdir /tmp/334932c9-a357-4f9a-9029-5d29caaf4b6a/addon_output/config; . /etc/profile.d/juju-introspection.sh; for app in $(juju_machine_lock | grep -E '^\w' |sed 's/-[0-9]\+...

Revision history for this message
Cory Johns (johnsca) wrote :

The one crashdump that you provided, while missing the logs for the Vault charm, did show the status output from Vault as being "'etcd' incomplete" and the Etcd status being "Missing relation to certificate authority."

This typically means that Vault is blocked waiting on an action from you: either performing the init and unseal steps, or providing a CA certificate (or having the charm generate a root CA cert for you). If you used the overlay from the documentation[1], it will automatically generate a root CA for you, which means you simply need to follow the instructions in those same docs to init and unseal the Vault unit(s). For reference, these are the instructions:

juju ssh vault/0
export HISTCONTROL=ignorespace # enable leading space to suppress command history
export VAULT_ADDR='http://localhost:8200'
vault operator init -key-shares=5 -key-threshold=3 # this will give you 5 keys and a root token
  vault operator unseal {key1}
  vault operator unseal {key2}
  vault operator unseal {key3}
  VAULT_TOKEN={root token} vault token create -ttl 10m # this will give you a token to auth the charm
exit
juju run-action vault/0 authorize-charm token={charm token}

Since you are using more than one unit of Vault, you will need to repeat the `vault operator unseal`, `vault token create`, and `authorize-charm` steps for each unit after the first, re-using the unseal keys and root token from the first unit.
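
For example, on the second unit it would look roughly like this (a sketch; unit numbering assumed, re-using the keys and root token obtained on vault/0):

juju ssh vault/1
export HISTCONTROL=ignorespace
export VAULT_ADDR='http://localhost:8200'
  vault operator unseal {key1}
  vault operator unseal {key2}
  vault operator unseal {key3}
  VAULT_TOKEN={root token} vault token create -ttl 10m
exit
juju run-action vault/1 authorize-charm token={charm token}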

[1]: https://ubuntu.com/kubernetes/docs/using-vault

Revision history for this message
Cory Johns (johnsca) wrote :

If you did not use the `auto-generate-root-ca-cert: true` config as suggested in the overlay provided in the docs, you will then need to generate a root CA certificate, which can be done easily with:

juju run-action vault/0 --wait generate-root-ca

Revision history for this message
Vuk Vasic (vasicvuk) wrote :

I used auto-generate-root-ca-cert: true and also totally-unsecure-auto-unlock: true, so I would guess unsealing should also happen automatically (maybe I am wrong, but in any case it is not in blocked status as it is supposed to be). For some reason the Vault service is not installed on the first machine at all, so here is the output I get from the command you gave me:

k8s@juju-mgmt:~/.local/share/juju/ssh$ juju run-action vault/0 --wait generate-root-ca
unit-vault-0:
  id: 2da21796-6c7a-458a-80e2-aeb62ec4d521
  message: Vault is not ready (Cannot initialise local client)
  status: failed
  timing:
    completed: 2019-09-30 18:44:59 +0000 UTC
    enqueued: 2019-09-30 18:44:54 +0000 UTC
    started: 2019-09-30 18:44:58 +0000 UTC
  unit: vault/0

I tried different configurations multiple times. Sometimes it just doesn't run Vault on some of the instances, and once it actually showed that it cannot connect to MySQL (Percona cluster) because the host was not allowed to connect.

If you want, I can post the whole bundle here.
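
In the meantime, the relevant part of my overlay looks roughly like this (a sketch only; application name and structure as I recall them, the full bundle is longer):

applications:
  vault:
    num_units: 3
    options:
      auto-generate-root-ca-cert: true
      totally-unsecure-auto-unlock: true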

Revision history for this message
Vuk Vasic (vasicvuk) wrote :

I see that the issue is still in Incomplete status. What more info can I provide so that this can be fixed?

Revision history for this message
Tim Van Steenburgh (tvansteenburgh) wrote :

Hi Vuk, we're working on a fix for juju-crashdump so we can get all the debug info that we need. Hope to have a new juju-crashdump snap available soon - will comment here when we do.

Revision history for this message
Tim Van Steenburgh (tvansteenburgh) wrote :

Hi Vuk, there's a new juju-crashdump snap available on the edge channel, can you try again using that one?

Revision history for this message
Vuk Vasic (vasicvuk) wrote :

I am still having the same errors with juju-crashdump from the edge channel. It seems like it is connecting to "{}" instead of the actual machine when dumping logs.
