Running "kayobe seed host configure" on a host that already has docker and a registry (in this case the kayobe control host) breaks the existing docker containers

Bug #2051973 reported by Martin Ananda Boeker
Affects: kayobe
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

Kayobe 14.1, Ubuntu 22.04, Docker version 25.0.2

Hey guys. After some experimenting, and after finding some git commits from Mark Goddard from 8 years ago, I decided to see whether I could put the seed and the Kayobe control host on the same system, since it seems the only thing the seed actually does is run the Bifrost container (even the seed backup/restore steps mention only Bifrost).

I've tried just giving seed and kayobehost (my sandbox control host) the same IP, and it seems to work pretty well, except that the containers running on kayobehost become unreachable after `seed host configure` is run. They are still running, but, for example, the nginx container is no longer reachable. This includes the registry (the same one that `kayobe seed host configure` wants to install is already there). All of the containers work up until this point:

RUNNING HANDLER [openstack.kolla.docker : Restart docker] **********************
task path: /home/ubuntu/venvs/kayobe/share/kayobe/ansible/collections/ansible_collections/openstack/kolla/roles/docker/handlers/main.yml

I've been trying to debug exactly what causes the breakage, but haven't had much luck. I'm sure the installation itself isn't the problem (it amounts to just `apt install docker-ce`), so the issue must be somewhere in /home/ubuntu/venvs/kayobe/share/kayobe/ansible/collections/ansible_collections/openstack/kolla/roles/docker/tasks/config.yml

I also tried adding kayobehost to [bifrost:children] and just running `kayobe seed service deploy`, but it definitely doesn't like that: it fails at roles/bifrost/tasks/start.yml saying the `docker` module was not found.

Is there a way to get bifrost going on the control host? Does seed serve any other purpose? Can we figure out why the existing docker registry gets trashed by seed host configure?

Thanks!!

Revision history for this message
Will Szumski (willjs) wrote :

We tend to disable the iptables integration (https://docs.docker.com/network/packet-filtering-firewalls/#prevent-docker-from-manipulating-iptables), which can break port forwarding. Do you think that could be the issue here?
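For reference, a minimal sketch of what disabling that integration looks like in `/etc/docker/daemon.json` (per the linked Docker docs; requires a Docker restart to take effect):

```json
{
  "iptables": false
}
```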

Revision history for this message
Will Szumski (willjs) wrote :

I think you have a couple of options:

- Tune these variables: https://github.com/openstack/ansible-collection-kolla/blob/7abde4d37a1c70182db51da707adf7bb6aab0a8c/roles/docker/defaults/main.yml#L40-L42

- Stop kayobe configuring docker by removing the seed host from the docker group: https://github.com/openstack/kayobe/blob/master/etc/kayobe/inventory/groups#L68-L75. This assumes you are deploying docker by some other means.
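A sketch of that second option, assuming the group names from the stock `etc/kayobe/inventory/groups` (exact group membership varies by release, so check the linked file for your version):

```ini
# etc/kayobe/inventory/groups (sketch): comment the seed out of the group
# that drives docker configuration, and manage Docker on that host yourself.
[docker:children]
#seed
```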

Revision history for this message
Martin Ananda Boeker (mboeker) wrote (last edit ):

So in the process of trying to get bifrost deployed on the kayobe control host (kayobevm), I've tried this:

in etc/kayobe/inventory/hosts:

[kayobevm]
kayobevm ansible_connection=local

#[seed]
# Add a seed node here if required. This host will provide the Bifrost
# undercloud.
#seed

in etc/kayobe/inventory/groups:

[kayobevm]
# Empty group to provide declaration of kayobevm group.

[bifrost]
# Empty group to provide declaration of bifrost group.

[bifrost:children]
kayobevm

[deployment]
# Empty group to provide declaration of bifrost group.

[deployment:children]
kayobevm

As you can see, I've tried a lot. However, the playbook still says this:

PLAY [Apply role bifrost] **************************************************************************************************************************
skipping: no hosts matched

Related to the other ticket I created, where the kolla inventory does not reflect the kayobe inventory, I'm seeing this in etc/kolla/inventory/seed/hosts:

[bifrost:children]
seed

And this in etc/kolla/inventory/overcloud/hosts:

[bifrost:children]
deployment

# Empty group definition for deployment.
[deployment]

(nothing in this group).

Any suggestions?

Revision history for this message
Will Szumski (willjs) wrote :

Have you tried something like:

[seed:children]
# Add a seed node here if required. This host will provide the Bifrost
# undercloud.
kayobevm

?

Revision history for this message
Martin Ananda Boeker (mboeker) wrote (last edit ):

I went with [deployment] containing just kayobevm, and then basically s/seed/deployment/ in groups and s/seed/kayobevm/ in hosts and anywhere else. It was unhappy with the network config, so I also created etc/kayobe/deployment.yml with just the interface. After that it was okay to run `seed host configure` and I got this:

~~~
EDIT: what I wrote above isn't right; I had a different configuration in my last test. In my etc/kayobe/inventory/hosts file I had just kayobevm in [seed], and in groups I just removed seed from [docker-registry:children], and that was it!
~~~

PLAY [Ensure a local Docker registry is deployed] **************************************************************************************************
skipping: no hosts matched

Yay! However, when I then run `seed service deploy`, it does a whole bunch of stuff but then says this:

Kolla virtualenv /home/venvs/kolla-ansible is invalid: Path does not exist

I checked, and KOLLA_VENV_PATH is really just that. I see in kayobe-env that it's derived from base_path, the grandparent of $KAYOBE_CONFIG_ROOT, but that doesn't make sense, at least not in my setup (my KAYOBE_CONFIG_ROOT is ~/demo/):

(kayobe) ubuntu@kayobevm:~/demo$ source kayobe-env
Using Kayobe config from /home/ubuntu/demo
(kayobe) ubuntu@kayobevm:~/demo$ echo $KAYOBE_CONFIG_PATH
/home/ubuntu/demo/etc/kayobe
(kayobe) ubuntu@kayobevm:~/demo$ echo $KOLLA_VENV_PATH
/home/venvs/kolla-ansible
(kayobe) ubuntu@kayobevm:~/demo$ grep 'base\|VENV' kayobe-env
# These defaults are set based on the following directory structure:
# ${base_path}/
base_path=$(realpath $KAYOBE_CONFIG_ROOT/../../)
export KOLLA_SOURCE_PATH=${base_path}/src/kolla-ansible
export KOLLA_VENV_PATH=${base_path}/venvs/kolla-ansible # Should be called KOLLA_ANIBLE, no?
export KAYOBE_VENV_PATH=${base_path}/venvs/kayobe # Added this just for fun.

I changed kayobe-env to base_path=$(realpath $KAYOBE_CONFIG_ROOT) and then it worked great!
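The two `base_path` resolutions can be sketched like this (a throwaway directory stands in for the flat ~/demo layout; this is an illustration, not the real kayobe-env script):

```shell
# Stand-in for ~/demo: a flat layout where the config repo is the top level.
base=$(mktemp -d)
KAYOBE_CONFIG_ROOT="$base/demo"
mkdir -p "$KAYOBE_CONFIG_ROOT/venvs/kolla-ansible"

# Stock kayobe-env: base_path is two directory levels above the config root,
# which assumes the config repo is nested under ${base_path}/src/.
stock=$(realpath "$KAYOBE_CONFIG_ROOT/../../")

# Adjusted: base_path is the config root itself.
adjusted=$(realpath "$KAYOBE_CONFIG_ROOT")

echo "stock:    $stock/venvs/kolla-ansible"
echo "adjusted: $adjusted/venvs/kolla-ansible"

# In this flat layout, only the adjusted path actually exists.
test -d "$adjusted/venvs/kolla-ansible" && echo "adjusted path exists"
```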

Revision history for this message
Martin Ananda Boeker (mboeker) wrote (last edit ):

This is strange. I'm pretty sure I did not change the configuration (but obviously I must have).

I just did the process fresh, added kayobevm to the [seed] group and removed seed from the [docker-registry:children] group.

This time I'm getting this error:

TASK [bifrost : Starting bifrost deploy container] *************************************************************************************************************************************************************************************
Wednesday 07 February 2024 18:17:50 +0100 (0:00:01.941) 0:00:07.236 ****
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'dbus'
fatal: [kayobevm]: FAILED! => changed=false
  module_stderr: |-
    Traceback (most recent call last):
      File "<stdin>", line 107, in <module>
      File "<stdin>", line 99, in _ansiballz_main
      File "<stdin>", line 47, in invoke_module
      File "/usr/lib/python3.10/runpy.py", line 224, in run_module
        return _run_module_code(code, init_globals, run_name, mod_spec)
      File "/usr/lib/python3.10/runpy.py", line 96, in _run_module_code
        _run_code(code, mod_globals, init_globals,
      File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
        exec(code, run_globals)
      File "/tmp/ansible_kolla_docker_payload_srevftea/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py", line 22, in <module>
      File "/tmp/ansible_kolla_docker_payload_srevftea/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_docker_worker.py", line 21, in <module>
      File "/tmp/ansible_kolla_docker_payload_srevftea/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_container_worker.py", line 17, in <module>
      File "/tmp/ansible_kolla_docker_payload_srevftea/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_systemd_worker.py", line 17, in <module>
    ModuleNotFoundError: No module named 'dbus'
  module_stdout: ''
  msg: |-
    MODULE FAILURE
    See stdout/stderr for the exact error
  rc: 1

Earlier the bifrost container started up just fine.

Googling suggests pip installing dbus-python, but that wants other packages as well, and that can't be right, because none of this was an issue before.
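A quick way to see which interpreter actually has the bindings (the venv path below is an assumption based on the paths earlier in this thread; adjust to your layout):

```shell
# Check each candidate interpreter for the dbus module that
# kolla_systemd_worker imports. A missing interpreter counts as missing dbus.
for py in /usr/bin/python3 /home/ubuntu/demo/venvs/kolla-ansible/bin/python; do
  if "$py" -c 'import dbus' 2>/dev/null; then
    echo "$py: dbus OK"
  else
    echo "$py: dbus missing (or interpreter absent)"
  fi
done
```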

Revision history for this message
Martin Ananda Boeker (mboeker) wrote :

Update: I ended up commenting out the kolla_venv and kolla_ansible_venv variables in etc/kayobe/kolla.yml and now bifrost is deploying again. I left the seed group (only containing host kayobevm) in [docker:children] just for testing. The containers that are running before kayobe installs docker are not automatically restarted after the install, but they do work if restarted manually.
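For reference, the change amounts to something like this in `etc/kayobe/kolla.yml` (a sketch; variable names as mentioned above, values illustrative — with them commented out, kayobe falls back to its defaults):

```yaml
# etc/kayobe/kolla.yml (sketch)
#kolla_ansible_venv: "/home/venvs/kolla-ansible"
#kolla_venv: "/home/venvs/kolla-ansible"
```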

Revision history for this message
Martin Ananda Boeker (mboeker) wrote :

Update 2: getting the dbus error again. I don't know what changed between the last deployment and this one.

Changes from default:

$KAYOBE_BASE_PATH/kayobe-env: base_path=$(realpath $KAYOBE_CONFIG_ROOT/)

inventory/groups: removed seed from [docker-registry:children]

inventory/hosts: under [seed] I removed seed and added "kayobevm ansible_connection=local"
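Put together, the non-default inventory bits look roughly like this (sketch):

```ini
# etc/kayobe/inventory/hosts
[seed]
kayobevm ansible_connection=local

# etc/kayobe/inventory/groups
[docker-registry:children]
#seed    <- removed so the existing registry is left alone
```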

Here's my env:

(kayobe) ubuntu@kayobevm:~/demo/etc/kayobe$ env | grep 'KAY\|KOL' | sort
KAYOBE_BASE_PATH=/home/ubuntu/demo
KAYOBE_CONFIG_PATH=/home/ubuntu/demo/etc/kayobe
KOLLA_CONFIG_PATH=/home/ubuntu/demo/etc/kolla
KOLLA_SOURCE_PATH=/home/ubuntu/demo/src/kolla-ansible
KOLLA_VENV_PATH=/home/ubuntu/demo/venvs/kolla-ansible

It's complaining about dbus missing again, and I can't figure out what change is breaking it.

Revision history for this message
Will Szumski (willjs) wrote :

It looks like kayobe is using the system Python interpreter. Normally, when you run host configure, it will set up the kolla-ansible venv:

https://github.com/openstack/kayobe/blob/595fe87e863f893e9e79b634843f5c85d21e14a3/ansible/inventory/group_vars/all/kolla#L455

Then when you run service deploy it will use that venv and you should have the python dbus bindings.

Revision history for this message
Will Szumski (willjs) wrote :

> I checked and the KOLLA_VENV_PATH is really just that. I see in kayobe-env it's set to be the grandparent of $KAYOBE_CONFIG_ROOT, but that doesn't make sense, at least not in my setup (my setup has KAYOBE_CONFIG_ROOT as ~/demo/)

It does tend to assume a layout like this:

[stack@kvm ~]$ tree ~/kayobe-env-2023.1/ -L 2
/home/stack/kayobe-env-2023.1/
├── env-vars.sh
├── src
│   ├── kayobe
│   ├── kayobe-config
│   └── kolla-ansible
└── venvs
    ├── kayobe
    └── kolla-ansible

Revision history for this message
Martin Ananda Boeker (mboeker) wrote :

Okay, I will look into modifying my structure. Is env-vars.sh a custom alternative to kayobe-env?

On another note:

I tried setting up Kayobe again from scratch, modifying etc/kayobe/* as little as possible for what is needed to run `seed host configure && seed service deploy`, and making only one change in hosts/groups: putting kayobevm in the [seed] group. Bifrost built without problems, so the issue must be somewhere in etc/kayobe/*.yml.

Re: "It looks like kayobe is using the system python interpreter." -- what's really interesting is that if I do a pip list on the system, python dbus is there, but it's not present in either the kayobe or kolla-ansible venvs.

Revision history for this message
Will Szumski (willjs) wrote :

> Okay, I will look into modifying my structure. Is env-vars.sh a custom alternative to kayobe-env?

That is generated by this script: https://github.com/stackhpc/beokay. It sets a few environment variables to make activating the environment easier.
