scripts/bootstrap-ansible.sh failed due to pip version

Bug #1913204 reported by phonix6
This bug affects 1 person
Affects: OpenStack-Ansible
Status: Fix Released
Importance: High
Assigned to: Unassigned

Bug Description

Hi Folks,

It seems that there is some kind of incompatibility in the dependencies between CentOS 8.2 and OpenStack-Ansible.
I have set up the deployment host as described in https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/deploymenthost.html

While running scripts/bootstrap-ansible.sh I get the following errors:

Revision history for this message
Dmitriy Rabotyagov (noonedeadpunk) wrote :

CentOS 8.3 is not actually compatible with 8.2, as they have different repo names and some other differences, which made us drop 8.2 support in favor of 8.3. However, without the actual errors it's hard to tell whether that is the case for you.
That said, if you followed the guide, dnf upgrade should have brought you to 8.3 anyway.

So please provide us with the actual errors you've faced during deployment.

Changed in openstack-ansible:
status: New → Incomplete
Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

Thanks for answering.

Here is the info, in the attachment.

Yes, that's right, I have upgraded 8.2 to 8.3.2011.

KR,

Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

I believe it could be related to the Python installation.
I have:
python36-devel.x86_64 3.6.8-2.module_el8.3.0+562+e162826a @appstream

After cloning the repo with:
git clone -b master https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible

and trying to run the syntax check:
openstack-ansible setup-infrastructure.yml --syntax-check

I get the following:

[root@srv-infra playbooks]# openstack-ansible setup-infrastructure.yml --syntax-check
-bash: openstack-ansible: command not found

I really cannot understand this; the openstack-ansible repo is definitely cloned.
So it's either the pip package or the python package.

I have tried to remove the Python package, but the same version gets reinstalled afterwards.

[root@srv-infra playbooks]# dnf remove python36-devel.x86_64
Dependencies resolved.
=====================================================================================================================================================================================================================================
 Package Architecture Version Repository Size
=====================================================================================================================================================================================================================================
Removing:
 python36-devel x86_64 3.6.8-2.module_el8.3.0+562+e162826a @appstream 13 k
Removing unused dependencies:
 platform-python-devel x86_64 3.6.8-31.el8 @appstream 696 k
 python-rpm-macros noarch 3-39.el8 @appstream 3.3 k
 python3-rpm-generators noarch 5-6.el8 @appstream 55 k

Transaction Summary
=====================================================================================================================================================================================================================================
Remove 4 Packages

Freed space: 767 k
Is this ok [y/N]:

[root@srv-infra playbooks]# dnf install python3-devel
Last metadata expiration check: 0:35:29 ago on Tue 26 Jan 2021 11:05:06 AM CET.
Dependencies resolved.
=====================================================================================================================================================================================================================================
 Package ...


Revision history for this message
Dmitriy Rabotyagov (noonedeadpunk) wrote :

Right now you failed to bootstrap openstack-ansible because of a known bug. Patch https://review.opendev.org/c/openstack/openstack-ansible/+/772495 should cover that. Actually, you can just run
dnf install rsync

and after that re-run ./scripts/bootstrap-ansible.sh

The openstack-ansible binary is installed by this script, so until it finishes successfully you won't have the openstack-ansible binary.

You can also join #openstack-ansible IRC channel on Freenode for more help if needed.

Revision history for this message
Dmitriy Rabotyagov (noonedeadpunk) wrote :

Going further: openstack-ansible is actually a pretty simple wrapper around the ansible-playbook and ansible binaries, whose source code is https://opendev.org/openstack/openstack-ansible/src/branch/master/scripts/openstack-ansible.sh
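The dispatch idea behind such a wrapper can be sketched as below. This is a simplified assumption, not the real script: the actual scripts/openstack-ansible.sh also sources the runtime environment and inspects the arguments more carefully.

```shell
# Minimal sketch of the wrapper's dispatch idea (assumption, simplified):
# route to ansible-playbook when the first argument looks like a playbook
# file, otherwise to the plain ansible binary.
choose_binary() {
    case "$1" in
        *.yml|*.yaml) echo "ansible-playbook" ;;
        *)            echo "ansible" ;;
    esac
}

choose_binary "setup-infrastructure.yml"   # prints: ansible-playbook
choose_binary "all"                        # prints: ansible
```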

Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

Thanks for the input!

It really was the case: NOTABUG as such, but definitely a documentation inconsistency.

While looking into the guide from here:
https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/deploymenthost.html

I do not see any instruction to install the rsync package; the guide only lists:

# dnf install https://repos.fedorapeople.org/repos/openstack/openstack-victoria/rdo-release-victoria.el8.rpm
# dnf install git chrony openssh-server python3-devel sudo
# dnf group install "Development Tools"
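Until the docs are fixed, a quick pre-flight check can catch the gap before the bootstrap fails partway through. The tool list here is an assumption derived from the quoted guide (only the CLI entry points are checked), with rsync added because its absence is what broke bootstrap-ansible.sh in this report:

```shell
# Pre-flight check for the deployment host (hedged suggestion, not the
# official doc text): report any required CLI tool that is missing
# instead of failing later mid-bootstrap.
for cmd in git python3 sudo rsync; do
    command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
```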

Changed in openstack-ansible:
status: Incomplete → In Progress
Changed in openstack-ansible:
importance: Undecided → High
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/openstack-ansible 22.0.1

This issue was fixed in the openstack/openstack-ansible 22.0.1 release.

Changed in openstack-ansible:
status: In Progress → Fix Released
Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

I am having the issue with the 22.0.1 RC release.

I am unable to run the playbook; what is wrong here?
I am using Ubuntu 20.04 now, and I am getting:

root@iaas-infra:/opt/openstack-ansible/playbooks# openstack-ansible setup-infrastructure.yml --syntax-check
openstack-ansible: command not found

root@iaas-infra:/opt/openstack-ansible/playbooks# ls -l
total 328
-rw-r--r-- 1 root root 4452 Feb 13 12:52 ceph-install.yml
-rw-r--r-- 1 root root 1840 Feb 13 12:52 ceph-nfs-install.yml
-rw-r--r-- 1 root root 2265 Feb 13 12:52 ceph-rgw-install.yml
-rw-r--r-- 1 root root 3724 Feb 13 12:52 ceph-rgw-keystone-setup.yml
drwxr-xr-x 2 root root 4096 Feb 13 12:52 common-playbooks
drwxr-xr-x 2 root root 4096 Feb 13 12:52 common-tasks
-rw-r--r-- 1 root root 778 Feb 13 12:52 containers-deploy.yml
-rw-r--r-- 1 root root 2700 Feb 13 12:52 containers-lxc-create.yml
-rw-r--r-- 1 root root 2715 Feb 13 12:52 containers-lxc-destroy.yml
-rw-r--r-- 1 root root 2210 Feb 13 12:52 containers-lxc-host.yml
-rw-r--r-- 1 root root 3564 Feb 13 12:52 containers-nspawn-create.yml
-rw-r--r-- 1 root root 3546 Feb 13 12:52 containers-nspawn-destroy.yml
-rw-r--r-- 1 root root 1056 Feb 13 12:52 containers-nspawn-host.yml
drwxr-xr-x 3 root root 4096 Feb 13 12:52 defaults
-rw-r--r-- 1 root root 1273 Feb 13 12:52 etcd-install.yml
-rw-r--r-- 1 root root 2089 Feb 13 12:52 galera-install.yml
-rw-r--r-- 1 root root 1887 Feb 13 12:52 haproxy-install.yml
-rw-r--r-- 1 root root 3578 Feb 13 12:52 healthcheck-hosts.yml
-rw-r--r-- 1 root root 11350 Feb 13 12:52 healthcheck-infrastructure.yml
-rw-r--r-- 1 root root 18215 Feb 13 12:52 healthcheck-openstack.yml
-rw-r--r-- 1 root root 4494 Feb 13 12:52 infra-journal-remote.yml
drwxr-xr-x 2 root root 4096 Feb 13 12:52 library
-rw-r--r-- 1 root root 1160 Feb 13 12:52 listening-port-report.yml
lrwxrwxrwx 1 root root 25 Feb 13 12:52 lxc-containers-create.yml -> containers-lxc-create.yml
lrwxrwxrwx 1 root root 26 Feb 13 12:52 lxc-containers-destroy.yml -> containers-lxc-destroy.yml
lrwxrwxrwx 1 root root 23 Feb 13 12:52 lxc-hosts-setup.yml -> containers-lxc-host.yml
-rw-r--r-- 1 root root 1265 Feb 13 12:52 memcached-install.yml
-rw-r--r-- 1 root root 2959 Feb 13 12:52 openstack-hosts-setup.yml
-rw-r--r-- 1 root root 2145 Feb 13 12:52 os-adjutant-install.yml
-rw-r--r-- 1 root root 1302 Feb 13 12:52 os-aodh-install.yml
-rw-r--r-- 1 root root 1328 Feb 13 12:52 os-barbican-install.yml
-rw-r--r-- 1 root root 1381 Feb 13 12:52 os-blazar-install.yml
-rw-r--r-- 1 root root 1338 Feb 13 12:52 os-ceilometer-install.yml
-rw-r--r-- 1 root root 6840 Feb 13 12:52 os-cinder-install.yml
-rw-r--r-- 1 root root 1439 Feb 13 12:52 os-designate-install.yml
-rw-r--r-- 1 root root 6345 Feb 13 12:52 os-glance-install.yml
-rw-r--r-- 1 root root 1808 Feb 13 12:52 os-gnocchi-install.yml
-rw-r--r-- 1 root root 1363 Feb 13 12:52 os-heat-install.yml
-rw-r--r-- 1 root root 1382 Feb 13 12:52 os-horizon-install.yml
-rw-r--r-- 1 root root 1385 Feb 13 12:52 os-ironic-install.yml
-rw-r--r-- 1 root root 6103 Feb 13 12:52 os-keystone-install.yml
-rw-r--r-- 1 root root 1422 Feb 13 12:52 os-magnum-install.yml
-rw-r--r-- 1 root root 2628 Feb 13 12:52 os-...

Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

Does anyone know why I'm getting "openstack-ansible: command not found" even though openstack-ansible is cloned to the right location?

Thanks,

Revision history for this message
Dmitriy Rabotyagov (noonedeadpunk) wrote :

That can only happen if the run of ./scripts/bootstrap-ansible.sh failed before installing the required binaries. You should re-run this command and check that it has return code 0.
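That check can be written as a small gate. run_bootstrap below is a stand-in for ./scripts/bootstrap-ansible.sh, which on a real deploy host would be run from /opt/openstack-ansible:

```shell
# Pattern for the check: re-run the bootstrap and gate the next step on a
# zero exit code. run_bootstrap is a placeholder for the real script.
run_bootstrap() { return 0; }   # stand-in for ./scripts/bootstrap-ansible.sh

if run_bootstrap; then
    echo "bootstrap exited 0; openstack-ansible should now be on PATH"
else
    echo "bootstrap failed; fix the error above and re-run" >&2
fi
```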

Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

Thx,

But I'm still having the same results:

root@iaas-infra:/opt/openstack-ansible/scripts# bootstrap-ansible.sh
bootstrap-ansible.sh: command not found

root@iaas-infra:/opt/openstack-ansible/scripts# cd ..
root@iaas-infra:/opt/openstack-ansible# scripts/bootstrap-ansible.sh
+ export HTTP_PROXY=
+ HTTP_PROXY=
+ export HTTPS_PROXY=
+ HTTPS_PROXY=
+ export ANSIBLE_PACKAGE=ansible-base==2.10.3
+ ANSIBLE_PACKAGE=ansible-base==2.10.3
+ export ANSIBLE_ROLE_FILE=ansible-role-requirements.yml
+ ANSIBLE_ROLE_FILE=ansible-role-requirements.yml
+ export ANSIBLE_COLLECTION_FILE=ansible-collection-requirements.yml
+ ANSIBLE_COLLECTION_FILE=ansible-collection-requirements.yml
+ export USER_ROLE_FILE=user-role-requirements.yml
+ USER_ROLE_FILE=user-role-requirements.yml
+ export USER_COLLECTION_FILE=user-collection-requirements.yml
+ USER_COLLECTION_FILE=user-collection-requirements.yml
+ export SSH_DIR=/root/.ssh
+ SSH_DIR=/root/.ssh
+ export DEBIAN_FRONTEND=noninteractive
+ DEBIAN_FRONTEND=noninteractive
+ export SETUP_ARA=false
+ SETUP_ARA=false
+ export PIP_OPTS=
+ PIP_OPTS=
+ export OSA_WRAPPER_BIN=scripts/openstack-ansible.sh
+ OSA_WRAPPER_BIN=scripts/openstack-ansible.sh
++ dirname scripts/bootstrap-ansible.sh
+ cd scripts/..
+ info_block 'Checking for required libraries.'
+ source scripts/scripts-library.sh
++ LINE=----------------------------------------------------------------------
++ ANSIBLE_PARAMETERS=
+++ date +%s
++ STARTTIME=1613310059
++ COMMAND_LOGS=/openstack/log/ansible_cmd_logs
++ PIP_COMMAND=/opt/ansible-runtime/bin/pip
++ ZUUL_PROJECT=
++ GATE_EXIT_LOG_COPY=false
++ GATE_EXIT_LOG_GZIP=true
++ GATE_EXIT_RUN_ARA=true
++ GATE_EXIT_RUN_DSTAT=true
++ [[ -n '' ]]
++ '[' -z '' ']'
+++ grep -c '^processor' /proc/cpuinfo
++ CPU_NUM=4
++ '[' 4 -lt 10 ']'
++ ANSIBLE_FORKS=4
++ trap 'exit_fail 404 0 '\''Received STOP Signal'\''' SIGHUP SIGINT SIGTERM
++ trap 'exit_fail 405 0' ERR
+++ id -u
++ '[' 0 '!=' 0 ']'
++ '[' '!' -d etc -a '!' -d scripts -a '!' -d playbooks ']'
++ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
++ export HOME=/root
++ HOME=/root
++ [[ -f /usr/local/bin/openstack-ansible.rc ]]
+ info_block 'Bootstrapping System with Ansible'
+ echo ----------------------------------------------------------------------
----------------------------------------------------------------------
+ print_info 'Bootstrapping System with Ansible'
+ PROC_NAME='- [ Bootstrapping System with Ansible ] -'
+ printf '\n%s%s\n' '- [ Bootstrapping System with Ansible ] -' -----------------------------

- [ Bootstrapping System with Ansible ] ------------------------------
+ echo ----------------------------------------------------------------------
----------------------------------------------------------------------
++ pwd
+ export OSA_CLONE_DIR=/opt/openstack-ansible
+ OSA_CLONE_DIR=/opt/openstack-ansible
++ readlink -f ansible-role-requirements.yml
+ ANSIBLE_ROLE_FILE=/opt/openstack-ansible/ansible-role-requirements.yml...


Revision history for this message
Dmitriy Rabotyagov (noonedeadpunk) wrote :

It turns out this is because of a broken repo that was added to the deploy host:

E: The repository 'http://ppa.launchpad.net/chris-lea/munin-plugins/ubuntu focal Release' does not have a Release file.

We don't install such a repo; it is something that was installed on this host by you. You should remove all non-working repos from the host, and ideally use a clean system for the deploy host.

Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

Weird...

I have used the official guide.
I have rebuilt the test environment and started again with CentOS 8.
It would be good to move away from CentOS, as it is not going to last past 2022.

Thx,

Revision history for this message
Dmitriy Rabotyagov (noonedeadpunk) wrote :

I'd recommend using Ubuntu/Debian indeed, as the issue you had is entirely related to a third-party PPA that was added somehow. From what I see in the URL, it's related to the munin monitoring tool.

Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

Unfortunately I am getting the error even in the new environment under CentOS 8.
Everything was installed and updated today:

TASK [Append user overridden roles] *************************************************************************************************************************************************************************************************
[WARNING]: Unable to find '/etc/openstack_deploy/user-role-requirements.yml' in expected paths (use -vvvvv to see paths)
ok: [localhost]

TASK [Clone git repos (parallel)] ***************************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: Value of unknown type: <class 'multiprocessing.managers.ListProxy'>, [["Failed to reset /etc/ansible/roles/ceph-ansible\nCmd('git') failed due to: exit code(128)\n cmdline: git reset --hard 7d088320df1c4a6ed458866c61616a21fddccfe8\n stderr: 'fatal: Could not parse object '7d088320df1c4a6ed458866c61616a21fddccfe8'.'"], ["Failed to reset /etc/ansible/roles/ceph-ansible\nCmd('git') failed due to: exit code(129)\n cmdline: git reset --force --hard 7d088320df1c4a6ed458866c61616a21fddccfe8\n stderr: 'error: unknown option `force'\nusage: git reset [--mixed | --soft | --hard | --merge | --keep] [-q] [<commit>]\n or: git reset [-q] [<tree-ish>] [--] <pathspec>...\n or: git reset [-q] [--pathspec-from-file [--pathspec-file-nul]] [<tree-ish>]\n or: git reset --patch [<tree-ish>] [--] [<pathspec>...]\n\n -q, --quiet be quiet, only report errors\n --mixed reset HEAD and index\n --soft reset only HEAD\n --hard reset HEAD, index and working tree\n --merge reset HEAD, index and working tree\n --keep reset HEAD but keep local changes\n --recurse-submodules[=<reset>]\n control recursive updating of submodules\n -p, --patch select hunks interactively\n -N, --intent-to-add record only the fact that removed paths will be added later\n --pathspec-from-file <file>\n read pathspec from file\n --pathspec-file-nul with --pathspec-from-file, pathspec elements are separated with NUL character\n'"], ["Role {'name': 'ceph-ansible', 'scm': 'git', 'src': 'https://github.com/ceph/ceph-ansible', 'version': '7d088320df1c4a6ed458866c61616a21fddccfe8', 'trackbranch': 'stable-5.0', 'path': '/etc/ansible/roles', 'refspec': None, 'depth': 10, 'dest': '/etc/ansible/roles/ceph-ansible'} failed after 2 retries\n"]]
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1613395122.0665104-7229-201694025777480/AnsiballZ_git_requirements.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1613395122.0665104-7229-201694025777480/AnsiballZ_git_requirements.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansi...

Revision history for this message
Dmitriy Rabotyagov (noonedeadpunk) wrote :

This task should eventually fall back to the non-parallel clone process, so the failure is not critical and the run should proceed.

Revision history for this message
Dmitriy Rabotyagov (noonedeadpunk) wrote :

What version are you checking out? I think we covered this issue and released the fix with 22.0.1: https://opendev.org/openstack/openstack-ansible/commit/3512f89b98b71f8ab1eaab27becc04f0534bea5a

Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

I cloned tag 22.0.0.0rc1 from the Victoria release.
I followed the guide at:
https://docs.openstack.org/project-deploy-guide/openstack-ansible/victoria/deploymenthost.html

Thx

Revision history for this message
Dmitriy Rabotyagov (noonedeadpunk) wrote :

This doc is supposed to check out the latest available tag from the repo, but that mechanism is broken at the moment. You should check out 22.0.1, as it's a stable release.
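A sketch of the tag checkout, demonstrated against a throwaway repo so it stays self-contained; on the deploy host the repo would be /opt/openstack-ansible and the stable tag 22.0.1:

```shell
# Build a tiny repo with an rc tag and a stable tag, then check out the
# stable tag (detached HEAD), which is what the guide intends.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=x -c user.email=x@x commit -q --allow-empty -m "rc"
git -C "$repo" tag 22.0.0.0rc1
git -C "$repo" -c user.name=x -c user.email=x@x commit -q --allow-empty -m "stable"
git -C "$repo" tag 22.0.1
git -C "$repo" checkout -q 22.0.1     # detached HEAD at the stable tag
git -C "$repo" describe --tags        # prints: 22.0.1
```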

Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

I checked out 22.0.1, but now the bootstrap is not completing, as follows:
git clone https://git.openstack.org/openstack/openstack-ansible \
    /opt/openstack-ansible

cd /opt/openstack-ansible

# Next, switch to the applicable branch/tag to be deployed from. Note that deploying from the head of a branch may result in an unstable build due to changes in flight and upstream OpenStack changes. For a test (i.e. not a development) build it is usually best to check out the latest tagged version.

git tag -l

git checkout stable/victoria
git describe --abbrev=0 --tags

git checkout 20.0.1

[root@iaas-infra openstack-ansible]# scripts/bootstrap-ansible.sh
+ export HTTP_PROXY=
+ HTTP_PROXY=
+ export HTTPS_PROXY=
+ HTTPS_PROXY=
+ export ANSIBLE_PACKAGE=ansible==2.8.5
+ ANSIBLE_PACKAGE=ansible==2.8.5
+ export ANSIBLE_ROLE_FILE=ansible-role-requirements.yml
+ ANSIBLE_ROLE_FILE=ansible-role-requirements.yml
+ export USER_ROLE_FILE=user-role-requirements.yml
+ USER_ROLE_FILE=user-role-requirements.yml
+ export SSH_DIR=/root/.ssh
+ SSH_DIR=/root/.ssh
+ export DEBIAN_FRONTEND=noninteractive
+ DEBIAN_FRONTEND=noninteractive
+ export SETUP_ARA=false
+ SETUP_ARA=false
+ export PIP_OPTS=
+ PIP_OPTS=
+ export OSA_WRAPPER_BIN=scripts/openstack-ansible.sh
+ OSA_WRAPPER_BIN=scripts/openstack-ansible.sh
++ dirname scripts/bootstrap-ansible.sh
+ cd scripts/..
+ info_block 'Checking for required libraries.'
+ source scripts/scripts-library.sh
++ LINE=----------------------------------------------------------------------
++ ANSIBLE_PARAMETERS=
+++ date +%s
++ STARTTIME=1613409281
++ COMMAND_LOGS=/openstack/log/ansible_cmd_logs
++ PIP_COMMAND=/opt/ansible-runtime/bin/pip
++ ZUUL_PROJECT=
++ GATE_EXIT_LOG_COPY=false
++ GATE_EXIT_LOG_GZIP=true
++ GATE_EXIT_RUN_ARA=true
++ GATE_EXIT_RUN_DSTAT=true
++ [[ -n '' ]]
++ '[' -z '' ']'
+++ grep -c '^processor' /proc/cpuinfo
++ CPU_NUM=4
++ '[' 4 -lt 10 ']'
++ ANSIBLE_FORKS=4
++ trap 'exit_fail 398 0 '\''Received STOP Signal'\''' SIGHUP SIGINT SIGTERM
++ trap 'exit_fail 399 0' ERR
+++ id -u
++ '[' 0 '!=' 0 ']'
++ '[' '!' -d etc -a '!' -d scripts -a '!' -d playbooks ']'
++ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
++ export HOME=/root
++ HOME=/root
++ [[ -f /usr/local/bin/openstack-ansible.rc ]]
+ info_block 'Bootstrapping System with Ansible'
+ echo ----------------------------------------------------------------------
----------------------------------------------------------------------
+ print_info 'Bootstrapping System with Ansible'
+ PROC_NAME='- [ Bootstrapping System with Ansible ] -'
+ printf '\n%s%s\n' '- [ Bootstrapping System with Ansible ] -' -----------------------------

- [ Bootstrapping System with Ansible ] ------------------------------
+ echo ----------------------------------------------------------------------
----------------------------------------------------------------------
++ pwd
+ export OSA_CLONE_DIR=/opt/openstack-ansible
+ OSA_CLONE_DIR=/opt/openstack-ansible
++ readlink -f ansible-role-re...

Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

[root@iaas-infra openstack-ansible]# scripts/bootstrap-ansible.sh
+ export HTTP_PROXY=
+ HTTP_PROXY=
+ export HTTPS_PROXY=
+ HTTPS_PROXY=
+ export ANSIBLE_PACKAGE=ansible==2.8.5
+ ANSIBLE_PACKAGE=ansible==2.8.5
+ export ANSIBLE_ROLE_FILE=ansible-role-requirements.yml
+ ANSIBLE_ROLE_FILE=ansible-role-requirements.yml
+ export USER_ROLE_FILE=user-role-requirements.yml
+ USER_ROLE_FILE=user-role-requirements.yml
+ export SSH_DIR=/root/.ssh
+ SSH_DIR=/root/.ssh
+ export DEBIAN_FRONTEND=noninteractive
+ DEBIAN_FRONTEND=noninteractive
+ export SETUP_ARA=false
+ SETUP_ARA=false
+ export PIP_OPTS=
+ PIP_OPTS=
+ export OSA_WRAPPER_BIN=scripts/openstack-ansible.sh
+ OSA_WRAPPER_BIN=scripts/openstack-ansible.sh
++ dirname scripts/bootstrap-ansible.sh
+ cd scripts/..
+ info_block 'Checking for required libraries.'
+ source scripts/scripts-library.sh
++ LINE=----------------------------------------------------------------------
++ ANSIBLE_PARAMETERS=
+++ date +%s
++ STARTTIME=1613409281
++ COMMAND_LOGS=/openstack/log/ansible_cmd_logs
++ PIP_COMMAND=/opt/ansible-runtime/bin/pip
++ ZUUL_PROJECT=
++ GATE_EXIT_LOG_COPY=false
++ GATE_EXIT_LOG_GZIP=true
++ GATE_EXIT_RUN_ARA=true
++ GATE_EXIT_RUN_DSTAT=true
++ [[ -n '' ]]
++ '[' -z '' ']'
+++ grep -c '^processor' /proc/cpuinfo
++ CPU_NUM=4
++ '[' 4 -lt 10 ']'
++ ANSIBLE_FORKS=4
++ trap 'exit_fail 398 0 '\''Received STOP Signal'\''' SIGHUP SIGINT SIGTERM
++ trap 'exit_fail 399 0' ERR
+++ id -u
++ '[' 0 '!=' 0 ']'
++ '[' '!' -d etc -a '!' -d scripts -a '!' -d playbooks ']'
++ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
++ export HOME=/root
++ HOME=/root
++ [[ -f /usr/local/bin/openstack-ansible.rc ]]
+ info_block 'Bootstrapping System with Ansible'
+ echo ----------------------------------------------------------------------
----------------------------------------------------------------------
+ print_info 'Bootstrapping System with Ansible'
+ PROC_NAME='- [ Bootstrapping System with Ansible ] -'
+ printf '\n%s%s\n' '- [ Bootstrapping System with Ansible ] -' -----------------------------

- [ Bootstrapping System with Ansible ] ------------------------------
+ echo ----------------------------------------------------------------------
----------------------------------------------------------------------
++ pwd
+ export OSA_CLONE_DIR=/opt/openstack-ansible
+ OSA_CLONE_DIR=/opt/openstack-ansible
++ readlink -f ansible-role-requirements.yml
+ ANSIBLE_ROLE_FILE=/opt/openstack-ansible/ansible-role-requirements.yml
++ readlink -f inventory
+ OSA_INVENTORY_PATH=/opt/openstack-ansible/inventory
++ readlink -f playbooks
+ OSA_PLAYBOOK_PATH=/opt/openstack-ansible/playbooks
+ ssh_key_create
+ key_path=/root/.ssh
+ key_file=/root/.ssh/id_rsa
+ '[' '!' -d /root/.ssh ']'
+ '[' '!' -f /root/.ssh/id_rsa -o '!' -f /root/.ssh/id_rsa.pub ']'
++ cat /root/.ssh/id_rsa.pub
+ key_content='ssh-rsa '
+ grep -q 'ssh-rsa ' /root/.ssh/authorized_keys
+ determine_distro
+ source /etc/os-release
+ export DISTRO_ID=centos
+ DIS...

Revision history for this message
Dmitriy Rabotyagov (noonedeadpunk) wrote :

Wait, it seems you're currently on 20.0.1 and not 22.0.1. 20.0.1 is the Train release, which does not support CentOS 8.

You can join us in IRC on Freenode at #openstack-ansible for further community support.

Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

thx,

Oops, I didn't notice that. I changed to 22.0.1.

fatal: [infra1_cinder_api_container-f4d803c7 -> 172.29.236.11]: FAILED! => {"attempts": 3, "changed": true, "cmd": ["lxc-start", "--daemon", "--name", "infra1_cinder_api_container-f4d803c7", "--logfile", "/var/log/lxc/lxc-infra1_cinder_api_container-f4d803c7.log", "--logpriority", "INFO"], "delta": "0:00:00.138456", "end": "2021-02-15 20:42:49.798963", "msg": "non-zero return code", "rc": 1, "start": "2021-02-15 20:42:49.660507", "stderr": "lxc-start: infra1_cinder_api_container-f4d803c7: lxccontainer.c: wait_on_daemonized_start: 851 Received container state \"ABORTING\" instead of \"RUNNING\"\nlxc-start: infra1_cinder_api_container-f4d803c7: tools/lxc_start.c: main: 329 The container failed to start\nlxc-start: infra1_cinder_api_container-f4d803c7: tools/lxc_start.c: main: 332 To get more details, run the container in foreground mode\nlxc-start: infra1_cinder_api_container-f4d803c7: tools/lxc_start.c: main: 335 Additional information can be obtained by setting the --logfile and --logpriority options", "stderr_lines": ["lxc-start: infra1_cinder_api_container-f4d803c7: lxccontainer.c: wait_on_daemonized_start: 851 Received container state \"ABORTING\" instead of \"RUNNING\"", "lxc-start: infra1_cinder_api_container-f4d803c7: tools/lxc_start.c: main: 329 The container failed to start", "lxc-start: infra1_cinder_api_container-f4d803c7: tools/lxc_start.c: main: 332 To get more details, run the container in foreground mode", "lxc-start: infra1_cinder_api_container-f4d803c7: tools/lxc_start.c: main: 335 Additional information can be obtained by setting the --logfile and --logpriority options"], "stdout": "", "stdout_lines": []}
fatal: [infra1_glance_container-c1fd36c0 -> 172.29.236.11]: FAILED! => {"attempts": 3, "changed": true, "cmd": ["lxc-start", "--daemon", "--name", "infra1_glance_container-c1fd36c0", "--logfile", "/var/log/lxc/lxc-infra1_glance_container-c1fd36c0.log", "--logpriority", "INFO"], "delta": "0:00:00.133058", "end": "2021-02-15 20:42:51.972123", "msg": "non-zero return code", "rc": 1, "start": "2021-02-15 20:42:51.839065", "stderr": "lxc-start: infra1_glance_container-c1fd36c0: lxccontainer.c: wait_on_daemonized_start: 851 Received container state \"ABORTING\" instead of \"RUNNING\"\nlxc-start: infra1_glance_container-c1fd36c0: tools/lxc_start.c: main: 329 The container failed to start\nlxc-start: infra1_glance_container-c1fd36c0: tools/lxc_start.c: main: 332 To get more details, run the container in foreground mode\nlxc-start: infra1_glance_container-c1fd36c0: tools/lxc_start.c: main: 335 Additional information can be obtained by setting the --logfile and --logpriority options", "stderr_lines": ["lxc-start: infra1_glance_container-c1fd36c0: lxccontainer.c: wait_on_daemonized_start: 851 Received container state \"ABORTING\" instead of \"RUNNING\"", "lxc-start: infra1_glance_container-c1fd36c0: tools/lxc_start.c: main: 329 The container failed to start", "lxc-start: infra1_glance_container-c1fd36c0: tools/lxc_start.c: main: 332 To get more details, run the container in foreground mode", "lxc-start: infra1_glance_container-c1fd36c0: tools/lxc_start.c: main: 335 Additional informatio...


Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

PLAY RECAP **************************************************************************************************************************************************************************************************************************
compute1 : ok=36 changed=21 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
infra1 : ok=129 changed=67 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
infra1_cinder_api_container-f4d803c7 : ok=51 changed=31 unreachable=0 failed=9 skipped=3 rescued=0 ignored=0
infra1_galera_container-f3f3a747 : ok=60 changed=36 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
infra1_glance_container-c1fd36c0 : ok=51 changed=31 unreachable=0 failed=9 skipped=3 rescued=0 ignored=0
infra1_heat_api_container-4860015b : ok=60 changed=36 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
infra1_horizon_container-2b5927e9 : ok=60 changed=36 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
infra1_keystone_container-bf413f90 : ok=60 changed=36 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
infra1_memcached_container-e9293432 : ok=60 changed=36 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
infra1_neutron_server_container-822db607 : ok=60 changed=36 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
infra1_nova_api_container-5a395368 : ok=60 changed=36 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
infra1_placement_container-1fcc5b8b : ok=60 changed=36 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
infra1_rabbit_mq_container-27196009 : ok=60 changed=36 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
infra1_repo_container-fd3adedf : ok=60 changed=36 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
infra1_utility_container-34fc087f : ok=60 changed=36 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
storage1 : ok=36 changed=21 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0

Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

I think I understand what is wrong here.

I am running on an ESXi hypervisor; the NICs I have are as follows:
ens160 mgmt
ens192 vxlan
ens224 storage

I also set up the bridges as the guide states.

The only difference is that I do not use VLAN sub-interfaces at all, e.g. ens160.10, ens192.20, and so on.
In my case I have a virtual switch that takes care of the VLAN tagging,
so I do VST (virtual switch tagging) and not VGT (virtual guest tagging).

I strongly believe that the Ansible playbooks are failing because of this.

Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :

I have set up the interfaces with tagging as guided, but unfortunately I still get the same error:

fatal: [infra1_cinder_api_container-a65c2560]: FAILED! => {"attempts": 5, "changed": false, "module_stderr": "lxc-attach: infra1_cinder_api_container-a65c2560: attach.c: lxc_attach: 1042 Failed to get init pid\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
fatal: [infra1_glance_container-fcc7b2b2]: FAILED! => {"attempts": 5, "changed": false, "module_stderr": "lxc-attach: infra1_glance_container-fcc7b2b2: attach.c: lxc_attach: 1042 Failed to get init pid\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1}

fatal: [infra1_cinder_api_container-a65c2560 -> 172.29.236.11]: FAILED! => {"attempts": 3, "changed": true, "cmd": ["lxc-start", "--daemon", "--name", "infra1_cinder_api_container-a65c2560", "--logfile", "/var/log/lxc/lxc-infra1_cinder_api_container-a65c2560.log", "--logpriority", "INFO"], "delta": "0:00:00.146285", "end": "2021-02-16 18:37:40.312453", "msg": "non-zero return code", "rc": 1, "start": "2021-02-16 18:37:40.166168", "stderr": "lxc-start: infra1_cinder_api_container-a65c2560: lxccontainer.c: wait_on_daemonized_start: 851 Received container state \"ABORTING\" instead of \"RUNNING\"\nlxc-start: infra1_cinder_api_container-a65c2560: tools/lxc_start.c: main: 329 The container failed to start\nlxc-start: infra1_cinder_api_container-a65c2560: tools/lxc_start.c: main: 332 To get more details, run the container in foreground mode\nlxc-start: infra1_cinder_api_container-a65c2560: tools/lxc_start.c: main: 335 Additional information can be obtained by setting the --logfile and --logpriority options", "stderr_lines": ["lxc-start: infra1_cinder_api_container-a65c2560: lxccontainer.c: wait_on_daemonized_start: 851 Received container state \"ABORTING\" instead of \"RUNNING\"", "lxc-start: infra1_cinder_api_container-a65c2560: tools/lxc_start.c: main: 329 The container failed to start", "lxc-start: infra1_cinder_api_container-a65c2560: tools/lxc_start.c: main: 332 To get more details, run the container in foreground mode", "lxc-start: infra1_cinder_api_container-a65c2560: tools/lxc_start.c: main: 335 Additional information can be obtained by setting the --logfile and --logpriority options"], "stdout": "", "stdout_lines": []}
fatal: [infra1_glance_container-fcc7b2b2 -> 172.29.236.11]: FAILED! => {"attempts": 3, "changed": true, "cmd": ["lxc-start", "--daemon", "--name", "infra1_glance_container-fcc7b2b2", "--logfile", "/var/log/lxc/lxc-infra1_glance_container-fcc7b2b2.log", "--logpriority", "INFO"], "delta": "0:00:00.159931", "end": "2021-02-16 18:37:40.387650", "msg": "non-zero return code", "rc": 1, "start": "2021-02-16 18:37:40.227719", "stderr": "lxc-start: infra1_glance_container-fcc7b2b2: lxccontainer.c: wait_on_daemonized_start: 851 Received container state \"ABORTING\" instead of \"RUNNING\"\nlxc-start: infra1_glance_container-fcc7b2b2: tools/lxc_start.c: main: 329 The container failed to start\nlxc-start: infra1_glance_container-fcc7b2b2: tools/lxc_start.c: main: 332 To get more details, run the container in foreground mode\nlxc-start: infra1_glance...
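The fatal lines above pack the actual LXC error into a JSON result blob, and the error text itself names the next step: rerun the container in foreground mode. A minimal triage sketch (the `result` here-string is a shortened stand-in for the captured Ansible output; the container name and `lxc-start` flags are the standard ones from the log above, not anything OSA-specific):

```shell
# Pull the module_stderr field out of an Ansible "FAILED!" JSON result so the
# underlying LXC error is readable on its own.
result='{"attempts": 5, "changed": false, "module_stderr": "lxc-attach: lxc_attach: 1042 Failed to get init pid\n", "rc": 1}'
printf '%s' "$result" | python3 -c 'import json,sys; print(json.load(sys.stdin)["module_stderr"], end="")'
# On the host itself, the aborting container can then be rerun interactively:
#   lxc-start --name infra1_cinder_api_container-a65c2560 --foreground --logpriority DEBUG
```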


Revision history for this message
phonix6 (phonix6-deactivatedaccount-deactivatedaccount) wrote :
Download full text (44.3 KiB)

Here are the logs; it seems that some of the containers are not coming up:

lxc-start infra1_glance_container-7d4b3bb6 20210216181454.863 INFO lxccontainer - lxccontainer.c:do_lxcapi_start:971 - Set process title to [lxc monitor] /var/lib/lxc infra1_glance_container-7d4b3bb6
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO lsm - lsm/lsm.c:lsm_init:50 - LSM security driver SELinux
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:parse_config_v2:759 - Processing "reject_force_umount # comment this to allow umount -f; not recommended"
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for reject_force_umount action 0(kill)
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for reject_force_umount action 0(kill)
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for reject_force_umount action 0(kill)
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for reject_force_umount action 0(kill)
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:parse_config_v2:759 - Processing "[all]"
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:parse_config_v2:759 - Processing "kexec_load errno 1"
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for kexec_load action 327681(errno)
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for kexec_load action 327681(errno)
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for kexec_load action 327681(errno)
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for kexec_load action 327681(errno)
lxc-start infra1_glance_container-7d4b3bb6 20210216181454.865 INFO seccomp - seccomp.c:parse_config_v2:759 - Processing "open_by_handle_at errno 1"
lxc-start infra1_glance_container-7d4b3bb6 20210...
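The INFO lines above are mostly seccomp setup noise; the reason a container aborts is normally carried by the ERROR-level lines in the per-container logfile. A self-contained sketch of that filtering step (the sample log lines are fabricated stand-ins; on a real host you would grep /var/log/lxc/lxc-infra1_glance_container-7d4b3bb6.log):

```shell
# Filter an lxc logfile down to the ERROR lines that explain an ABORTING start.
cat > /tmp/sample-lxc.log <<'EOF'
lxc-start infra1_glance 20210216181454.865 INFO seccomp - seccomp.c:parse_config_v2:759 - Processing "[all]"
lxc-start infra1_glance 20210216181455.102 ERROR conf - conf.c:run_buffer:323 - Script exited with status 1
lxc-start infra1_glance 20210216181455.103 ERROR start - start.c:lxc_init:804 - Failed to run lxc.hook.pre-start
EOF
grep ' ERROR ' /tmp/sample-lxc.log
```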

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/openstack-ansible 23.0.0.0b1

This issue was fixed in the openstack/openstack-ansible 23.0.0.0b1 development milestone.
