ERROR: 170768 -- ERROR configuring rabbit_init_tasks

Bug #2029476 reported by umamaheswar Reddy
This bug affects 1 person
Affects: tripleo
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

ERROR: 171001 -- ['/usr/bin/podman', 'run', '--user', '0', '--name', 'container-puppet-rabbit_init_tasks', '--env', 'PUPPET_TAGS=file,file_line,concat,augeas,cron,rabbitmq_policy,rabbitmq_user', '--env', 'NAME=rabbit_init_tasks', '--env', 'HOSTNAME=tplosp-cntl-s01', '--env', 'NO_ARCHIVE=true', '--env', 'STEP=2', '--env', 'NET_HOST=true', '--env', 'DEBUG=False', '--volume', '/etc/localtime:/etc/localtime:ro', '--volume', '/tmp/tmpg6i8unbv:/etc/config.pp:ro', '--volume', '/etc/puppet/:/tmp/puppet-etc/:ro', '--volume', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '--volume', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '--volume', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '--volume', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '--volume', '/var/lib/config-data:/var/lib/config-data/:rw', '--volume', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '--volume', '/var/lib/container-puppet/puppetlabs/:/opt/puppetlabs/:ro', '--volume', '/dev/log:/dev/log:rw', '--rm', '--log-driver', 'k8s-file', '--log-opt', 'path=/var/log/containers/stdouts/container-puppet-rabbit_init_tasks.log', '--security-opt', 'label=disable', '--volume', '/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', '--volume', '/var/lib/config-data/rabbitmq/etc/rabbitmq/:/etc/rabbitmq/:ro', '--volume', '/var/lib/rabbitmq:/var/lib/rabbitmq:z', '--entrypoint', '/var/lib/container-puppet/container-puppet.sh', '--net', 'host', '--volume', '/etc/hosts:/etc/hosts:ro', '--volume', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 'tclosp.ctlplane.localdomain:8787/tripleotraincentos8/centos-binary-rabbitmq:current-tripleo'] run failed after
+ mkdir -p /etc/puppet
+ cp -dR /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hieradata /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet
+ rm -Rf /etc/puppet/ssl
+ echo '{"step": 2}'
+ TAGS=
+ '[' -n file,file_line,concat,augeas,cron,rabbitmq_policy,rabbitmq_user ']'
+ TAGS='--tags "file,file_line,concat,augeas,cron,rabbitmq_policy,rabbitmq_user"'
+ '[' '!' -z ']'
+ CHECK_MODE=
+ '[' -d /tmp/puppet-check-mode ']'
+ conf_data_path=/var/lib/config-data/rabbit_init_tasks
+ origin_of_time=/var/lib/config-data/rabbit_init_tasks.origin_of_time
+ touch /var/lib/config-data/rabbit_init_tasks.origin_of_time
+ sync
+ export NET_HOST=true
+ NET_HOST=true
+ set +e
+ '[' true == false ']'
+ export FACTER_deployment_type=containers
+ FACTER_deployment_type=containers
++ cat /sys/class/dmi/id/product_uuid
++ tr '[:upper:]' '[:lower:]'
+ export FACTER_uuid=4c4c4544-0052-5310-8043-b8c04f383432
+ FACTER_uuid=4c4c4544-0052-5310-8043-b8c04f383432
+ echo 'Running puppet'
+ set -x
+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags '"file,file_line,concat,augeas,cron,rabbitmq_policy,rabbitmq_user"' /etc/config.pp
+ logger -s -t puppet-user
<13>Aug 2 21:17:18 puppet-user: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
<13>Aug 2 21:17:18 puppet-user: (file: /etc/puppet/hiera.yaml)
<13>Aug 2 21:17:18 puppet-user: Warning: Undefined variable '::deploy_config_name'; (file & line not available)
<13>Aug 2 21:17:18 puppet-user: Warning: ModuleLoader: module 'tripleo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules (file & line not available)
<13>Aug 2 21:17:18 puppet-user: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/5.5/deprecated_language.html (file & line not available)
<13>Aug 2 21:17:18 puppet-user: Notice: Compiled catalog for tplosp-cntl-s01.localdomain in environment production in 1.12 seconds
<13>Aug 2 21:17:18 puppet-user: Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}1030abc4db405b5f2969643e99bc7435'
<13>Aug 2 21:17:18 puppet-user: Notice: /Stage[main]/Rabbitmq::Config/Systemd::Service_limits[rabbitmq-server.service]/Systemd::Dropin_file[rabbitmq-server.service-90-limits.conf]/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created
<13>Aug 2 21:17:18 puppet-user: Notice: /Stage[main]/Rabbitmq::Config/Systemd::Service_limits[rabbitmq-server.service]/Systemd::Dropin_file[rabbitmq-server.service-90-limits.conf]/File[/etc/systemd/system/rabbitmq-server.service.d/90-limits.conf]/ensure: defined content as '{md5}0f5523d08a1441f44070144708e774c4'
<13>Aug 2 21:17:18 puppet-user: Error: /Stage[main]/Tripleo::Profile::Base::Rabbitmq/Rabbitmq_policy[ha-all@/]: Could not evaluate: Command is still failing after 180 seconds expired!
<13>Aug 2 21:17:18 puppet-user: Error: Failed to apply catalog: Command is still failing after 180 seconds expired!
<13>Aug 2 21:17:18 puppet-user: Changes:
<13>Aug 2 21:17:18 puppet-user: Total: 3
<13>Aug 2 21:17:18 puppet-user: Events:
<13>Aug 2 21:17:18 puppet-user: Failure: 1
<13>Aug 2 21:17:18 puppet-user: Success: 3
<13>Aug 2 21:17:18 puppet-user: Total: 4
<13>Aug 2 21:17:18 puppet-user: Resources:
<13>Aug 2 21:17:18 puppet-user: Failed: 1
<13>Aug 2 21:17:18 puppet-user: Changed: 3
<13>Aug 2 21:17:18 puppet-user: Out of sync: 4
<13>Aug 2 21:17:18 puppet-user: Skipped: 5
<13>Aug 2 21:17:18 puppet-user: Total: 15
<13>Aug 2 21:17:18 puppet-user: Time:
<13>Aug 2 21:17:18 puppet-user: File: 0.01
<13>Aug 2 21:17:18 puppet-user: Config retrieval: 1.39
<13>Aug 2 21:17:18 puppet-user: Last run: 1691011504
<13>Aug 2 21:17:18 puppet-user: Rabbitmq policy: 222.98
<13>Aug 2 21:17:18 puppet-user: Total: 446.09
<13>Aug 2 21:17:18 puppet-user: Version:
<13>Aug 2 21:17:18 puppet-user: Config: 1691011057
<13>Aug 2 21:17:18 puppet-user: Puppet: 5.5.10
+ rc=1
+ '[' False = false ']'
+ set -e
+ '[' 1 -ne 2 -a 1 -ne 0 ']'
+ exit 1
attempt(s): 3
2023-08-02 21:25:07,883 WARNING: 171001 -- Retrying running container: rabbit_init_tasks
2023-08-02 21:25:07,883 ERROR: 171001 -- Failed running container for rabbit_init_tasks
2023-08-02 21:25:07,883 INFO: 171001 -- Finished processing puppet configs for rabbit_init_tasks
2023-08-02 21:25:07,886 ERROR: 170768 -- ERROR configuring rabbit_init_tasks
2023-08-02 17:25:17.514297 | 0050569a-2556-1e60-adab-000000004457 | TIMING | Wait for container-puppet tasks (bootstrap tasks) for step 2 to finish | tplosp-cntl-s01 | 0:41:36.140864 | 1419.62s

PLAY RECAP *********************************************************************
tplosp-cmpt-s01 : ok=253 changed=112 unreachable=0 failed=0 skipped=126 rescued=0 ignored=0
tplosp-cntl-s01 : ok=276 changed=126 unreachable=0 failed=1 skipped=129 rescued=0 ignored=0
tplosp-cntl-s02 : ok=278 changed=127 unreachable=0 failed=0 skipped=131 rescued=0 ignored=0
tplosp-cntl-s03 : ok=278 changed=127 unreachable=0 failed=0 skipped=131 rescued=0 ignored=0
undercloud : ok=24 changed=11 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0

2023-08-02 17:25:17.671732 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-08-02 17:25:17.672420 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total Tasks: 1069 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-08-02 17:25:17.672847 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Elapsed Time: 0:41:36.299430 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-08-02 17:25:17.673310 | UUID | Info | Host | Task Name | Run Time
2023-08-02 17:25:17.673709 | 0050569a-2556-1e60-adab-000000004457 | SUMMARY | tplosp-cntl-s01 | Wait for container-puppet tasks (bootstrap tasks) for step 2 to finish | 1419.62s
2023-08-02 17:25:17.674255 | 0050569a-2556-1e60-adab-000000003bf2 | SUMMARY | tplosp-cntl-s01 | Pre-fetch all the containers | 231.87s
2023-08-02 17:25:17.674808 | 0050569a-2556-1e60-adab-000000003baa | SUMMARY | tplosp-cntl-s03 | Pre-fetch all the containers | 95.16s
2023-08-02 17:25:17.675276 | 0050569a-2556-1e60-adab-000000003a4b | SUMMARY | undercloud | Run tripleo-container-image-prepare logged to: /var/log/tripleo-container-image-prepare.log | 93.11s
2023-08-02 17:25:17.675681 | 0050569a-2556-1e60-adab-000000003b33 | SUMMARY | tplosp-cntl-s02 | Pre-fetch all the containers | 89.26s
2023-08-02 17:25:17.676103 | 0050569a-2556-1e60-adab-000000003bef | SUMMARY | tplosp-cntl-s01 | Run puppet on the host to apply IPtables rules | 86.17s
2023-08-02 17:25:17.676498 | 0050569a-2556-1e60-adab-000000003b0e | SUMMARY | tplosp-cmpt-s01 | Pre-fetch all the containers | 68.32s
2023-08-02 17:25:17.676894 | 0050569a-2556-1e60-adab-000000004196 | SUMMARY | tplosp-cntl-s01 | Wait for container-puppet tasks (generate config) to finish | 64.12s
2023-08-02 17:25:17.677355 | 0050569a-2556-1e60-adab-00000000414e | SUMMARY | tplosp-cntl-s01 | Wait for puppet host configuration to finish | 54.87s
2023-08-02 17:25:17.677728 | 0050569a-2556-1e60-adab-000000003ba7 | SUMMARY | tplosp-cntl-s03 | Run puppet on the host to apply IPtables rules | 41.18s
2023-08-02 17:25:17.678143 | 0050569a-2556-1e60-adab-000000003b30 | SUMMARY | tplosp-cntl-s02 | Run puppet on the host to apply IPtables rules | 39.22s
2023-08-02 17:25:17.678514 | 0050569a-2556-1e60-adab-000000004328 | SUMMARY | tplosp-cntl-s01 | Wait for puppet host configuration to finish | 33.23s
2023-08-02 17:25:17.678888 | 0050569a-2556-1e60-adab-000000004337 | SUMMARY | tplosp-cntl-s01 | Wait for containers to start for step 2 using paunch | 33.21s
2023-08-02 17:25:17.679309 | 0050569a-2556-1e60-adab-000000004017 | SUMMARY | tplosp-cntl-s03 | Wait for container-puppet tasks (generate config) to finish | 32.25s
2023-08-02 17:25:17.679785 | 0050569a-2556-1e60-adab-000000003fdc | SUMMARY | tplosp-cntl-s02 | Wait for container-puppet tasks (generate config) to finish | 32.24s
2023-08-02 17:25:17.680245 | 0050569a-2556-1e60-adab-000000003f8c | SUMMARY | tplosp-cntl-s03 | Wait for puppet host configuration to finish | 31.89s
2023-08-02 17:25:17.680599 | 0050569a-2556-1e60-adab-00000000428c | SUMMARY | tplosp-cmpt-s01 | Wait for containers to start for step 2 using paunch | 31.68s
2023-08-02 17:25:17.680956 | 0050569a-2556-1e60-adab-000000003f44 | SUMMARY | tplosp-cntl-s02 | Wait for puppet host configuration to finish | 31.45s
2023-08-02 17:25:17.681440 | 0050569a-2556-1e60-adab-00000000415d | SUMMARY | tplosp-cntl-s01 | Wait for containers to start for step 1 using paunch | 22.16s
2023-08-02 17:25:17.681806 | 0050569a-2556-1e60-adab-000000003d4d | SUMMARY | tplosp-cmpt-s01 | Wait for puppet host configuration to finish | 21.66s
2023-08-02 17:25:17.682185 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ End Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-08-02 17:25:17.682620 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ State Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-08-02 17:25:17.682992 | ~~~~~~~~~~~~~~~~~~ Number of nodes which did not deploy successfully: 1 ~~~~~~~~~~~~~~~~~
2023-08-02 17:25:17.683412 | The following node(s) had failures: tplosp-cntl-s01
2023-08-02 17:25:17.683754 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ansible failed, check log at /var/lib/mistral/overcloud/ansible.log.
Overcloud Endpoint: http://10.0.120.97:5000
Overcloud Horizon Dashboard URL: http://10.0.120.97:80/dashboard
Overcloud rc file: /home/stack/overcloudrc
Overcloud Deployed with error

tags: added: tripleo-common
Revision history for this message
umamaheswar Reddy (chinna91826) wrote :

I have deployed the undercloud without errors, using the container namespace "quay.io/tripleotraincentos8" and the stable Train release on CentOS Stream 8.
Now I am seeing an issue during overcloud deployment: it fails on one of the controllers with the error "Failed running container for rabbit_init_tasks". Could you please suggest a solution?
The rabbitmq container fails only on this one controller; on the other controllers it works fine. We replaced the server and tried again, and it still fails on the same controller.

Revision history for this message
Brendan Shephard (bshephar) wrote :
Revision history for this message
umamaheswar Reddy (chinna91826) wrote :

Hi Brendan,

Thanks for the response.

pcs status is not working.

Please find the attached rabbitmq logs.

Revision history for this message
Brendan Shephard (bshephar) wrote :

It looks like these are the rabbitmq logs from your director node?

We need to debug the failing controller node.

Can you run `pcs status` on one of the controllers, and collect the rabbitmq logs from the failing controller:
tplosp-cntl-s0

Revision history for this message
umamaheswar Reddy (chinna91826) wrote :

Hi Brendan,

The pcs status command is still not working on the controller. In /var/log/containers/rabbitmq on the failed controller I didn't find the rabbitmq logs, but I did find an erl-crash.dump. Please find the attached dump file, in case it helps.

Revision history for this message
Brendan Shephard (bshephar) wrote :

Pacemaker needs to be running for rabbitmq to work, so we'll need to debug pacemaker first. What's the error you get when you run `pcs status`?

Note that you need to run the command as root. `sudo pcs status`
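[Editor's note: a quick way to distinguish "pacemaker is broken" from "pacemaker was never installed" is to check whether the CLI exists at all before debugging cluster state. A minimal sketch; the `have_cmd` helper name is made up for illustration:]

```shell
# have_cmd NAME: succeed if NAME resolves to an executable in PATH.
# "pcs" is the pacemaker CLI this thread is trying to run.
have_cmd() {
    command -v "$1" >/dev/null 2>&1
}

# On the controller you might run (hypothetical usage):
#   have_cmd pcs || echo "pcs is not installed - check the pacemaker packages"
have_cmd sh && echo "sh found"
```

If `pcs` is missing entirely, the problem is package installation rather than cluster configuration, which is exactly what the rest of the thread converges on.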

Revision history for this message
Syed (syed-ali-m) wrote :

Hi Brendan, here's the output.
[stack@tplosp-cntl-s01 ~]$ sudo pcs status
sudo: pcs: command not found

Revision history for this message
Brendan Shephard (bshephar) wrote :

That's odd. Did you use Ironic to provision the overcloud nodes? Which image did you use to provision them? Was it one of these:
https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/

Revision history for this message
Syed (syed-ali-m) wrote :

Thanks Brendan for your prompt response. No, we are instead using pre-provisioned overcloud nodes.

https://trunk.rdoproject.org/centos8/component/tripleo/current/python3-tripleo-repos-0.1.1-0.20220620133357.8321b3f.el8.noarch.rpm

export OS_YAML="/usr/share/openstack-tripleo-common/image-yaml/overcloud-images-centos8.yaml"
export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean*"
export DIB_YUM_REPO_CONF="$DIB_YUM_REPO_CONF /etc/yum.repos.d/tripleo-centos-ceph*.repo"
export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean* /etc/yum.repos.d/tripleo-centos-*"
export STABLE_RELEASE="train"
export DIB_LOCAL_IMAGE=CentOS-Stream-GenericCloud-8-20230710.0.x86_64.qcow2
openstack overcloud image build --config-file /usr/share/openstack-tripleo-common/image-yaml/overcloud-images-python3.yaml --config-file $OS_YAML
openstack overcloud image upload

Revision history for this message
Luca Miccini (lmiccini2) wrote :

Hi, we just tried to replicate the issue in our env. Puppet should be able to install pacemaker if it is not already present; you should see something like this in the logs (you may have to set puppet debug to true to see the same output):

Debug: Package[pacemaker](provider=dnf): Ensuring => present
Debug: Executing: '/bin/dnf -d 0 -e 1 -y install pacemaker'
Error: Execution of '/bin/dnf -d 0 -e 1 -y install pacemaker' returned 1: Error: There are no enabled repositories in "/etc/yum.repos.d", "/etc/yum/repos.d", "/etc/distro.repos.d".
Error: /Stage[main]/Pacemaker::Install/Package[pacemaker]/ensure: change from 'purged' to 'present' failed: Execution of '/bin/dnf -d 0 -e 1 -y install pacemaker' returned 1: Error: There are no enabled repositories in "/etc/yum.repos.d","/etc/yum/repos.d", "/etc/distro.repos.d".

In our case we didn't enable any repo, but you get the gist of it.

It would be good to look at the environment's sosreports taken with --all-logs. Can you upload them somewhere?
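[Editor's note: the dnf error quoted above ("There are no enabled repositories") can be checked for directly on the failing controller. A minimal sketch, assuming the stock /etc/yum.repos.d layout; the `has_enabled_repo` helper name is made up for illustration:]

```shell
# has_enabled_repo DIR: succeed if any *.repo file under DIR enables a repo.
# dnf refuses to install pacemaker (as in the error above) when nothing
# under its repo directories has enabled=1.
has_enabled_repo() {
    grep -qs 'enabled[[:space:]]*=[[:space:]]*1' "$1"/*.repo
}

# On the controller (hypothetical usage):
#   has_enabled_repo /etc/yum.repos.d || echo "no enabled repos: dnf install pacemaker will fail"
```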

Revision history for this message
umamaheswar Reddy (chinna91826) wrote :

Hi Luca,

The sosreport logs are huge, nearly 750 MB. Let us know how we can upload them here.
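[Editor's note: one common way around attachment size limits is to split the sosreport tarball into chunks with coreutils `split` and reassemble it on the receiving side. A sketch with a small dummy file standing in for the real archive; all file names are placeholders:]

```shell
# Demo with a small dummy file; in practice ARCHIVE would be the ~750 MB
# sosreport tarball and the chunk size something like 100M.
ARCHIVE=demo-archive.bin
head -c 262144 /dev/urandom > "$ARCHIVE"

# Sender: split into fixed-size pieces named part_aa, part_ab, ...
split -b 65536 "$ARCHIVE" part_

# Receiver: concatenate the pieces in name order and verify nothing changed.
cat part_* > reassembled.bin
cmp "$ARCHIVE" reassembled.bin && echo "archive reassembled intact"
```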

Revision history for this message
umamaheswar Reddy (chinna91826) wrote :

Hi Luca,

I have added a few log files from the controller and director. Please check whether they are useful, or let me know which files you are looking for in the sosreport so that we can share them one by one.

Revision history for this message
umamaheswar Reddy (chinna91826) wrote :

Hi, I tried to install pacemaker using puppet; please find the debug output below.
[stack@tplosp-cntl-s01 ~]$ sudo puppet module install pacemaker --debug
Debug: Runtime environment: puppet_version=5.5.10, ruby_version=2.5.9, run_mode=user, default_encoding=UTF-8
Notice: Preparing to install into /etc/puppet/modules ...
Warning: fdio (/etc/puppet/modules/fdio) has an invalid version number (18.01). The version has been set to 0.0.0. If you are the maintainer for this module, please update the metadata.json with a valid Semantic Version (http://semver.org).
Debug: Facter: searching for custom fact "fips_enabled".
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/apache/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/archive/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/cassandra/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/collectd/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/firewall/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/git/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/haproxy/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/ipaclient/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/java/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/mongodb/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/mysql/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/nova/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/openstacklib/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/pacemaker/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/rabbitmq/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/redis/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/ssh/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/staging/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/stdlib/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/systemd/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/tripleo/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/vcsrepo/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /usr/share/openstack-puppet/modules/vswitch/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /var/lib/puppet/lib/facter.
Debug: Facter: searching for fips_enabled.rb in /var/lib/puppet/facts.
Debug: Facter: fact "facterversion" has resolved ...

Read more...

Revision history for this message
Takashi Kajinami (kajinamit) wrote :

In #9 you said you are using pre-provisioned nodes, but those commands are often used to provision nodes during deployment. Are you sure you are really using the pre-provisioned node method, installing the OS onto the baremetal nodes before you execute the overcloud deploy command?

The attached sos-logs contain only the log files of the sosreport command itself and do not include any useful information.

Also, the command you described in #13 is wrong. `puppet module install pacemaker` installs the puppet-pacemaker Puppet module, not the pacemaker package. You have to install the pacemaker PACKAGE:
 $ sudo dnf install -y pacemaker

I'd suggest you check the puppet logs in the journal/messages to find out why puppet is not installing the package properly.
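[Editor's note: the container-puppet.sh trace above pipes its output through `logger -s -t puppet-user`, so the same messages land in the host journal under that tag. A sketch of pulling the failures out of a syslog-style file; the `puppet_errors` helper name and the sample paths are illustrative:]

```shell
# puppet_errors FILE: print syslog lines where the puppet-user tag reported
# an Error or Warning, matching the "<13>... puppet-user: Error: ..." lines
# in the deploy log above.
puppet_errors() {
    grep -E 'puppet-user: (Error|Warning)' "$1"
}

# On the controller you would point it at the real logs, e.g.:
#   sudo journalctl -t puppet-user --no-pager | grep 'Error:'
#   puppet_errors /var/log/messages
```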

Revision history for this message
umamaheswar Reddy (chinna91826) wrote (last edit ):

Hi Luca,

Yes, we are using the pre-provisioned method (the director is a VM; the controller and compute are physical nodes). Please find the failed controller's journal logs attached.

Please have a look.
