ipa job mirror issue tracker for master and wallaby
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
tripleo | Fix Released | High | Unassigned |
Bug Description
The FS064, FS039 and standalone-ipa jobs fail with the following error during IPA preparation on the master and wallaby branches.
```
2022-07-20 18:52:03 | + sudo dnf install -yq ansible-core ansible-tripleo-ipa
2022-07-20 18:52:06 | Error: Error downloading packages:
2022-07-20 18:52:06 | ansible-
2022-07-20 18:52:06 |
```
or
```
2022-07-20 18:48:28 | + sudo dnf install -yq ansible-core ansible-tripleo-ipa
2022-07-20 18:48:31 | Error: Error downloading packages:
2022-07-20 18:48:31 | ansible-
2022-07-20 18:48:31 |
```
We have seen either of the above two errors at the same task.
```
TASK [freeipa-setup : Run the tripleo-ipa preparation script] ******************
2022-07-20 18:52:01.985847 | primary | Wednesday 20 July 2022 18:52:01 -0400 (0:00:02.803) 0:19:21.086 ********
2022-07-20 18:52:06.616031 | primary | fatal: [supplemental]: FAILED! => {"changed": true, "cmd": "set -o pipefail && ~cloud-
```
Note: This is a tracker bug for this issue.
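For anyone triaging this on a held node, a quick check along these lines might show whether the mirror itself is down or only its name stops resolving. This is a sketch: the mirror hostname below is a placeholder, and the real baseurl should be read from the job's /etc/yum.repos.d.
```
# Hypothetical triage commands for a held node; MIRROR_HOST is a
# placeholder -- take the real hostname from /etc/yum.repos.d/*.repo.
grep -h '^baseurl' /etc/yum.repos.d/*.repo

# Re-run the failing install verbosely to see which URL dnf gives up on.
sudo dnf install -y -v ansible-core ansible-tripleo-ipa 2>&1 | tail -n 40

# Check name resolution and HTTP reachability of the mirror separately.
MIRROR_HOST=mirror.regionone.rdoproject.org   # placeholder
getent hosts "$MIRROR_HOST"
curl -sSIL "http://${MIRROR_HOST}/" | head -n 5
```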
summary:
- ipa job mirror issue tracker in integration line
+ ipa job mirror issue tracker for master and wallaby
My understanding of what's happening:
1. IPA deployment is working fine; packages are installed and IPA is configured:
https://logserver.rdoproject.org/openstack-periodic-integration-stable1/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-ovb-3ctlr_1comp_1supp-featureset039-wallaby/481c0be/logs/supplemental/home/cloud-user/deploy_freeipa.log.txt.gz
2. In the next package installation attempt (part of the tripleo-ipa preparation script), it fails to reach the package mirrors:
https://logserver.rdoproject.org/openstack-periodic-integration-stable1/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-ovb-3ctlr_1comp_1supp-featureset039-wallaby/481c0be/logs/supplemental/home/cloud-user/ipa_prep.sh.log.txt.gz
I'd say step 1 is the root cause; I guess it's changing the DNS configuration or something like that. Would it be possible to hold a node for that job to inspect the state of the server after the failure?
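A minimal way to test that theory on a held node, assuming the FreeIPA setup rewrites the resolver configuration, would be to snapshot /etc/resolv.conf before the deploy script and diff it afterwards (the mirror hostname below is again a placeholder):
```
# Sketch, assuming deploy_freeipa.sh changes the resolver config;
# MIRROR_HOST is a placeholder for the real mirror hostname.
cp /etc/resolv.conf /tmp/resolv.conf.before

# ... run the FreeIPA deployment (deploy_freeipa.sh) here ...

# Show what changed in the resolver configuration.
diff -u /tmp/resolv.conf.before /etc/resolv.conf

# Then check whether the mirror host still resolves through the
# nameserver that is now configured.
MIRROR_HOST=mirror.regionone.rdoproject.org   # placeholder
dig +short "$MIRROR_HOST"
```
If the diff shows the nameserver pointing at the IPA server after the deployment and the mirror host no longer resolves, that would confirm step 1 as the root cause.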