ironic errors out during overcloud-deploy

Bug #1749707 reported by Matthias Runge
Affects   Status        Importance  Assigned to       Milestone
Ironic    Invalid       Undecided   Unassigned
tripleo   Fix Released  Critical    Quique Llorente

Bug Description

This has been seen on an OVB deployment.

The overcloud-prep-images step runs forever:
TASK [overcloud-prep-images : Create overcloud prep-images script] ************************************************************************************************************************************************
task path: /var/tmp/tripleo_local/usr/local/share/ansible/roles/overcloud-prep-images/tasks/create-scripts.yml:3
Thursday 15 February 2018 12:50:36 +0100 (0:00:00.137) 1:08:59.809 *****
changed: [undercloud] => {"changed": true, "checksum": "dc3c4046c781a587c2f44612ce51b5ec3868c454", "dest": "/home/stack/overcloud-prep-images.sh", "failed": false, "gid": 1001, "group": "stack", "md5sum": "df5b1b0edf886e830819ec8b6b7e833f", "mode": "0755", "owner": "stack", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 643, "src": "/home/stack/.ansible/tmp/ansible-tmp-1518695436.2-236182804244208/source", "state": "file", "uid": 1001}

TASK [overcloud-prep-images : Prepare the overcloud images for deploy] ********************************************************************************************************************************************
task path: /var/tmp/tripleo_local/usr/local/share/ansible/roles/overcloud-prep-images/tasks/overcloud-prep-images.yml:1
Thursday 15 February 2018 12:50:39 +0100 (0:00:03.191) 1:09:03.001 *****

At the same time, overcloud_prep_images.log contains:

2018-02-15 11:52:44 | Started Mistral Workflow tripleo.baremetal.v1.register_or_update. Execution ID: 43b583d1-aa2f-4184-b81b-25936c33ea8d
2018-02-15 11:52:44 |
2018-02-15 11:52:44 |
2018-02-15 11:52:44 | Nodes set to managed.
2018-02-15 11:52:44 | Successfully registered node UUID 506b96b4-e736-4c02-8229-e0d5efbcb0ee
2018-02-15 11:52:44 | Successfully registered node UUID d5f346a7-af70-40b7-9d7b-9a4390383594
2018-02-15 11:52:44 | + openstack overcloud node introspect --all-manageable
2018-02-15 11:52:47 | Waiting for messages on queue 'tripleo' with no timeout.
2018-02-15 12:17:07 | Waiting for introspection to finish...
2018-02-15 12:17:07 | Started Mistral Workflow tripleo.baremetal.v1.introspect_manageable_nodes. Execution ID: 1c12d0e7-18ed-4878-b0c1-6620d01481d0
2018-02-15 12:17:07 | Introspection of node d5f346a7-af70-40b7-9d7b-9a4390383594 timed out.
2018-02-15 12:17:07 | Introspection of node 506b96b4-e736-4c02-8229-e0d5efbcb0ee timed out.
2018-02-15 12:17:07 | Retrying 2 nodes that failed introspection. Attempt 2 of 3
2018-02-15 12:17:07 | Introspection of node 506b96b4-e736-4c02-8229-e0d5efbcb0ee completed. Status:SUCCESS. Errors:None
2018-02-15 12:17:07 | Introspection of node d5f346a7-af70-40b7-9d7b-9a4390383594 completed. Status:SUCCESS. Errors:None
2018-02-15 12:17:07 | Successfully introspected nodes.
2018-02-15 12:17:07 | Nodes introspected successfully.
2018-02-15 12:17:07 | Introspection completed.
2018-02-15 12:17:07 | + openstack overcloud node provide --all-manageable
2018-02-15 12:17:10 | Waiting for messages on queue 'tripleo' with no timeout.
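While the provide step hangs, it helps to watch provision states and conductor reservations from the undercloud. Below is a minimal polling sketch, assuming the Queens-era python-ironicclient v1 API; the client-construction keywords and the auth URL are assumptions, so take the real values from stackrc:

# Hypothetical diagnostic: print each node's provision_state and its
# conductor reservation (lock holder) while `overcloud node provide` runs.
import time

from ironicclient import client

# keyword names/values below are assumptions -- use the undercloud's stackrc
ironic = client.get_client('1',
                           os_username='admin',
                           os_password='<from stackrc>',
                           os_project_name='admin',
                           os_auth_url='http://192.168.24.2:5000/v3')

for _ in range(30):
    for node in ironic.node.list(detail=True):
        # `reservation` names the conductor host currently holding the lock
        print(node.uuid, node.provision_state, node.reservation)
    time.sleep(10)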

In the ironic app.log, I found:

2018-02-15 12:17:12.873 29033 DEBUG wsme.api [req-2d3878c7-04e7-4c03-ab0a-58265156b970 d084dc55b42346e6a2b036a088e2be7d 410a77c57a134350ab430e7886ac3256 - default default] Client-side error: Node d5f346a7-af70-40b7-9d7b-9a4390383594 is locked by host localhost.localdomain, please retry after the current operation is completed.
Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 226, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/ironic/conductor/manager.py", line 1385, in do_provisioning_action
    % action) as task:

  File "/usr/lib/python2.7/site-packages/ironic/conductor/task_manager.py", line 168, in acquire
    driver_name=driver_name, purpose=purpose)

  File "/usr/lib/python2.7/site-packages/ironic/conductor/task_manager.py", line 244, in __init__
    self.release_resources()

  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()

  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)

  File "/usr/lib/python2.7/site-packages/ironic/conductor/task_manager.py", line 227, in __init__
    self._lock()

  File "/usr/lib/python2.7/site-packages/ironic/conductor/task_manager.py", line 276, in _lock
    reserve_node()

  File "/usr/lib/python2.7/site-packages/retrying.py", line 68, in wrapped_f
    return Retrying(*dargs, **dkw).call(f, *args, **kw)

  File "/usr/lib/python2.7/site-packages/retrying.py", line 229, in call
    raise attempt.get()

  File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
    six.reraise(self.value[0], self.value[1], self.value[2])

  File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)

  File "/usr/lib/python2.7/site-packages/ironic/conductor/task_manager.py", line 269, in reserve_node
    self.node_id)

  File "/usr/lib/python2.7/site-packages/ironic/objects/node.py", line 290, in reserve
    db_node = cls.dbapi.reserve_node(tag, node_id)

  File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 147, in wrapper
    ectxt.value = e.inner_exc

  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()

  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)

  File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 135, in wrapper
    return f(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/ironic/db/sqlalchemy/api.py", line 298, in reserve_node
    host=node['reservation'])

NodeLocked: Node d5f346a7-af70-40b7-9d7b-9a4390383594 is locked by host localhost.localdomain, please retry after the current operation is completed.
 format_exception /usr/lib/python2.7/site-packages/wsme/api.py:222
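For context on the traceback above: ironic serializes operations on a node by writing the conductor's hostname into the node's reservation field, and the acquire path retries briefly (via the retrying library visible in the frames) before surfacing NodeLocked. A minimal sketch of that pattern, not ironic's actual code, with made-up names apart from NodeLocked:

# Illustrative-only sketch of the reserve-with-retry pattern seen in the
# traceback; the real reservation is an atomic DB update, not a dict.
from retrying import retry


class NodeLocked(Exception):
    """Another conductor already holds the node's reservation."""


RESERVATIONS = {}  # stand-in for the nodes.reservation DB column


@retry(retry_on_exception=lambda e: isinstance(e, NodeLocked),
       stop_max_attempt_number=3, wait_fixed=1000)
def reserve_node(tag, node_id):
    # succeeds only if no other conductor holds the lock; otherwise raise
    # NodeLocked so @retry waits and tries again, then finally gives up
    holder = RESERVATIONS.get(node_id)
    if holder is not None and holder != tag:
        raise NodeLocked('Node %s is locked by host %s' % (node_id, holder))
    RESERVATIONS[node_id] = tag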

Apparently, this results in Mistral errors:

{"error_message": "{\"debuginfo\":null,\"faultcode\":\"Client\",\"faultstring\":\"Node d5f346a7-af70-40b7-9d7b-9a4390383594 is locked by host localhost.localdomain, please retry after the current operation is co
mpleted.\"}"}
 log_http_response /usr/lib/python2.7/site-packages/ironicclient/common/http.py:292
2018-02-15 12:17:33.584 31770 ERROR ironicclient.common.http [req-df7226ad-d563-4dcd-9e81-dc2a9b65b562 d084dc55b42346e6a2b036a088e2be7d 410a77c57a134350ab430e7886ac3256 - default default] Error contacting Ironic
 server: Node d5f346a7-af70-40b7-9d7b-9a4390383594 is locked by host localhost.localdomain, please retry after the current operation is completed. (HTTP 409). Attempt 6 of 6: Conflict: Node d5f346a7-af70-40b7-9d
7b-9a4390383594 is locked by host localhost.localdomain, please retry after the current operation is completed. (HTTP 409)
2018-02-15 12:17:33.589 31770 WARNING mistral.actions.openstack.base [req-df7226ad-d563-4dcd-9e81-dc2a9b65b562 d084dc55b42346e6a2b036a088e2be7d 410a77c57a134350ab430e7886ac3256 - default default] Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/mistral/actions/openstack/base.py", line 117, in run
    result = method(**self._kwargs_for_run)
  File "/usr/lib/python2.7/site-packages/ironicclient/v1/node.py", line 513, in set_provision_state
    return self.update(path, body, http_method='PUT')
  File "/usr/lib/python2.7/site-packages/ironicclient/v1/node.py", line 336, in update
    method=http_method)
  File "/usr/lib/python2.7/site-packages/ironicclient/common/base.py", line 191, in _update
    resp, body = self.api.json_request(method, url, body=patch)
  File "/usr/lib/python2.7/site-packages/ironicclient/common/http.py", line 395, in json_request
    resp, body_iter = self._http_request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/ironicclient/common/http.py", line 188, in wrapper
    return func(self, url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/ironicclient/common/http.py", line 376, in _http_request
    error_json.get('debuginfo'), method, url)
Conflict: Node d5f346a7-af70-40b7-9d7b-9a4390383594 is locked by host localhost.localdomain, please retry after the current operation is completed. (HTTP 409)
: Conflict: Node d5f346a7-af70-40b7-9d7b-9a4390383594 is locked by host localhost.localdomain, please retry after the current operation is completed. (HTTP 409)
2018-02-15 12:17:33.589 31770 ERROR mistral.executors.default_executor [req-df7226ad-d563-4dcd-9e81-dc2a9b65b562 d084dc55b42346e6a2b036a088e2be7d 410a77c57a134350ab430e7886ac3256 - default default] Failed to run action [action_ex_id=d8df9bd9-c156-4dfe-a2a2-0e450dc31234, action_cls='<class 'mistral.actions.action_factory.IronicAction'>', attributes='{u'client_method_name': u'node.set_provision_state'}', params='{u'state': u'provide', u'node_uuid': u'd5f346a7-af70-40b7-9d7b-9a4390383594'}']
 IronicAction.node.set_provision_state failed: Node d5f346a7-af70-40b7-9d7b-9a4390383594 is locked by host localhost.localdomain, please retry after the current operation is completed. (HTTP 409): ActionException: IronicAction.node.set_provision_state failed: Node d5f346a7-af70-40b7-9d7b-9a4390383594 is locked by host localhost.localdomain, please retry after the current operation is completed. (HTTP 409)
2018-02-15 12:17:33.589 31770 ERROR mistral.executors.default_executor Traceback (most recent call last):
2018-02-15 12:17:33.589 31770 ERROR mistral.executors.default_executor File "/usr/lib/python2.7/site-packages/mistral/executors/default_executor.py", line 110, in run_action
2018-02-15 12:17:33.589 31770 ERROR mistral.executors.default_executor result = action.run(action_ctx)
2018-02-15 12:17:33.589 31770 ERROR mistral.executors.default_executor File "/usr/lib/python2.7/site-packages/mistral/actions/openstack/base.py", line 130, in run
2018-02-15 12:17:33.589 31770 ERROR mistral.executors.default_executor (self.__class__.__name__, self.client_method_name, str(e))
2018-02-15 12:17:33.589 31770 ERROR mistral.executors.default_executor ActionException: IronicAction.node.set_provision_state failed: Node d5f346a7-af70-40b7-9d7b-9a4390383594 is locked by host localhost.localdomain, please retry after the current operation is completed. (HTTP 409)
2018-02-15 12:17:33.589 31770 ERROR mistral.executors.default_executor
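The "Attempt 6 of 6" above is ironicclient retrying HTTP 409 Conflict responses a fixed number of times before giving up; once the conductor holds the node lock longer than the whole retry window, the Mistral action fails. A rough sketch of that client-side behaviour (the defaults shown are assumptions, not the client's exact code):

# Sketch of retry-on-409: call func(), retry on Conflict, give up after
# max_retries extra attempts, i.e. "Attempt 6 of 6" in the log above.
import time


class Conflict(Exception):
    """Stands in for the client's HTTP 409 exception."""


def call_with_409_retries(func, max_retries=5, retry_interval=2):
    attempts = max_retries + 1
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Conflict:
            if attempt == attempts:
                raise
            time.sleep(retry_interval)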

Tags: alert
Matthias Runge (mrunge)
Changed in tripleo:
status: New → Triaged
milestone: none → queens-rc1
importance: Undecided → High
tags: added: quickstart
Changed in tripleo:
milestone: queens-rc1 → rocky-1
Revision history for this message
Matthias Runge (mrunge) wrote :

In the ironic-conductor log, I found:
2018-03-07 10:12:42.116 2922 DEBUG ironic.common.utils [-] Command stdout is: "Chassis Power is on
" execute /usr/lib/python2.7/site-packages/ironic/common/utils.py:76
2018-03-07 10:12:42.116 2922 DEBUG ironic.common.utils [-] Command stderr is: "" execute /usr/lib/python2.7/site-packages/ironic/common/utils.py:77
2018-03-07 10:12:42.117 2922 ERROR oslo.service.loopingcall [-] Dynamic backoff interval looping call 'ironic.conductor.utils._wait' failed: LoopingCallTimeOut: Looping call timed out after 19.72 seconds
2018-03-07 10:12:42.117 2922 ERROR oslo.service.loopingcall Traceback (most recent call last):
2018-03-07 10:12:42.117 2922 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 141, in _run_loop
2018-03-07 10:12:42.117 2922 ERROR oslo.service.loopingcall idle = idle_for_func(result, watch.elapsed())
2018-03-07 10:12:42.117 2922 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 338, in _idle_for
2018-03-07 10:12:42.117 2922 ERROR oslo.service.loopingcall % self._error_time)
2018-03-07 10:12:42.117 2922 ERROR oslo.service.loopingcall LoopingCallTimeOut: Looping call timed out after 19.72 seconds
2018-03-07 10:12:42.117 2922 ERROR oslo.service.loopingcall
2018-03-07 10:12:42.117 2922 ERROR ironic.conductor.utils [req-6184f919-5874-4f2c-87a8-0d9b5fde2a1f 652b46cbc4544957bf2909991a20b29b f5025b207a33405a96d9f570742c8a11 - default default] Timed out after 30 secs waiting for power power off on node 128d8c6c-928a-4100-8c44-e59ec6cc306e.: LoopingCallTimeOut: Looping call timed out after 19.72 seconds
2018-03-07 10:12:42.149 2922 DEBUG ironic.conductor.task_manager [req-6184f919-5874-4f2c-87a8-0d9b5fde2a1f 652b46cbc4544957bf2909991a20b29b f5025b207a33405a96d9f570742c8a11 - default default] Successfully released exclusive lock for changing node power state on node 128d8c6c-928a-4100-8c44-e59ec6cc306e (lock was held 39.03 sec) release_resources /usr/lib/python2.7/site-packages/ironic/conductor/task_manager.py:352
2018-03-07 10:12:42.273 2922 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): ipmitool -I lanplus -H 10.0.1.12 -L ADMINISTRATOR -U admin -R 12 -N 5 -f /tmp/tmpzfNatr power status execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2018-03-07 10:12:42.393 2922 DEBUG oslo_concurrency.processutils [-] CMD "ipmitool -I lanplus -H 10.0.1.12 -L ADMINISTRATOR -U admin -R 12 -N 5 -f /tmp/tmpzfNatr power status" returned: 0 in 0.120s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2018-03-07 10:12:42.394 2922 DEBUG ironic.common.utils [-] Execution completed, command line is "ipmitool -I lanplus -H 10.0.1.12 -L ADMINISTRATOR -U admin -R 12 -N 5 -f /tmp/tmpzfNatr power status" execute /usr/lib/python2.7/site-packages/ironic/common/utils.py:75
2018-03-07 10:12:42.394 2922 DEBUG ironic.common.utils [-] Command stdout is: "Chassis Power is on
" execute /usr/lib/python2.7/site-packages/ironic/common/utils.py:76
2018-03-07 10:12:42.395 2922 DEBUG ironic.common.utils [-] Command stderr is: "" execute /usr/lib/python2.7/site-pa...

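The "Dynamic backoff interval looping call" failure above comes from the conductor polling the BMC for the target power state under a deadline. A rough sketch of the pattern, assuming oslo.service's BackOffLoopingCall as I understand its API (ironic's real loop lives in ironic/conductor/utils.py):

# Rough sketch of the power-state wait that raises LoopingCallTimeOut.
# The fake BMC below always reports "power on", so start() times out,
# mirroring the conductor log above.
from oslo_service import loopingcall


def get_power_state():
    return 'power on'  # stand-in for the ipmitool "power status" call


def _wait(target_state):
    if get_power_state() == target_state:
        raise loopingcall.LoopingCallDone()  # reached the state: stop
    return False  # not yet: back off and poll again


timer = loopingcall.BackOffLoopingCall(_wait, 'power off')
timer.start(initial_delay=1, timeout=30).wait()  # raises LoopingCallTimeOut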

Revision history for this message
Matthias Runge (mrunge) wrote :

I cannot find any file or directory with the name pysnmp_mibs.

Revision history for this message
Matthias Runge (mrunge) wrote :

[root@localhost ironic]# rpm -qa | grep snmp
net-snmp-agent-libs-5.7.2-28.el7_4.1.x86_64
net-snmp-libs-5.7.2-28.el7_4.1.x86_64
python2-pysnmp-4.3.2-3.el7.noarch
erlang-snmp-19.3.6.4-1.el7.x86_64
puppet-snmp-3.9.1-0.20180107181719.5d73485.el7.centos.noarch

Matthias Runge (mrunge)
summary: - overcloud deployment fails
+ ironic errors out during overcloud-deploy
Revision history for this message
Matthias Runge (mrunge) wrote :

Adding more content here:

2018-03-22 08:32:10.166 26096 WARNING mistral.actions.openstack.base [req-a429f94d-ef74-4cd5-9cdb-98bd1dd9eae9 1e8a5eab880b4bcc93168da6f7799248 7360d58d28c5407785ca8942de0c2303 - default default] Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/mistral/actions/openstack/base.py", line 117, in run
    result = method(**self._kwargs_for_run)
  File "/usr/lib/python2.7/site-packages/ironic_inspector_client/v1.py", line 225, in wait_for_finish
    "of nodes %s") % new_active_uuids)
WaitTimeoutError: Timeout while waiting for introspection of nodes [u'0eac254f-7fa4-45e8-a82e-5d8edbf37f47']
: WaitTimeoutError: Timeout while waiting for introspection of nodes [u'0eac254f-7fa4-45e8-a82e-5d8edbf37f47']
2018-03-22 08:32:10.167 26096 ERROR mistral.executors.default_executor [req-a429f94d-ef74-4cd5-9cdb-98bd1dd9eae9 1e8a5eab880b4bcc93168da6f7799248 7360d58d28c5407785ca8942de0c2303 - default default] Failed to run action [action_ex_id=69f21c2c-c950-4663-a599-34840c39cd17, action_cls='<class 'mistral.actions.action_factory.BaremetalIntrospectionAction'>', attributes='{u'client_method_name': u'wait_for_finish'}', params='{u'max_retries': 120, u'retry_interval': 10, u'uuids': [u'0eac254f-7fa4-45e8-a82e-5d8edbf37f47']}']
 BaremetalIntrospectionAction.wait_for_finish failed: Timeout while waiting for introspection of nodes [u'0eac254f-7fa4-45e8-a82e-5d8edbf37f47']: ActionException: BaremetalIntrospectionAction.wait_for_finish failed: Timeout while waiting for introspection of nodes [u'0eac254f-7fa4-45e8-a82e-5d8edbf37f47']
2018-03-22 08:32:10.167 26096 ERROR mistral.executors.default_executor Traceback (most recent call last):
2018-03-22 08:32:10.167 26096 ERROR mistral.executors.default_executor File "/usr/lib/python2.7/site-packages/mistral/executors/default_executor.py", line 114, in run_action
2018-03-22 08:32:10.167 26096 ERROR mistral.executors.default_executor result = action.run(action_ctx)
2018-03-22 08:32:10.167 26096 ERROR mistral.executors.default_executor File "/usr/lib/python2.7/site-packages/mistral/actions/openstack/base.py", line 130, in run
2018-03-22 08:32:10.167 26096 ERROR mistral.executors.default_executor (self.__class__.__name__, self.client_method_name, str(e))
2018-03-22 08:32:10.167 26096 ERROR mistral.executors.default_executor ActionException: BaremetalIntrospectionAction.wait_for_finish failed: Timeout while waiting for introspection of nodes [u'0eac254f-7fa4-45e8-a82e-5d8edbf37f47']

(Again, this points to baremetal introspection.)
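For what it's worth, the wait budget implied by the action parameters above is easy to check, and it roughly matches the gap between "Waiting for introspection to finish" and the timeout in the description's log:

# wait_for_finish polling budget from the params in the traceback above
max_retries = 120
retry_interval = 10  # seconds
print(max_retries * retry_interval)  # 1200 s, i.e. a 20-minute window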

Revision history for this message
Matthias Runge (mrunge) wrote :

Even though this was reported on a local deployment, it can easily be reproduced using devmode on RDO Cloud.

Changed in tripleo:
milestone: rocky-1 → rocky-2
Changed in tripleo:
milestone: rocky-2 → rocky-3
Changed in tripleo:
assignee: nobody → Quique Llorente (quiquell)
tags: added: alert promotion-blocker
Revision history for this message
Quique Llorente (quiquell) wrote :
Changed in tripleo:
importance: High → Critical
Revision history for this message
Quique Llorente (quiquell) wrote :

https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-master-upload/db23aa3/undercloud/var/log/extra/errors.txt.gz#_2018-06-28_00_08_10_998

Driver, hardware type or interface pxe_ipmitool could not be loaded. Reason: [Errno 13] Permission denied: '/var/lib/ironic/httpboot/boot.ipxe'.
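A quick way to confirm whether that file really is unreadable (a hypothetical check; adjust the path and run it as the user the conductor runs as):

# Hypothetical check for the Errno 13 above: print mode/ownership and
# whether the current user can read the iPXE boot script.
# Note: SELinux labels can also cause EACCES; `ls -Z` shows them.
import os
import stat

path = '/var/lib/ironic/httpboot/boot.ipxe'
st = os.stat(path)
print('mode=%s uid=%d gid=%d' % (oct(stat.S_IMODE(st.st_mode)),
                                 st.st_uid, st.st_gid))
print('readable:', os.access(path, os.R_OK))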

Revision history for this message
Quique Llorente (quiquell) wrote :

A circular dependency is also detected there:

https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-master-upload/db23aa3/undercloud/var/log/extra/errors.txt.gz#_2018-06-28_01_08_56_322
ERROR /var/log/containers/keystone/keystone.log: 359 ERROR keystone.assignment.core [req-2fe8db01-3884-4ba5-be27-b764db12feb6 - - - - -] Circular reference found role inference rules - 6289d150aae04a4a83cd0b8d79860146.

Revision history for this message
Bogdan Dobrelya (bogdando) wrote :

That Errno 13 should be a red herring; see the elastic-recheck stats for 'hardware type or interface pxe_ipmitool could not be loaded':

https://pastebin.com/9q3pr8yP

Revision history for this message
Dougal Matthews (d0ugal) wrote :

I opened a related bug (and submitted a patch) for the unhandled error loop. https://bugs.launchpad.net/tripleo/+bug/1779097

Revision history for this message
wes hayutin (weshayutin) wrote :

I need more information about the deployment than what I am seeing in this bug.
fs002 is working now that the correct config is passed:

https://review.rdoproject.org/jenkins/job/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-master-upload/339/

Matthias, can you paste in how you are invoking tq (tripleo-quickstart), please?

tags: removed: promotion-blocker quickstart
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to tripleo-quickstart (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/579051

Revision history for this message
Matthias Runge (mrunge) wrote :

Wes, for my deployments, I'm using this tiny script:

# load cloud credentials
source ~/tripleo_v2_openrc.sh
# start from a clean working directory
rm -rf /var/tmp/tripleo_local
# run tripleo-quickstart devmode with an OVB deployment
bash devmode.sh --no-gate --ovb -d -w /var/tmp/tripleo_local

I suspect this is a timing issue somewhere, but I couldn't reproduce it the last time I tried.

Revision history for this message
Matthias Runge (mrunge) wrote :

I'm not sure whether I'm seeing a reincarnation of this bug here:

https://bugs.launchpad.net/tripleo/+bug/1779295
(VM states get out of sync: the VM is off, but the undercloud thinks it's in the building state.)

Revision history for this message
Bob Fournier (bfournie) wrote :

Regarding the pysnmp_mibs warnings (comment #2): they are harmless and are fixed by this patch, which recently landed: https://review.openstack.org/#/c/570400/.

Revision history for this message
Bob Fournier (bfournie) wrote :

There doesn't seem to be an introspection issue identified here, except that the node is taking an excessive amount of time to power off:

2018-03-07 10:12:42.396 2922 ERROR ironic.conductor.utils [req-c0bfe2be-b0ee-4002-86c9-eb0593e0d5a2 652b46cbc4544957bf2909991a20b29b f5025b207a33405a96d9f570742c8a11 - default default] Timed out after 30 secs waiting for power power off on node a8389af3-5259-4ecf-9487-8dd8723280c2

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to tripleo-quickstart (master)

Reviewed: https://review.openstack.org/579051
Committed: https://git.openstack.org/cgit/openstack/tripleo-quickstart/commit/?id=fb019907124240a2dbbd297567b54412ec6463c8
Submitter: Zuul
Branch: master

commit fb019907124240a2dbbd297567b54412ec6463c8
Author: yatin <email address hidden>
Date: Fri Jun 29 09:44:24 2018 +0530

    Switch to containerized undercloud net config

    fs020 is switched to containerized undercloud in [1],
    it's failing since then. Switch to undercloud net config
    to fix this. fs002 is fixed in [2].

    Also change it in other containerized undercloud featuresets.

    [1] https://review.openstack.org/#/c/572215/
    [2] https://review.openstack.org/#/c/576987/

    Change-Id: I992a49ef025065f4dc02c650105832e85d9eb8b8
    Related-Bug: #1749707

wes hayutin (weshayutin)
Changed in tripleo:
status: Triaged → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to tripleo-quickstart (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/594111

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to tripleo-quickstart (master)

Reviewed: https://review.openstack.org/594111
Committed: https://git.openstack.org/cgit/openstack/tripleo-quickstart/commit/?id=7cc62968410f5b55d7a95707c71a641c72036a08
Submitter: Zuul
Branch: master

commit 7cc62968410f5b55d7a95707c71a641c72036a08
Author: yatin <email address hidden>
Date: Tue Aug 21 15:12:35 2018 +0530

    [FS021] Switch to containerized undercloud net config

    fs021 is switched to containerized undercloud in [1],
    only fs021 is the ovb job that has missing this net
    config, this patch fixes it.

    [1] https://review.openstack.org/#/c/583202

    Related-Bug: #1749707
    Change-Id: I773041057bd5bc19fc280ef7ccb445bbccfa30a3

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Reviewed: https://review.openstack.org/580119
Committed: https://git.openstack.org/cgit/openstack/tripleo-quickstart/commit/?id=d64a2585daea72c248e1c07f5c76b4c8c51e2799
Submitter: Zuul
Branch: master

commit d64a2585daea72c248e1c07f5c76b4c8c51e2799
Author: Carlos Goncalves <email address hidden>
Date: Wed Jul 4 11:46:37 2018 +0200

    FS38: switch to containerized UC and config-download

    This patch switches FS038 to containerized undercloud and converts the
    job that runs against master changes to use config-download.
    Additionally, switch to containerized undercloud net config.

    Related-Bug: #1749707

    Change-Id: Iff3b8b66deb901993f9e8fe0f2a896f1da2749f5

Changed in ironic:
status: New → Invalid