Managing a subcloud causes it to go offline temporarily

Bug #1860372 reported by Bart Wensley
Affects: StarlingX
Status: Fix Released
Importance: Medium
Assigned to: Bart Wensley

Bug Description

Brief Description
-----------------
When a subcloud is managed for the first time, its availability temporarily changes to offline after two consecutive failed audits (roughly 3-6 minutes after managing). The subcloud eventually goes back online; this can take anywhere from about 3 minutes up to about an hour. Once the subcloud is back online, subsequent unmanage/manage operations do not exhibit the same issue.
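The offline transition after two consecutive failed audits can be modelled with a minimal sketch. This is illustrative only: the class name, attribute names, and the threshold constant below are assumptions for this sketch, not the actual dcmanager implementation.

```python
# Illustrative model of the audit behaviour described above: a subcloud
# is declared offline only after consecutive failed audits, and any
# successful audit brings it back online. Names and the threshold value
# are assumptions, not real dcmanager code.

AVAIL_FAIL_THRESHOLD = 2  # two failed audits -> offline (per this report)


class SubcloudAuditTracker:
    def __init__(self):
        self.availability = "online"
        self.consecutive_failures = 0

    def record_audit(self, succeeded: bool) -> str:
        if succeeded:
            # A successful audit resets the counter and restores online.
            self.consecutive_failures = 0
            self.availability = "online"
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= AVAIL_FAIL_THRESHOLD:
                self.availability = "offline"
        return self.availability


tracker = SubcloudAuditTracker()
print(tracker.record_audit(False))  # online (one failure tolerated)
print(tracker.record_audit(False))  # offline (second consecutive failure)
print(tracker.record_audit(True))   # online (recovered, as in the logs)
```

This matches the log timeline below: the 19:29 and 19:32 audits fail, the subcloud is marked offline at 19:32:15, and the successful 19:35 audit marks it online again.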

Severity
--------
Major: System/Feature is usable but degraded

Steps to Reproduce
------------------
Install a subcloud and set it to managed.

Expected Behavior
------------------
The availability of a subcloud should never change to offline as a result of managing the subcloud.

Actual Behavior
----------------
The availability of the subcloud changes to offline.

Reproducibility
---------------
Reproducible

System Configuration
--------------------
Distributed Cloud deployment with an AIO-DX System Controller and AIO-SX subclouds. It will probably be seen in any Distributed Cloud deployment.

Branch/Pull Time/Commit
-----------------------
Designer load created from master - pull done on January 16, 2020.

Last Pass
---------
Unknown

Timestamp/Logs
--------------

System Controller logs:

2020-01-20 19:26:51.907 108585 INFO dcmanager.manager.service [req-b1699cc5-0a16-4f37-9b6f-ad6b81b46670 050b05066e12420db0da86f162642a7d - - default default] Handling update_subcloud request for: 1
2020-01-20 19:26:51.908 108585 INFO dcmanager.manager.subcloud_manager [req-b1699cc5-0a16-4f37-9b6f-ad6b81b46670 050b05066e12420db0da86f162642a7d - - default default] Updating subcloud 1.
2020-01-20 19:27:01.292 108585 INFO dcmanager.manager.service [req-17806649-c363-460d-be0b-780f06e7f000 - - - - -] Handling update_subcloud_endpoint_status request for: subcloud1
2020-01-20 19:27:01.340 108585 INFO dcmanager.manager.subcloud_manager [req-17806649-c363-460d-be0b-780f06e7f000 - - - - -] Updating subcloud:subcloud1 endpoint:platform sync:in-sync
2020-01-20 19:27:01.759 108585 INFO dcmanager.manager.subcloud_manager [req-b1699cc5-0a16-4f37-9b6f-ad6b81b46670 050b05066e12420db0da86f162642a7d - - default default] Notifying dcorch, subcloud:subcloud1 management: managed, availability:online
2020-01-20 19:27:01.766 112923 INFO eventlet.wsgi.server [req-b1699cc5-0a16-4f37-9b6f-ad6b81b46670 050b05066e12420db0da86f162642a7d - - default default] 192.168.204.3 "PATCH /v1.0/subclouds/subcloud1 HTTP/1.1" status: 200 len: 730 time: 10.4605880
2020-01-20 19:27:01.777 108585 INFO dcmanager.manager.service [req-ace0542e-e8c5-4091-bc78-6b7289b534cd - - - - -] Handling update_subcloud_endpoint_status request for: subcloud1
2020-01-20 19:27:01.860 108585 INFO dcmanager.manager.subcloud_manager [req-ace0542e-e8c5-4091-bc78-6b7289b534cd - - - - -] Updating subcloud:subcloud1 endpoint:identity sync:in-sync
2020-01-20 19:27:02.627 108585 INFO dcmanager.manager.patch_audit_manager [-] Triggered patch audit.
2020-01-20 19:27:03.529 108585 INFO dcmanager.manager.service [req-ace0542e-e8c5-4091-bc78-6b7289b534cd - - - - -] Handling update_subcloud_endpoint_status request for: subcloud1
2020-01-20 19:27:03.584 108585 INFO dcmanager.manager.subcloud_manager [req-ace0542e-e8c5-4091-bc78-6b7289b534cd - - - - -] Updating subcloud:subcloud1 endpoint:identity sync:out-of-sync
2020-01-20 19:27:03.753 108585 INFO dcmanager.manager.service [req-ace0542e-e8c5-4091-bc78-6b7289b534cd - - - - -] Handling update_subcloud_endpoint_status request for: subcloud1
2020-01-20 19:27:04.081 108585 INFO dcmanager.manager.service [req-ace0542e-e8c5-4091-bc78-6b7289b534cd - - - - -] Handling update_subcloud_endpoint_status request for: subcloud1
2020-01-20 19:27:04.175 108585 INFO dcmanager.manager.service [req-ace0542e-e8c5-4091-bc78-6b7289b534cd - - - - -] Handling update_subcloud_endpoint_status request for: subcloud1
2020-01-20 19:27:04.214 108585 INFO dcmanager.manager.subcloud_manager [req-ace0542e-e8c5-4091-bc78-6b7289b534cd - - - - -] Updating subcloud:subcloud1 endpoint:identity sync:in-sync
2020-01-20 19:27:05.735 108585 ERROR dcmanager.drivers.openstack.patching_v1 [-] query failed with RC: 401
2020-01-20 19:27:05.735 108585 WARNING dcmanager.manager.patch_audit_manager [-] Cannot retrieve patches for subcloud: subcloud1: Exception: query failed with RC: 401
2020-01-20 19:29:11.769 108585 INFO dcmanager.manager.subcloud_audit_manager [-] Triggered subcloud audit.
2020-01-20 19:29:12.530 108585 INFO dcmanager.manager.subcloud_audit_manager [-] Waiting for subcloud audits to complete.
2020-01-20 19:29:14.453 108585 WARNING cgtsclient.common.http [-] Request returned failure status.
2020-01-20 19:29:14.453 108585 WARNING dcmanager.manager.subcloud_audit_manager [-] Cannot retrieve service groups for subcloud: subcloud1, None: HTTPServiceUnavailable: None
2020-01-20 19:29:15.324 108585 INFO dcmanager.manager.subcloud_audit_manager [-] All subcloud audits have completed.
2020-01-20 19:32:02.677 108585 INFO dcmanager.manager.patch_audit_manager [-] Triggered patch audit.
2020-01-20 19:32:05.521 108585 INFO dcmanager.manager.subcloud_manager [-] Updating subcloud:subcloud1 endpoint:patching sync:in-sync
2020-01-20 19:32:11.766 108585 INFO dcmanager.manager.subcloud_audit_manager [-] Triggered subcloud audit.
2020-01-20 19:32:12.520 108585 INFO dcmanager.manager.subcloud_audit_manager [-] Waiting for subcloud audits to complete.
2020-01-20 19:32:15.495 108585 WARNING cgtsclient.common.http [-] Request returned failure status.
2020-01-20 19:32:15.496 108585 WARNING dcmanager.manager.subcloud_audit_manager [-] Cannot retrieve service groups for subcloud: subcloud1, Bad response from SM API: HTTPBadRequest: Bad response from SM API
2020-01-20 19:32:15.497 108585 INFO dcmanager.manager.subcloud_audit_manager [-] Setting new availability status: offline on subcloud: subcloud1
2020-01-20 19:32:15.633 108585 INFO dcmanager.manager.subcloud_audit_manager [-] Notifying dcorch, subcloud:subcloud1 management: managed, availability:offline
2020-01-20 19:32:15.662 108585 INFO dcmanager.manager.subcloud_manager [-] Updating all subclouds, endpoint: None sync: unknown
2020-01-20 19:32:16.053 108585 INFO dcmanager.manager.subcloud_audit_manager [-] All subcloud audits have completed.
2020-01-20 19:35:11.765 108585 INFO dcmanager.manager.subcloud_audit_manager [-] Triggered subcloud audit.
2020-01-20 19:35:12.630 108585 INFO dcmanager.manager.subcloud_audit_manager [-] Waiting for subcloud audits to complete.
2020-01-20 19:35:15.261 108585 INFO dcmanager.manager.subcloud_audit_manager [-] Setting new availability status: online on subcloud: subcloud1
2020-01-20 19:35:29.399 108585 INFO dcmanager.manager.service [req-17806649-c363-460d-be0b-780f06e7f000 - - - - -] Handling update_subcloud_endpoint_status request for: subcloud1
2020-01-20 19:35:29.569 108585 INFO dcmanager.manager.subcloud_manager [req-17806649-c363-460d-be0b-780f06e7f000 - - - - -] Updating subcloud:subcloud1 endpoint:platform sync:in-sync
2020-01-20 19:35:30.024 108585 INFO dcmanager.manager.subcloud_audit_manager [-] Notifying dcorch, subcloud:subcloud1 management: managed, availability:online
2020-01-20 19:35:30.028 108585 INFO dcmanager.manager.subcloud_audit_manager [-] All subcloud audits have completed.

Subcloud logs:

sysinv 2020-01-20 19:26:16.240 99932 INFO sysinv.api.controllers.v1.rest_api [-] Response={u'sm_sda': [{u'status': u'', u'name': u'controller', u'service_group_name': u'distributed-cloud-services', u'uuid': u'b2859a70-1429-4b59-b83f-c166f89478e7', u'node_name': u'controller-0', u'state': u'active', u'id': 1, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'storage-monitoring-services', u'uuid': u'49764521-9587-4c9e-b7ac-3a80fe8f7355', u'node_name': u'controller-0', u'state': u'active', u'id': 2, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'storage-services', u'uuid': u'17daaeae-d511-4e5a-8bd6-84731975398d', u'node_name': u'controller-0', u'state': u'active', u'id': 3, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'web-services', u'uuid': u'828f1d87-2c32-452b-a337-6962a0f36516', u'node_name': u'controller-0', u'state': u'active', u'id': 4, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'directory-services', u'uuid': u'0409fb5d-e309-4f0a-a9e4-20453a7c0e01', u'node_name': u'controller-0', u'state': u'active', u'id': 5, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'patching-services', u'uuid': u'2a32affc-f7df-4adb-8976-dc0a95cf1b24', u'node_name': u'controller-0', u'state': u'active', u'id': 6, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'vim-services', u'uuid': u'29bd5722-67dc-4bac-a062-2d4cadd77192', u'node_name': u'controller-0', u'state': u'active', u'id': 7, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'cloud-services', u'uuid': u'948a547b-6b9c-4ac4-80fb-9460e8330e7b', u'node_name': 
u'controller-0', u'state': u'active', u'id': 8, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'controller-services', u'uuid': u'276d37be-6205-443b-8aa7-bd22d9c3e9ef', u'node_name': u'controller-0', u'state': u'active', u'id': 9, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'oam-services', u'uuid': u'77858045-8234-463b-98e5-c5cb24e851c4', u'node_name': u'controller-0', u'state': u'active', u'id': 10, u'condition': u'', u'desired_state': u'active'}]}
sysinv 2020-01-20 19:26:59.911 99932 WARNING keystonemiddleware.auth_token [-] Authorization failed for token: InvalidToken: Token authorization failed
sysinv 2020-01-20 19:27:00.152 99932 WARNING keystonemiddleware.auth_token [-] Identity response: {"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}
: Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-8de396ec-3e05-4ec6-9d24-951c71510626)
sysinv 2020-01-20 19:27:00.214 99932 WARNING keystonemiddleware.auth_token [-] Identity response: {"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}
: Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-cf67219b-d013-4bea-aaa6-71bc53038506)
sysinv 2020-01-20 19:27:00.215 99932 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data: ServiceError: Identity server rejected authorization necessary to fetch token data
sysinv 2020-01-20 19:27:00.308 99932 WARNING keystonemiddleware.auth_token [-] Identity response: {"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}
: Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-34877d40-dccb-49db-b01b-1c6a916f3860)
sysinv 2020-01-20 19:27:00.356 99932 WARNING keystonemiddleware.auth_token [-] Identity response: {"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}
: Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-f5872f28-047e-475d-a69c-a571f3a5874b)
sysinv 2020-01-20 19:27:00.357 99932 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data: ServiceError: Identity server rejected authorization necessary to fetch token data
sysinv 2020-01-20 19:27:00.432 99932 WARNING keystonemiddleware.auth_token [-] Identity response: {"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}
: Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-38dc3b96-b822-4109-8b5c-77b012de014c)
sysinv 2020-01-20 19:27:00.465 99932 WARNING keystonemiddleware.auth_token [-] Identity response: {"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}
: Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-cd45c7ad-da1d-4b42-9ffc-271008a24b85)
sysinv 2020-01-20 19:27:00.465 99932 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data: ServiceError: Identity server rejected authorization necessary to fetch token data
sysinv 2020-01-20 19:27:01.325 99932 WARNING keystonemiddleware.auth_token [-] Identity response: {"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}
: Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-4d15139b-e32e-48eb-8fe2-b858bdf91a02)
sysinv 2020-01-20 19:27:01.363 99932 WARNING keystonemiddleware.auth_token [-] Identity response: {"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}
: Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-c1285f23-ca3e-4b19-80db-645f8e8fd669)
sysinv 2020-01-20 19:27:01.364 99932 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data: ServiceError: Identity server rejected authorization necessary to fetch token data
sysinv 2020-01-20 19:27:03.429 99932 WARNING keystonemiddleware.auth_token [-] Identity response: {"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}
: Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-81fbb4c7-9c2e-4ed2-8754-4e664f95dc7a)
sysinv 2020-01-20 19:27:03.491 99932 WARNING keystonemiddleware.auth_token [-] Identity response: {"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}
: Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-f971499e-c7fd-4889-8d00-9458899a34db)
sysinv 2020-01-20 19:27:03.492 99932 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data: ServiceError: Identity server rejected authorization necessary to fetch token data
sysinv 2020-01-20 19:29:15.026 99932 WARNING keystonemiddleware.auth_token [-] Identity response: {"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}
: Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-9edf4a83-08ca-48f2-a9ad-073e5a91469d)
sysinv 2020-01-20 19:29:15.067 99932 WARNING keystonemiddleware.auth_token [-] Identity response: {"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}
: Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-11cd01a2-2128-4f1d-8776-afbd73d63a2a)
sysinv 2020-01-20 19:29:15.067 99932 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data: ServiceError: Identity server rejected authorization necessary to fetch token data
sysinv 2020-01-20 19:31:11.100 99932 WARNING keystonemiddleware.auth_token [-] Authorization failed for token: InvalidToken: Token authorization failed
sysinv 2020-01-20 19:31:11.631 99932 WARNING keystonemiddleware.auth_token [-] Authorization failed for token: InvalidToken: Token authorization failed
sysinv 2020-01-20 19:31:59.933 99932 INFO sysinv.api.controllers.v1.host [-] controller-0 ihost_patch_start_2020-01-20-19-31-59 patch
sysinv 2020-01-20 19:31:59.933 99932 INFO sysinv.api.controllers.v1.host [-] controller-0 ihost_patch_end. No changes from mtce/1.0.
sysinv 2020-01-20 19:32:16.083 99932 INFO sysinv.api.controllers.v1.rest_api [-] GET cmd:http://controller-0:7777/v1/sm_sda hdr:{'Content-type': 'application/json', 'Accept': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:None
sysinv 2020-01-20 19:32:16.109 99932 WARNING sysinv.api.controllers.v1.rest_api [-] HTTP Error e.code=401 e=HTTP Error 401: Unauthorized: HTTPError: HTTP Error 401: Unauthorized
sysinv 2020-01-20 19:32:16.110 99932 ERROR sysinv.api.controllers.v1.servicegroup [-] Bad response from SM API
sysinv 2020-01-20 19:32:16.111 99932 WARNING wsme.api [-] Client-side error: Bad response from SM API: ClientSideError: Bad response from SM API
sysinv 2020-01-20 19:35:14.989 99932 INFO sysinv.api.controllers.v1.rest_api [-] GET cmd:http://controller-0:7777/v1/sm_sda hdr:{'Content-type': 'application/json', 'Accept': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:None
sysinv 2020-01-20 19:35:15.875 99932 INFO sysinv.api.controllers.v1.rest_api [-] Response={u'sm_sda': [{u'status': u'', u'name': u'controller', u'service_group_name': u'distributed-cloud-services', u'uuid': u'b2859a70-1429-4b59-b83f-c166f89478e7', u'node_name': u'controller-0', u'state': u'active', u'id': 1, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'storage-monitoring-services', u'uuid': u'49764521-9587-4c9e-b7ac-3a80fe8f7355', u'node_name': u'controller-0', u'state': u'active', u'id': 2, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'storage-services', u'uuid': u'17daaeae-d511-4e5a-8bd6-84731975398d', u'node_name': u'controller-0', u'state': u'active', u'id': 3, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'web-services', u'uuid': u'828f1d87-2c32-452b-a337-6962a0f36516', u'node_name': u'controller-0', u'state': u'active', u'id': 4, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'directory-services', u'uuid': u'0409fb5d-e309-4f0a-a9e4-20453a7c0e01', u'node_name': u'controller-0', u'state': u'active', u'id': 5, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'patching-services', u'uuid': u'2a32affc-f7df-4adb-8976-dc0a95cf1b24', u'node_name': u'controller-0', u'state': u'active', u'id': 6, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'vim-services', u'uuid': u'29bd5722-67dc-4bac-a062-2d4cadd77192', u'node_name': u'controller-0', u'state': u'active', u'id': 7, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'cloud-services', u'uuid': u'948a547b-6b9c-4ac4-80fb-9460e8330e7b', u'node_name': 
u'controller-0', u'state': u'active', u'id': 8, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'controller-services', u'uuid': u'276d37be-6205-443b-8aa7-bd22d9c3e9ef', u'node_name': u'controller-0', u'state': u'active', u'id': 9, u'condition': u'', u'desired_state': u'active'}, {u'status': u'', u'name': u'controller', u'service_group_name': u'oam-services', u'uuid': u'77858045-8234-463b-98e5-c5cb24e851c4', u'node_name': u'controller-0', u'state': u'active', u'id': 10, u'condition': u'', u'desired_state': u'active'}]}

Test Activity
-------------
Developer Testing

Workaround
----------
Wait for the subcloud to come back online on its own.

Revision history for this message
Bart Wensley (bartwensley) wrote :

I discussed this bug with Andy Ning and he speculated that this issue is related to the use of memcached on the subclouds. I did a single test where I disabled memcached on a subcloud before managing it for the first time and the subcloud did not go offline when it was managed.
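The memcached theory is consistent with a token-validation cache serving stale results: once the fernet keys change underneath keystone, a cached validation outcome can keep being returned until the entry ages out or new tokens are issued. Below is a toy model of that failure mode; the class, the TTL value, and the "token in keys" validation rule are all invented for illustration and bear no resemblance to real keystone internals.

```python
import time

# Toy model of a stale validation cache. A real keystone/memcached setup
# is far more involved; all names and the TTL here are invented for this
# sketch.


class TokenValidationCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._entries = {}  # token -> (result, insert_time)

    def validate(self, token, keys, now=None):
        now = time.monotonic() if now is None else now
        cached = self._entries.get(token)
        if cached is not None and now - cached[1] < self.ttl:
            return cached[0]  # stale entry served until it ages out
        result = token in keys  # "real" validation against current keys
        self._entries[token] = (result, now)
        return result


cache = TokenValidationCache(ttl_seconds=300)
keys = {"old-key"}
# A token minted under the new key is checked before the key sync lands,
# so the failure gets cached:
print(cache.validate("new-key", keys, now=0.0))    # False -> cached
keys = {"old-key", "new-key"}                      # fernet keys synced
print(cache.validate("new-key", keys, now=100.0))  # still False (stale cache)
print(cache.validate("new-key", keys, now=400.0))  # True after TTL expiry
```

The window during which the stale entry is served would explain both the 401s in the subcloud sysinv logs and the variable recovery time (the subcloud comes back once entries expire or new tokens are created).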

Revision history for this message
Ghada Khalil (gkhalil) wrote :

stx.4.0 / medium priority - distcloud issue, but workaround is to wait

tags: added: stx.4.0 stx.distcloud
Changed in starlingx:
importance: Undecided → Medium
status: New → Triaged
assignee: nobody → Tao Liu (tliu88)
Dariush Eslimi (deslimi)
Changed in starlingx:
assignee: Tao Liu (tliu88) → Ovidiu Poncea (ovidiu.poncea)
Frank Miller (sensfan22)
Changed in starlingx:
assignee: Ovidiu Poncea (ovidiu.poncea) → Bart Wensley (bartwensley)
Changed in starlingx:
status: Triaged → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to config (master)

Fix proposed to branch: master
Review: https://review.opendev.org/708729

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to config (master)

Reviewed: https://review.opendev.org/708729
Committed: https://git.openstack.org/cgit/starlingx/config/commit/?id=f6eebbd318f3c596c7d408696ce1558fd03a5497
Submitter: Zuul
Branch: master

commit f6eebbd318f3c596c7d408696ce1558fd03a5497
Author: Bart Wensley <email address hidden>
Date: Wed Feb 19 12:56:21 2020 -0600

    Disable keystone caching on subclouds

    The use of keystone caching on subclouds causes problems because
    the syncing of fernet keys to the subcloud results in stale
    cache entries. This causes authentication failures until the
    cache entries age out or new tokens are created.

    Since the keystone load in a subcloud is light, there is really
    no need for caching at this time - it is being disabled in
    subclouds.

    Change-Id: I777c57c46cf1bcd701fbbac73228a2cb81d8424b
    Closes-Bug: 1860372
    Signed-off-by: Bart Wensley <email address hidden>
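In keystone.conf terms, the fix amounts to turning off keystone's oslo.cache layer on subclouds, along the lines of the fragment below. This is a sketch of the effective configuration; the exact mechanism used in the puppet manifests is not shown in this report.

```ini
# keystone.conf on a subcloud - disable the oslo.cache (memcached) layer
# so token validation never serves entries cached under stale fernet keys.
[cache]
enabled = false
```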

Changed in starlingx:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to config (f/centos8)

Fix proposed to branch: f/centos8
Review: https://review.opendev.org/716137

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to config (f/centos8)

Reviewed: https://review.opendev.org/716137
Committed: https://git.openstack.org/cgit/starlingx/config/commit/?id=cb4cf4299c2ec10fb2eb03cdee3f6d78a6413089
Submitter: Zuul
Branch: f/centos8

commit 16477935845e1c27b4c9d31743e359b0aa94a948
Author: Steven Webster <email address hidden>
Date: Sat Mar 28 17:19:30 2020 -0400

    Fix SR-IOV runtime manifest apply

    When an SR-IOV interface is configured, the platform's
    network runtime manifest is applied in order to apply the virtual
    function (VF) config and restart the interface. This results in
    sysinv being able to determine and populate the puppet hieradata
    with the virtual function PCI addresses.

    A side effect of the network manifest apply is that potentially
    all platform interfaces may be brought down/up if it is determined
    that their configuration has changed. This will likely be the case
    for a system which configures SR-IOV interfaces before initial
    unlock.

    A few issues have been encountered because of this, with some
    services not behaving well when the interface they are communicating
    over suddenly goes down.

    This commit makes the SR-IOV VF configuration much more targeted
    so that only the operation of setting the desired number of VFs
    is performed.

    Closes-Bug: #1868584
    Depends-On: https://review.opendev.org/715669
    Change-Id: Ie162380d3732eb1b6e9c553362fe68cbc313ae2b
    Signed-off-by: Steven Webster <email address hidden>

commit 45c9fe2d3571574b9e0503af108fe7c1567007db
Author: Zhipeng Liu <email address hidden>
Date: Thu Mar 26 01:58:34 2020 +0800

    Add ipv6 support for novncproxy_base_url.

    For ipv6 address, we need url with below format
    [ip]:port

    Partial-Bug: 1859641

    Change-Id: I01a5cd92deb9e88c2d31bd1e16e5bce1e849fcc7
    Signed-off-by: Zhipeng Liu <email address hidden>

commit d119336b3a3b24d924e000277a37ab0b5f93aae1
Author: Andy Ning <email address hidden>
Date: Mon Mar 23 16:26:21 2020 -0400

    Fix timeout waiting for CA cert install during ansible replay

    During ansible bootstrap replay, the ssl_ca_complete_flag file is
    removed. It expects puppet platform::config::runtime manifest apply
    during system CA certificate install to re-generate it. So this commit
    updated conductor manager to run that puppet manifest even if the CA cert
    has already installed so that the ssl_ca_complete_flag file is created
    and makes ansible replay to continue.

    Change-Id: Ic9051fba9afe5d5a189e2be8c8c2960bdb0d20a4
    Closes-Bug: 1868585
    Signed-off-by: Andy Ning <email address hidden>

commit 24a533d800b2c57b84f1086593fe5f04f95fe906
Author: Zhipeng Liu <email address hidden>
Date: Fri Mar 20 23:10:31 2020 +0800

    Fix rabbitmq could not bind port to ipv6 address issue

    When we use Armada to deploy openstack service for ipv6, rabbitmq
    pod could not start listen on [::]:5672 and [::]:15672.
    For ipv6, we need an override for configuration file.

    Upstream patch link is:
    https://review.opendev.org/#/c/714027/

    Test pass for deploying rabbitmq service on both ipv...

tags: added: in-f-centos8