simplex subsequent unlock failed (traceback KubeAppApplyFailure)

Bug #1840351 reported by Wendy Mitchell
This bug affects 2 people
Affects: StarlingX
Status: Invalid
Importance: Medium
Assigned to: Daniel Badea

Bug Description

Brief Description
-----------------
After a clean install of a simplex system (no HT enabled), an attempt to unlock the controller after assigning the remote-storage=enabled label to the running system fails

Severity
--------
Major

Steps to Reproduce
------------------
1. clean install of AIO-SX system
controller-0 is running (unlocked) and stx-openstack has been applied successfully

$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
[sysadmin@controller-0 ~(keystone_admin)]$ date
Thu Aug 15 17:47:37 UTC 2019
[sysadmin@controller-0 ~(keystone_admin)]$ system application-list
+---------------------+------------------------------+-------------------------------+----------------+---------+-----------+
| application | version | manifest name | manifest file | status | progress |
+---------------------+------------------------------+-------------------------------+----------------+---------+-----------+
| platform-integ-apps | 1.0-7 | platform-integration-manifest | manifest.yaml | applied | completed |
| stx-openstack | 1.0-17-centos-stable- | armada-manifest | stx-openstack. | applied | completed |
| | versioned | | yaml | | |
| | | | | | |
+---------------------+------------------------------+-------------------------------+----------------+---------+-----------+
[sysadmin@controller-0 ~(keystone_admin)]$ date
Thu Aug 15 17:47:46 UTC 2019

2. add the remote-storage label after locking the controller
$ system host-lock controller-0

$ system host-label-assign controller-0 remote-storage=enabled
+-------------+--------------------------------------+
| Property | Value |
+-------------+--------------------------------------+
| uuid | 93648095-2057-4022-a902-da0c64cb9255 |
| host_uuid | 49b05709-aa5f-4505-b179-b555e9da5f7a |
| label_key | remote-storage |
| label_value | enabled |
+-------------+--------------------------------------+
[sysadmin@controller-0 ~(keystone_admin)]$ date
Thu Aug 15 17:50:48 UTC 2019
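
Before issuing the unlock, the label assignment can be double-checked; a minimal verification, assuming the standard sysinv label commands are available on the controller:

$ system host-label-list controller-0
# expected to show remote-storage=enabled for controller-0; the label can be
# removed again with: system host-label-remove controller-0 remote-storage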

3. Attempt to unlock the controller

$ system host-unlock controller-0
+---------------------+--------------------------------------------+
| Property | Value |
+---------------------+--------------------------------------------+
| action | none |
| administrative | locked |
| availability | online |
| bm_ip | None |
| bm_type | None |
| bm_username | None |
| boot_device | /dev/disk/by-path/pci-0000:00:1f.2-ata-1.0 |
| capabilities | {u'stor_function': u'monitor'} |
| config_applied | 32b6cc3a-d900-4eb5-b062-87004254d12b |
| config_status | None |
| config_target | 32b6cc3a-d900-4eb5-b062-87004254d12b |
| console | ttyS0,115200n8 |
| created_at | 2019-08-15T16:08:13.027513+00:00 |
| hostname | controller-0 |
| id | 1 |
| install_output | text |
| install_state | None |
| install_state_info | None |
| inv_state | inventoried |
| invprovision | provisioned |
| location | {} |
| mgmt_ip | 192.168.204.3 |
| mgmt_mac | 00:00:00:00:00:00 |
| operational | disabled |
| personality | controller |
| reserved | False |
| rootfs_device | /dev/disk/by-path/pci-0000:00:1f.2-ata-1.0 |
| serialid | None |
| software_load | 19.01 |
| subfunction_avail | online |
| subfunction_oper | disabled |
| subfunctions | controller,worker |
| task | Unlocking |
| tboot | false |
| ttys_dcd | None |
| updated_at | 2019-08-15T17:50:55.723407+00:00 |
| uptime | 4903 |
| uuid | 49b05709-aa5f-4505-b179-b555e9da5f7a |
| vim_progress_status | services-disabled |
+---------------------+--------------------------------------------+
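
Note from the sysinv log excerpt further below that on AIO-SX the unlock itself triggers a reapply of stx-openstack; its progress can be followed while the host is still up, assuming the standard application commands:

$ system application-list
$ system application-show stx-openstack
# status is expected to move through "applying" and, in this case, ends up as apply-failed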

Expected Behavior
------------------
Expect successful unlock of the controller after adding the remote-storage label

Actual Behavior
----------------

$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
[sysadmin@controller-0 ~(keystone_admin)]$ date
Thu Aug 15 17:51:56 UTC 2019

$ system application-list
+---------------------+------------------------------+-------------------------------+----------------+--------------+-------------------------------+
| application | version | manifest name | manifest file | status | progress |
+---------------------+------------------------------+-------------------------------+----------------+--------------+-------------------------------+
| platform-integ-apps | 1.0-7 | platform-integration-manifest | manifest.yaml | applied | completed |
| stx-openstack | 1.0-17-centos-stable- | armada-manifest | stx-openstack. | apply-failed | operation aborted, check logs |
| | versioned | | yaml | | for detail |
| | | | | | |
+---------------------+------------------------------+-------------------------------+----------------+--------------+-------------------------------+

[sysadmin@controller-0 ~(keystone_admin)]$ date
Thu Aug 15 17:52:23 UTC 2019

$ kubectl get pods -n openstack
NAME READY STATUS RESTARTS AGE
cinder-api-78f48686bb-96lld 1/1 Running 0 47m
cinder-backup-754c8dd97f-mskwn 1/1 Running 0 47m
cinder-backup-storage-init-9t2zm 0/1 Completed 0 47m
cinder-bootstrap-xtcbx 0/1 Completed 0 47m
cinder-create-internal-tenant-99mwr 0/1 Completed 0 47m
cinder-db-init-vj54q 0/1 Completed 0 47m
cinder-db-sync-zsmbn 0/1 Completed 0 47m
cinder-ks-endpoints-n4tn4 0/9 Completed 0 47m
cinder-ks-service-nhnmq 0/3 Completed 0 47m
cinder-ks-user-dbt4q 0/1 Completed 0 47m
cinder-rabbit-init-tzlqp 0/1 Completed 0 47m
cinder-scheduler-5ffb4cdcbc-mds46 1/1 Running 0 47m
cinder-storage-init-ljw9m 0/1 Completed 0 47m
cinder-volume-5db657df8-d8f8z 1/1 Running 0 47m
cinder-volume-usage-audit-1565890800-rpzh4 0/1 Completed 0 12m
cinder-volume-usage-audit-1565891100-qcwkx 0/1 Completed 0 7m49s
cinder-volume-usage-audit-1565891400-h7gc6 0/1 Completed 0 2m46s
glance-api-5c8b54bfd7-fqtqx 1/1 Running 0 50m
glance-db-init-d9gpf 0/1 Completed 0 50m
glance-db-sync-mqb44 0/1 Completed 0 50m
glance-ks-endpoints-2bkrl 0/3 Completed 0 50m
glance-ks-service-t7999 0/1 Completed 0 50m
glance-ks-user-ftc89 0/1 Completed 0 50m
glance-rabbit-init-gxkxn 0/1 Completed 0 50m
glance-storage-init-6rvbs 0/1 Completed 0 50m
heat-api-5fb8fbd579-wngdx 1/1 Running 0 37m
heat-bootstrap-5447h 0/1 Completed 0 37m
heat-cfn-6df7cb559-rbr96 1/1 Running 0 37m
heat-db-init-tx9fs 0/1 Completed 0 37m
heat-db-sync-6tmsz 0/1 Completed 0 37m
heat-domain-ks-user-98hrt 0/1 Completed 0 37m
heat-engine-cfbd878c9-tcj6s 1/1 Running 0 37m
heat-engine-cleaner-1565890800-xkwzw 0/1 Completed 0 12m
heat-engine-cleaner-1565891100-zvl5f 0/1 Completed 0 7m49s
heat-engine-cleaner-1565891400-hddz6 0/1 Completed 0 2m46s
heat-ks-endpoints-snd6b 0/6 Completed 0 37m
heat-ks-service-cxhgl 0/2 Completed 0 37m
heat-ks-user-dbsh4 0/1 Completed 0 37m
heat-rabbit-init-bdrz8 0/1 Completed 0 37m
heat-trustee-ks-user-k2rsv 0/1 Completed 0 37m
heat-trusts-qvdqf 0/1 Completed 0 37m
horizon-db-init-hsshs 0/1 Completed 0 35m
horizon-db-sync-lmgxk 0/1 Completed 0 35m
horizon-dc64d5dc9-7pfhn 0/1 Running 0 35m
ingress-bc998b85d-2bw5x 1/1 Running 0 54m
ingress-error-pages-96d7ffbdf-sb2dx 1/1 Running 0 54m
keystone-api-78f4777456-zp8fm 1/1 Running 0 51m
keystone-bootstrap-6xtzx 0/1 Completed 0 51m
keystone-credential-setup-nxhn6 0/1 Completed 0 51m
keystone-db-init-t76bc 0/1 Completed 0 51m
keystone-db-sync-qzv8k 0/1 Completed 0 51m
keystone-domain-manage-zj77w 0/1 Completed 0 51m
keystone-fernet-setup-7dlg4 0/1 Completed 0 51m
keystone-rabbit-init-vk6nd 0/1 Completed 0 51m
libvirt-libvirt-default-d7wqx 1/1 Running 0 45m
mariadb-ingress-6d547f94f7-244tx 1/1 Running 0 53m
mariadb-ingress-error-pages-55f8b6b5fc-v4rlg 1/1 Running 0 53m
mariadb-server-0 1/1 Running 0 53m
neutron-db-init-hcrzc 0/1 Completed 0 45m
neutron-db-sync-snskn 0/1 Completed 0 45m
neutron-dhcp-agent-controller-0-937646f6-8vfbd 1/1 Running 0 45m
neutron-ks-endpoints-jsbmg 0/3 Completed 0 45m
neutron-ks-service-q48xz 0/1 Completed 0 45m
neutron-ks-user-22sq6 0/1 Completed 0 45m
neutron-l3-agent-controller-0-937646f6-7nqms 1/1 Running 0 45m
neutron-metadata-agent-controller-0-937646f6-fcljr 1/1 Running 0 45m
neutron-ovs-agent-controller-0-937646f6-pk44g 1/1 Running 0 45m
neutron-rabbit-init-kqhb8 0/1 Completed 0 45m
neutron-server-76f487d475-wc4xv 1/1 Running 0 45m
neutron-sriov-agent-controller-0-937646f6-ssgvh 1/1 Running 0 45m
nova-api-metadata-7bbfb8f59c-sgfkk 1/1 Running 1 45m
nova-api-osapi-5d5499c98d-q6mw5 1/1 Running 0 45m
nova-api-proxy-7f4bf644cf-m85xx 1/1 Running 0 45m
nova-bootstrap-2vh89 0/1 Completed 0 45m
nova-cell-setup-dn2nt 0/1 Completed 0 45m
nova-compute-controller-0-937646f6-wgbnp 2/2 Running 0 45m
nova-conductor-6f85fccf9f-786l8 1/1 Running 0 45m
nova-consoleauth-6f5b755bcf-8dxnw 1/1 Running 0 45m
nova-db-init-bfddb 0/3 Completed 0 45m
nova-db-sync-kpjst 0/1 Completed 0 45m
nova-ks-endpoints-rmrz6 0/3 Completed 0 45m
nova-ks-service-jnrvw 0/1 Completed 0 45m
nova-ks-user-tmhgn 0/1 Completed 0 45m
nova-novncproxy-d569fc8bf-hclkz 1/1 Running 0 45m
nova-rabbit-init-8cfvt 0/1 Completed 0 45m
nova-scheduler-79c996fbd6-lqpjk 1/1 Running 0 45m
nova-storage-init-lcqmg 0/1 Completed 0 45m
osh-openstack-memcached-memcached-b56979599-mq6lq 1/1 Running 0 52m
osh-openstack-rabbitmq-cluster-wait-2nbgc 0/1 Completed 0 52m
osh-openstack-rabbitmq-rabbitmq-0 1/1 Running 0 52m
placement-api-7ff8b4d99c-v9l9j 1/1 Running 0 45m
placement-db-init-8zhww 0/1 Completed 0 45m
placement-db-sync-2hstd 0/1 Completed 0 45m
placement-ks-endpoints-tg5ct 0/3 Completed 0 45m
placement-ks-service-lhfv7 0/1 Completed 0 45m
placement-ks-user-67dkz 0/1 Completed 0 45m
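
The corresponding failure is recorded by the sysinv conductor. One way to pull the relevant lines, assuming the default StarlingX log location of /var/log/sysinv.log:

$ sudo grep -E 'kube_app|KubeAppApplyFailure' /var/log/sysinv.log | tail -n 50
# path and filter pattern are assumptions; adjust to the actual log location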

2019-08-15 17:51:19.302 102439 INFO sysinv.api.controllers.v1.host [-] controller-0 Action unlock perform notify_mtce
2019-08-15 17:51:19.315 102439 INFO sysinv.api.controllers.v1.host [-] Remove unlock ready flag
2019-08-15 17:51:19.348 102439 INFO sysinv.api.controllers.v1.mtce_api [-] number of calls to rest_api_request=1 (max_retry=3)
2019-08-15 17:51:19.350 102439 INFO sysinv.api.controllers.v1.rest_api [-] PATCH cmd:http://localhost:2112/v1/hosts/49b05709-aa5f-4505-b179-b555e9da5f7a hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:{"tboot": "false", "ttys_dcd": null, "subfunctions": "controller,worker", "bm_ip": null, "install_state": null, "rootfs_device": "/dev/disk/by-path/pci-0000:00:1f.2-ata-1.0", "bm_username": null, "operation": "modify", "serialid": null, "id": 1, "console": "ttyS0,115200n8", "uuid": "49b05709-aa5f-4505-b179-b555e9da5f7a", "mgmt_ip": "192.168.204.3", "software_load": "19.01", "config_status": null, "hostname": "controller-0", "iscsi_initiator_name": "iqn.1994-05.com.redhat:d981b2216514", "capabilities": {"stor_function": "monitor"}, "install_output": "text", "location": {}, "availability": "online", "invprovision": "provisioned", "peer_id": null, "administrative": "locked", "personality": "controller", "recordtype": "standard", "bm_mac": null, "inv_state": "inventoried", "mtce_info": null, "isystem_uuid": "96d164ae-6cf6-4943-adf4-9d80334f7593", "boot_device": "/dev/disk/by-path/pci-0000:00:1f.2-ata-1.0", "install_state_info": null, "mgmt_mac": "00:00:00:00:00:00", "subfunction_oper": "disabled", "target_load": "19.01", "vsc_controllers": null, "operational": "disabled", "subfunction_avail": "online", "action": "unlock", "bm_type": null}
2019-08-15 17:51:19.364 102439 INFO sysinv.api.controllers.v1.rest_api [-] Response={u'status': u'pass'}
2019-08-15 17:51:19.551 102439 INFO sysinv.api.controllers.v1.host [-] Reapplying the stx-openstack app
2019-08-15 17:51:19.601 101429 INFO sysinv.conductor.kube_app [-] Register the initial abort status of app stx-openstack
2019-08-15 17:51:19.622 102439 INFO sysinv.api.controllers.v1.host [-] host controller-0 ihost_patch_end_2019-08-15-17-51-19 patch
2019-08-15 17:51:19.729 101429 INFO sysinv.conductor.kube_app [-] Application stx-openstack (1.0-17-centos-stable-versioned) apply started.
2019-08-15 17:51:20.694 102439 INFO sysinv.api.controllers.v1.host [-] controller-0 1. delta_handle ['uptime', 'task']
2019-08-15 17:51:21.988 101429 INFO sysinv.conductor.kube_app [-] Generating application overrides...
2019-08-15 17:51:25.149 101429 INFO sysinv.helm.neutron [req-f3f53d34-5fd0-4666-810d-247408190376 admin admin] _get_neutron_ml2_config={'ml2': {'physical_network_mtus': 'group0-data0:1500,group0-data1:1500'}, 'ml2_type_flat': {'flat_networks': ''}}
2019-08-15 17:51:30.979 101429 ERROR sysinv.conductor.kube_app [-] Command '['helm', 'install', '--dry-run', '--debug', '--values', '/tmp/tmpeVUOC4', '--values', '/tmp/tmpmk_v2W', '/tmp/tmpve5P2o']' returned non-zero exit status 1
2019-08-15 17:51:30.979 101429 TRACE sysinv.conductor.kube_app Traceback (most recent call last):
2019-08-15 17:51:30.979 101429 TRACE sysinv.conductor.kube_app File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 1797, in perform_app_apply
2019-08-15 17:51:30.979 101429 TRACE sysinv.conductor.kube_app armada_format=True, armada_chart_info=app.charts, combined=True)
2019-08-15 17:51:30.979 101429 TRACE sysinv.conductor.kube_app File "/usr/lib64/python2.7/site-packages/sysinv/helm/helm.py", line 41, in _wrapper
2019-08-15 17:51:30.979 101429 TRACE sysinv.conductor.kube_app return func(self, *args, **kwargs)
2019-08-15 17:51:30.979 101429 TRACE sysinv.conductor.kube_app File "/usr/lib64/python2.7/site-packages/sysinv/helm/helm.py", line 561, in generate_helm_application_overrides
2019-08-15 17:51:30.979 101429 TRACE sysinv.conductor.kube_app file_overrides=file_overrides)
2019-08-15 17:51:30.979 101429 TRACE sysinv.conductor.kube_app File "/usr/lib64/python2.7/site-packages/sysinv/helm/helm.py", line 432, in merge_overrides
2019-08-15 17:51:30.979 101429 TRACE sysinv.conductor.kube_app output = subprocess.check_output(cmd, env=env)
2019-08-15 17:51:30.979 101429 TRACE sysinv.conductor.kube_app File "/usr/lib64/python2.7/subprocess.py", line 575, in check_output
2019-08-15 17:51:30.979 101429 TRACE sysinv.conductor.kube_app raise CalledProcessError(retcode, cmd, output=output)
2019-08-15 17:51:30.979 101429 TRACE sysinv.conductor.kube_app CalledProcessError: Command '['helm', 'install', '--dry-run', '--debug', '--values', '/tmp/tmpeVUOC4', '--values', '/tmp/tmpmk_v2W', '/tmp/tmpve5P2o']' returned non-zero exit status 1
2019-08-15 17:51:30.979 101429 TRACE sysinv.conductor.kube_app
2019-08-15 17:51:31.184 101429 ERROR sysinv.conductor.kube_app [-] Application apply aborted!.
2019-08-15 17:51:31.185 101429 INFO sysinv.conductor.kube_app [-] Deregister the abort status of app stx-openstack
2019-08-15 17:51:31.186 101429 ERROR sysinv.openstack.common.rpc.amqp [req-f3f53d34-5fd0-4666-810d-247408190376 admin admin] Exception during message handling
2019-08-15 17:51:31.186 101429 TRACE sysinv.openstack.common.rpc.amqp Traceback (most recent call last):
2019-08-15 17:51:31.186 101429 TRACE sysinv.openstack.common.rpc.amqp File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 438, in _process_data
2019-08-15 17:51:31.186 101429 TRACE sysinv.openstack.common.rpc.amqp **args)
2019-08-15 17:51:31.186 101429 TRACE sysinv.openstack.common.rpc.amqp File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2019-08-15 17:51:31.186 101429 TRACE sysinv.openstack.common.rpc.amqp result = getattr(proxyobj, method)(ctxt, **kwargs)
2019-08-15 17:51:31.186 101429 TRACE sysinv.openstack.common.rpc.amqp File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 10175, in perform_app_apply
2019-08-15 17:51:31.186 101429 TRACE sysinv.openstack.common.rpc.amqp app_applied = self._app.perform_app_apply(rpc_app, mode)
2019-08-15 17:51:31.186 101429 TRACE sysinv.openstack.common.rpc.amqp File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 1834, in perform_app_apply
2019-08-15 17:51:31.186 101429 TRACE sysinv.openstack.common.rpc.amqp name=app.name, version=app.version, reason=e)
2019-08-15 17:51:31.186 101429 TRACE sysinv.openstack.common.rpc.amqp KubeAppApplyFailure: Deployment of application stx-openstack (1.0-17-centos-stable-versioned) failed: Command '['helm', 'install', '--dry-run', '--debug', '--values', '/tmp/tmpeVUOC4', '--values', '/tmp/tmpmk_v2W', '/tmp/tmpve5P2o']' returned non-zero exit status 1
2019-08-15 17:51:31.186 101429 TRACE sysinv.openstack.common.rpc.amqp
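
Per the traceback, the apply fails in helm.py merge_overrides, where sysinv shells out to a helm dry-run to merge the generated override files; a non-zero exit from that command becomes CalledProcessError and is surfaced as KubeAppApplyFailure. The failing step can be approximated by hand to see the underlying helm error (the values files and chart path below are hypothetical, since the original /tmp files are removed after the failure):

$ helm install --dry-run --debug \
    --values /tmp/system-overrides.yaml \
    --values /tmp/user-overrides.yaml \
    /path/to/chart
$ echo "helm exit status: $?"
# any non-zero status here is what sysinv converts into the apply failure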

Reproducibility
---------------
Reproducible

System Configuration
--------------------
One node system

Branch/Pull Time/Commit
-----------------------
2019-08-12_20-59-00

Last Pass
---------

Timestamp/Logs
--------------
see inline

~17:51:31 in the logs

Alarm events from Horizon as follows:
2019-08-15T14:00:06 set 100.101 Platform CPU threshold exceeded ; threshold 95.00%, actual 96.41% host=controller-0 critical
2019-08-15T13:58:06 set 100.101 Platform CPU threshold exceeded ; threshold 90.00%, actual 93.80% host=controller-0 major
2019-08-15T13:56:06 set 100.101 Platform CPU threshold exceeded ; threshold 95.00%, actual 99.91% host=controller-0 critical
2019-08-15T13:54:06 set 100.101 Platform CPU threshold exceeded ; threshold 90.00%, actual 93.83% host=controller-0 major
2019-08-15T13:52:06 set 100.101 Platform CPU threshold exceeded ; threshold 95.00%, actual 99.94% host=controller-0 critical
2019-08-15T13:51:31 clear 750.004 Application Apply In Progress k8s_application=stx-openstack warning
2019-08-15T13:51:31 set 750.002 Application Apply Failure k8s_application=stx-openstack major
2019-08-15T13:51:19 set 750.004 Application Apply In Progress k8s_application=stx-openstack warning
2019-08-15T13:51:19 clear 200.001 controller-0 was administratively locked to take it out-of-service. host=controller-0 warning
2019-08-15T13:51:19 log 200.021 controller-0 manual 'unlock' request host=controller-0.command=unlock not-applicable
2019-08-15T13:50:06 set 100.101 Platform CPU threshold exceeded ; threshold 95.00%, actual 99.92% host=controller-0 critical
2019-08-15T13:48:40 log 275.001 Host controller-0 hypervisor is now locked-disabled host=controller-0.hypervisor=05c21c86-a8f3-4828-919d-b470d581079a critical
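
The same alarms and events can also be pulled from the CLI; assuming the standard fault-management client is installed:

$ fm alarm-list
$ fm event-list
# fm is the StarlingX fault-management CLI; output format may differ from the Horizon view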

Test Activity
-------------
Retest of another Launchpad issue

Revision history for this message
Wendy Mitchell (wmitchellwr) wrote :
summary: - simplex unlock failed (traceback KubeAppApplyFailure))
+ simplex subsequent unlock failed (traceback KubeAppApplyFailure)
Revision history for this message
Peng Peng (ppeng) wrote :

Issue was reproduced on SM-3 sanity run.
Lab: SM_3
Load: 2019-08-15_20-59-00
Job: StarlingX_2.0_build

[2019-08-16 13:55:44,483] 301 DEBUG MainThread ssh.send :: Send 'system --os-username 'admin' --os-password 'Li69nux*' --os-project-name admin --os-auth-url http://192.168.204.2:5000/v3 --os-user-domain-name Default --os-project-domain-name Default --os-endpoint-type internalURL --os-region-name RegionOne host-unlock controller-0'
[2019-08-16 13:55:56,247] 423 DEBUG MainThread ssh.expect :: Output:
+---------------------+--------------------------------------------+
| Property | Value |
+---------------------+--------------------------------------------+
| action | none |
| administrative | locked |
| availability | online |

[2019-08-16 14:03:29,941] 280 WARNING MainThread ssh.wait_for_disconnect:: Did not disconnect to 128.224.150.81 within 300s

[2019-08-16 14:19:19,632] 301 DEBUG MainThread ssh.send :: Send 'system --os-username 'admin' --os-password 'Li69nux*' --os-project-name admin --os-auth-url http://192.168.204.2:5000/v3 --os-user-domain-name Default --os-project-domain-name Default --os-endpoint-type internalURL --os-region-name RegionOne host-show controller-0'
[2019-08-16 14:19:23,872] 423 DEBUG MainThread ssh.expect :: Output:
+---------------------+----------------------------------------------------------------------+
| Property | Value |
+---------------------+----------------------------------------------------------------------+
| action | none |
| administrative | locked |

Numan Waheed (nwaheed)
tags: added: stx.retestneeded
Revision history for this message
Ghada Khalil (gkhalil) wrote :

So does the unlock only fail when the remote storage label is applied before the unlock? Is the sanity failure the same as the originally reported issue?

Changed in starlingx:
status: New → Incomplete
Revision history for this message
Peng Peng (ppeng) wrote :

Sanity failure was "host-unlock failed" only, no storage label applied.

Revision history for this message
Ghada Khalil (gkhalil) wrote :

@Peng, The sanity issue doesn't match the original issue reported in this bug. Please open a new bug for the sanity issue.

tags: added: stx.storage
Changed in starlingx:
importance: Undecided → Medium
status: Incomplete → Triaged
Revision history for this message
Ghada Khalil (gkhalil) wrote :

Marking as stx.3.0 / medium priority as a user should be able to set this label on simplex. However, we don't believe this is a high runner scenario.

tags: added: stx.3.0 stx.containers
removed: stx.storage
Changed in starlingx:
assignee: nobody → Bob Church (rchurch)
Revision history for this message
Wendy Mitchell (wmitchellwr) wrote :

Reproducible
BUILD_ID="20190821T053000Z"

Frank Miller (sensfan22)
Changed in starlingx:
assignee: Bob Church (rchurch) → Daniel Badea (daniel.badea)
Revision history for this message
Daniel Badea (daniel.badea) wrote :

Unable to reproduce in AIO-SX VirtualBox lab with developer build from "2019-09-06 14:36:46 +0000"
Test steps (a command sketch follows the result below):
1. install AIO-SX
2. upload and apply stx-openstack
3. lock controller-0
4. add remote-storage label
5. unlock controller-0

Result:
1. controller-0 reboots and becomes unlocked/available
2. stx-openstack is applied/completed
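
For reference, the retest sequence above corresponds roughly to the following commands (the application tarball name is illustrative):

$ system application-upload stx-openstack-<version>.tgz   # tarball name is illustrative
$ system application-apply stx-openstack
$ system host-lock controller-0
$ system host-label-assign controller-0 remote-storage=enabled
$ system host-unlock controller-0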

Revision history for this message
Daniel Badea (daniel.badea) wrote :

Needs retest.

Changed in starlingx:
status: Triaged → Invalid
Revision history for this message
Wendy Mitchell (wmitchellwr) wrote :

Was able to apply the remote-storage label successfully and unlock the controller as well.
hw: wcp_78
2019-09-20_10-09-22

tags: removed: stx.retestneeded
Revision history for this message
Wendy Mitchell (wmitchellwr) wrote :

Note: this now raises the following alarm, indicating that stx-openstack must be reapplied:
750.006 A configuration change requires a reapply of the stx-openstack application. k8s_application=stx-openstack
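
Once that alarm is raised, the reapply can be triggered manually; a minimal sequence, assuming the standard application and fault-management CLIs:

$ system application-apply stx-openstack
$ fm alarm-list
# alarm 750.006 should clear after the reapply completes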
