IPv6: lock host stuck in ceph get_monitors_status check
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
StarlingX | Fix Released | Medium | Daniel Badea |
Bug Description
Brief Description
-----------------
Unable to lock any host: the system host-lock command hung. This was observed during a patch install test on an IPv6 lab (wolfpass-
Launched 50 pods using resource-consumer --image . Pods were active.
Test patch upload succeeded.
Test patch apply succeeded.
Using Horizon, the patch orchestration sw-patch strategy was created successfully.
The orchestration sw-patch strategy was applied successfully.
Orchestration locked standby controller-1 and swacted successfully, then failed when locking the new standby controller.
A manual attempt to lock the new standby also failed, and the prompt was not returned. The condition persisted: locking other hosts hit the same issue, and the behavior remained even after deleting the resource-consumer pods.
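The hang in the ceph get_monitors_status path on an IPv6-only lab is consistent with a common IPv6 pitfall: building an address:port endpoint string without bracketing the address. The sketch below is purely illustrative (the function name and endpoint format are assumptions, not the actual sysinv or ceph code):

```python
import ipaddress

# Hypothetical helper, not the actual sysinv code: IPv6 literals must be
# bracketed in "addr:port" strings, otherwise the colons inside the address
# are ambiguous and a client can parse the endpoint incorrectly.
def format_mon_endpoint(addr, port=6789):
    if ipaddress.ip_address(addr).version == 6:
        return "[{}]:{}".format(addr, port)
    return "{}:{}".format(addr, port)

print(format_mon_endpoint("face::3"))      # -> [face::3]:6789 (management address from this lab)
print(format_mon_endpoint("10.10.10.3"))   # -> 10.10.10.3:6789
```

An IPv4-only test pass would never exercise the bracketed branch, which matches the note below that this scenario was not previously tested on IPv6.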
sudo sw-patch show 2019-09-
Password:
2019-09-
Release: 19.09
Patch State: Partial-Apply
RR: Y
Summary: Patch to /etc/init.d/logmgmt
Contents:
sw-patch query-hosts
Hostname IP Address Patch Current Reboot Required Release State
============ ============ ============= =============== ======= =====
compute-0 face::fb1:
compute-1 face::5455:
compute-2 face::75a5:
controller-0 face::3 No Yes 19.09 idle
controller-1 face::4 Yes No 19.09 idle
$ sw-patch query
Patch ID RR Release Patch State
======== == ======= ===========
2019-09-
Sysinv logs around the failure (entries truncated; the relevant entry is not certain):
2019-09-06 15:51:56.996 110960 INFO sysinv.
2019-09-06 15:54:31.036 110951 INFO sysinv.
2019-09-06 15:54:31.141 110951 INFO sysinv.
2019-09-06 15:54:31.181 110957 INFO sysinv.
2019-09-06 15:54:31.289 110957 INFO sysinv.
2019-09-06 15:54:31.292 110960 INFO sysinv.
2019-09-06 15:54:31.389 110957 INFO sysinv.
2019-09-06 15:54:31.397 110960 INFO sysinv.
2019-09-06 15:54:31.492 110960 INFO sysinv.
2019-09-06 15:55:13.765 110957 INFO sysinv.
2019-09-06 15:55:28.128 282169 INFO sysinv.
2019-09-06 15:56:03.015 286216 INFO sysinv.
2019-09-06 15:56:14.259 287377 INFO sysinv.
2019-09-06 15:56:15.769 110953 INFO sysinv.
2019-09-06 15:56:15.886 110960 INFO sysinv.
2019-09-06 15:57:24.770 110960 INFO sysinv.
2019-09-06 15:59:31.052 110951 INFO sysinv.
2019-09-06 15:59:32.760 110951 INFO sysinv.
2019-09-06 16:00:33.771 110951 INFO sysinv.
2019-09-06 16:00:33.776 110957 INFO sysinv.
2019-09-06 16:00:33.812 110960 INFO sysinv.
2019-09-06 16:00:33.820 110960 INFO sysinv.
2019-09-06 16:01:40.769 110951 INFO sysinv.
2019-09-06 16:04:31.220 110957 INFO sysinv.
2019-09-06 16:04:31.325 110957 INFO sysinv.
Severity
--------
Major
Steps to Reproduce
------------------
1. Create 50 pods using resource-consumer:
kubectl run resource-consumer --image=
kubectl get services resource-consumer
kubectl scale deploy/
2. Upload the patch.
3. Apply the patch.
4. Create a patch strategy using Horizon orchestration.
5. Apply the patch strategy and monitor for failure. The standby host lock and swact were successful; the new standby lock failed.
6. Attempted host-lock manually; it failed as per the description.
Expected Behavior
------------------
Host-lock should lock the host and return the prompt. If there is an error, it should report the error and return the prompt.
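One way to meet this expectation is to bound any status check the lock path depends on, so a stuck check surfaces as an error instead of a hung prompt. A hedged sketch of the general pattern (not the actual sysinv implementation), using a subprocess timeout:

```python
import subprocess

# Illustrative only: run a health-check command with a hard timeout so the
# caller can report an error and return, instead of blocking indefinitely.
def run_check(cmd, timeout_s=10):
    try:
        res = subprocess.run(cmd, capture_output=True, timeout=timeout_s)
        return res.returncode
    except subprocess.TimeoutExpired:
        return None  # treat as "check failed": report the error, return the prompt

print(run_check(["true"]))                      # -> 0 (check passed quickly)
print(run_check(["sleep", "30"], timeout_s=1))  # -> None (check timed out)
```

With this pattern, a monitor-status check that hangs on an unreachable IPv6 endpoint would fail fast rather than leaving host-lock stuck.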
Actual Behavior
----------------
As described above, the host-lock command hangs and hosts fail to lock.
Reproducibility
---------------
Once the lab was in this state, locking failed every time. The full test scenario was run 3 times; the issue was seen once.
System Configuration
--------------------
Regular system, IPv6
Load
----
Build date 2019-09-05 00:13:38
Last Pass
---------
Not available
Timestamp/Logs
--------------
Manual lock
2019-09-
Test Activity
-------------
Regression test
description: | updated |
tags: | added: stx.retestneeded |
Marking as stx.3.0 - maybe an issue specific to IPv6 (scenario not tested previously).