While upgrading Kubernetes, got stuck in state "upgrading-first-master"

Bug #2024895 reported by Boovan Rajendran
Affects: StarlingX
Status: In Progress
Importance: Undecided
Assigned to: Boovan Rajendran
Milestone: none

Bug Description

Brief Description:

When _get_kubernetes_join_cmd() hits an exception, the code in kube_upgrade_control_plane() in sysinv/conductor/manager.py does not properly transition to the kubernetes.KUBE_UPGRADING_FIRST_MASTER_FAILED state to represent the failure.
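A minimal sketch of the missing failure handling, in plain Python with hypothetical stand-in names (KubeUpgrade, get_join_cmd, and the 'upgrading-first-master-failed' state string are illustrative assumptions, not the actual sysinv code):

    import logging

    LOG = logging.getLogger(__name__)

    # The first state string matches the bug title; the '-failed' value is
    # an assumption about the sysinv kubernetes constants.
    KUBE_UPGRADING_FIRST_MASTER = 'upgrading-first-master'
    KUBE_UPGRADING_FIRST_MASTER_FAILED = 'upgrading-first-master-failed'

    class KubeUpgrade(object):
        """Hypothetical stand-in for the sysinv kube_upgrade DB object."""
        def __init__(self):
            self.state = KUBE_UPGRADING_FIRST_MASTER

        def save(self):
            pass  # the real object would persist the state to the sysinv DB

    def get_join_cmd():
        # Stand-in for _get_kubernetes_join_cmd(); simulate the failure path.
        raise RuntimeError("join command generation failed")

    def upgrade_control_plane(upgrade):
        try:
            join_cmd = get_join_cmd()
            # ... proceed with the first-master upgrade using join_cmd ...
        except Exception:
            LOG.exception("Failed upgrading first control plane")
            # The missing transition: record the failure so the upgrade can
            # be retried instead of staying stuck in 'upgrading-first-master'.
            upgrade.state = KUBE_UPGRADING_FIRST_MASTER_FAILED
            upgrade.save()

    upgrade = KubeUpgrade()
    upgrade_control_plane(upgrade)
    assert upgrade.state == KUBE_UPGRADING_FIRST_MASTER_FAILED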

Severity

Severe (without manual action, the K8s upgrade cannot be retried).

Steps to Reproduce:

 1. Install System Controller with K8s 1.23.1.
 2. Create a K8s upgrade strategy ($ sw-manager kube-upgrade-strategy create --to-version v1.24.4 --alarm-restriction relaxed)
 3. Apply the strategy ($ sw-manager kube-upgrade-strategy apply)
 4. Watch the progress ($ watch sw-manager kube-upgrade-strategy show)

Expected Behavior:

System should transition to the kubernetes.KUBE_UPGRADING_FIRST_MASTER_FAILED state to represent the failure.

Actual Behavior:

The system remains stuck in the kubernetes.KUBE_UPGRADING_FIRST_MASTER state ("upgrading-first-master").

Reproducibility:

Reproducible

System Configuration:

N/A

Alarms:

N/A

Test Activity:

Developer Testing

Workaround:

Manually edit the sysinv DB and set the state to kubernetes.KUBE_UPGRADING_FIRST_MASTER_FAILED.
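For example (a hypothetical sketch; the kube_upgrade table name and the 'upgrading-first-master-failed' state string are assumptions to verify against your sysinv version before running):

$ sudo -u postgres psql -d sysinv -c \
    "UPDATE kube_upgrade SET state = 'upgrading-first-master-failed' \
     WHERE state = 'upgrading-first-master';"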

Changed in starlingx:
assignee: nobody → Boovan Rajendran (brajendr)
OpenStack Infra (hudson-openstack) wrote: Fix proposed to config (master)

Fix proposed to branch: master
Review: https://review.opendev.org/c/starlingx/config/+/886813

Changed in starlingx:
status: New → In Progress