Re-apply of application failed if the previous Armada apply failed with an exception

Bug #1911243 reported by Angie Wang
Affects: StarlingX
Status: Fix Released
Importance: Medium
Assigned to: Angie Wang

Bug Description

Brief Description
-----------------
If an application apply/remove fails because the Armada operation raises an exception or exits abnormally, the Armada lock is not released, which causes the subsequent re-apply of the application to fail because it cannot acquire the lock.
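For context, Armada serializes applies with a lock that it stores in the cluster and acquires/releases through a Python context manager (visible in the traceback below). The following stand-in sketch, using hypothetical names and a lock file in place of the cluster-side custom resource, illustrates why killing the Armada process leaves the lock behind:

# Stand-in sketch only (hypothetical names, not Armada's actual code).
# The real lock is a Kubernetes custom resource acquired and released by a
# context manager; a lock file plays that role here because, like the
# custom resource, it outlives the process that created it.
import os


class ClusterLock(object):
    def __init__(self, name, path="/tmp/armada-lock-demo"):
        self.name = name
        self.path = path          # stand-in for the lock custom resource

    def __enter__(self):
        if os.path.exists(self.path):
            # Mirrors "Unable to acquire lock before timeout" in the logs below
            raise RuntimeError("lock already held: %s" % self.name)
        with open(self.path, "w") as f:
            f.write(self.name)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Runs on normal completion and on Python exceptions, but never if the
        # process is killed outright (e.g. the Armada pod is terminated), so
        # the lock record persists and the next apply cannot acquire it.
        os.remove(self.path)


if __name__ == "__main__":
    with ClusterLock("locks.armada.process.lock"):
        pass   # a clean apply releases the lock on exit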

Severity
--------
Major

Steps to Reproduce
------------------
An Armada operation exception can be caused by a connection issue (e.g. losing the connection to the Kubernetes cluster, or the connection being aborted by the cluster).
To simulate:
1. Apply an application.
2. Terminate the Armada pod while the apply is in progress (see the sketch below).
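To script step 2, one possibility (a sketch only) is to delete the Armada pod abruptly with the Kubernetes Python client; the namespace and the pod-name prefix below are assumptions about the deployment and should be adjusted to match the running system:

from kubernetes import client, config


def kill_armada_pod(namespace="armada"):   # assumed namespace
    """Abruptly delete the Armada pod to simulate an abnormal Armada exit."""
    config.load_kube_config()
    core = client.CoreV1Api()
    for pod in core.list_namespaced_pod(namespace).items:
        if pod.metadata.name.startswith("armada"):   # assumed naming convention
            # grace_period_seconds=0 skips graceful shutdown, matching the
            # "abnormal exit of the Armada operation" described above
            core.delete_namespaced_pod(
                pod.metadata.name, namespace, grace_period_seconds=0)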

Expected Behavior
------------------
The Armada lock is released and the subsequent re-apply succeeds.

Actual Behavior
----------------
The Armada lock is not released and the subsequent re-apply fails.

System Configuration
--------------------
Any type of system

Last Pass
---------
N/A

Timestamp/Logs
--------------
The first apply of platform-integ-apps:

sysinv 2020-07-29 15:33:28.025 101299 ERROR sysinv.conductor.kube_app [-] Armada request apply for manifest /manifests/platform-integ-apps/20.06-9/platform-integ-apps-manifest.yaml failed: ('Connection aborted.', BadStatusLine("''",)) : ConnectionError: ('Connection aborted.', BadStatusLine("''",))
sysinv 2020-07-29 15:33:28.027 101299 INFO sysinv.conductor.kube_app [-] Exiting progress monitoring thread for app platform-integ-apps

The retry of platform-integ-apps:
sysinv 2020-07-29 15:33:28.028 101299 INFO sysinv.conductor.kube_app [-] platform-integ-apps app failed applying. Retrying.
2020-07-29 15:33:59.566 37 WARNING armada.handlers.lock [-] There is already an existing lock: kubernetes.client.rest.ApiException: (409)
2020-07-29 15:34:59.796 37 ERROR armada.cli [-] Caught unexpected exception: armada.handlers.lock.LockException: Unable to acquire lock before timeout
2020-07-29 15:34:59.796 37 ERROR armada.cli Traceback (most recent call last):
2020-07-29 15:34:59.796 37 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/__init__.py", line 38, in safe_invoke
2020-07-29 15:34:59.796 37 ERROR armada.cli self.invoke()
2020-07-29 15:34:59.796 37 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 213, in invoke
2020-07-29 15:34:59.796 37 ERROR armada.cli resp = self.handle(documents, tiller)
2020-07-29 15:34:59.796 37 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py", line 72, in func_wrapper
2020-07-29 15:34:59.796 37 ERROR armada.cli with Lock(lock_name, bearer_token=bearer_token) as lock:
2020-07-29 15:34:59.796 37 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py", line 180, in __enter__
2020-07-29 15:34:59.796 37 ERROR armada.cli self.acquire_lock()
2020-07-29 15:34:59.796 37 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py", line 169, in acquire_lock
2020-07-29 15:34:59.796 37 ERROR armada.cli raise LockException("Unable to acquire lock before timeout")
2020-07-29 15:34:59.796 37 ERROR armada.cli armada.handlers.lock.LockException: Unable to acquire lock before timeout
2020-07-29 15:34:59.796 37 ERROR armada.cli

Test Activity
-------------
Developer Testing

Workaround
----------
kubectl delete locks.armada.process locks.armada.process.lock -n kube-system
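For reference, roughly the same cleanup can be scripted with the Kubernetes Python client; the CRD group, version and plural below are inferred from the kubectl command above and are assumptions rather than values taken from Armada's source:

from kubernetes import client, config
from kubernetes.client.rest import ApiException


def clear_stale_armada_lock():
    """Delete the leftover Armada lock custom resource, if any."""
    config.load_kube_config()   # or load_incluster_config() when run in a pod
    api = client.CustomObjectsApi()
    try:
        api.delete_namespaced_custom_object(
            group="armada.process",              # assumed CRD group
            version="v1",                        # assumed CRD version
            namespace="kube-system",
            plural="locks",                      # assumed CRD plural
            name="locks.armada.process.lock",    # name used in the kubectl command
        )
    except ApiException as e:
        if e.status != 404:   # a missing lock just means there is nothing to clean
            raise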

Angie Wang (angiewang)
Changed in starlingx:
assignee: nobody → Angie Wang (angiewang)
summary: - Re-apply of application failed if the previous apply failed with an
- exception
+ Re-apply of application failed if the previous Armada apply failed with
+ an exception
Bob Church (rchurch) wrote :
Ghada Khalil (gkhalil)
Changed in starlingx:
importance: Undecided → Medium
status: New → Triaged
tags: added: stx.5.0 stx.config stx.containers
Ghada Khalil (gkhalil)
Changed in starlingx:
status: Triaged → In Progress
Angie Wang (angiewang) wrote :
Changed in starlingx:
status: In Progress → Fix Released
OpenStack Infra (hudson-openstack) wrote : Fix proposed to config (f/centos8)

Fix proposed to branch: f/centos8
Review: https://review.opendev.org/c/starlingx/config/+/793460

OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: f/centos8
Review: https://review.opendev.org/c/starlingx/config/+/793696

OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: f/centos8
Review: https://review.opendev.org/c/starlingx/config/+/794611

OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: f/centos8
Review: https://review.opendev.org/c/starlingx/config/+/794906

OpenStack Infra (hudson-openstack) wrote : Change abandoned on config (f/centos8)

Change abandoned by "Chuck Short <email address hidden>" on branch: f/centos8
Review: https://review.opendev.org/c/starlingx/config/+/794611

OpenStack Infra (hudson-openstack) wrote : Fix merged to config (f/centos8)

Reviewed: https://review.opendev.org/c/starlingx/config/+/794906
Committed: https://opendev.org/starlingx/config/commit/75758b37a5a23c8811355b67e2a430a1713cd85b
Submitter: "Zuul (22348)"
Branch: f/centos8

commit 9e420d9513e5fafb1df4d29567bc299a9e04d58d
Author: Bin Qian <email address hidden>
Date: Mon May 31 14:45:52 2021 -0400

    Add more logging to run docker login

    Add error log for running docker login. The new log could
    help identify docker login failure.

    Closes-Bug: 1930310
    Change-Id: I8a709fb6665de8301fbe3022563499a92b2a0211
    Signed-off-by: Bin Qian <email address hidden>

commit 31c77439d2cea590dfcca13cfa646522665f8686
Author: albailey <email address hidden>
Date: Fri May 28 13:42:42 2021 -0500

    Fix controller-0 downgrade failing to kill ceph

    kill_ceph_storage_monitor tried to manipulate a pmon
    file that does not exist in an AIO-DX environment.

    We no longer invoke kill_ceph_storage_monitor in an
    AIO SX or DX env.

    This allows: "system host-downgrade controller-0"
    to proceed in an AIO-DX environment where that second
    controller (controller-0) was upgraded.

    Partial-Bug: 1929884
    Signed-off-by: albailey <email address hidden>
    Change-Id: I633853f75317736084feae96b5b849c601204c13

commit 0dc99eee608336fe01b58821ea404286371f1408
Author: albailey <email address hidden>
Date: Fri May 28 11:05:43 2021 -0500

    Fix file permissions failure during duplex upgrade abort

    When issuing a downgrade for controller-0 in a duplex upgrade
    abort and rollback scenario, the downgrade command was failing
    because the sysinv API does not have root permissions to set
    a file flag.
    The fix is to use RPC so the conductor can create the flag
    and allow the downgrade for controller-0 to get further.

    Partial-Bug: 1929884
    Signed-off-by: albailey <email address hidden>
    Change-Id: I913bcad73309fe887a12cbb016a518da93327947

commit 7ef3724dad173754e40b45538b1cc726a458cc1c
Author: Chen, Haochuan Z <email address hidden>
Date: Tue May 25 16:16:29 2021 +0800

    Fix bug rook-ceph provision with multi osd on one host

    Test case:
    1, deploy simplex system
    2, apply rook-ceph with below override value
    value.yaml
    cluster:
      storage:
        nodes:
        - name: controller-0
          devices:
          - name: sdb
          - name: sdc
    3, reboot

    Without this fix, only osd pod could launch successfully after boot
    as vg start with ceph could not correctly add in sysinv-database

    Closes-bug: 1929511

    Change-Id: Ia5be599cd168d13d2aab7b5e5890376c3c8a0019
    Signed-off-by: Chen, Haochuan Z <email address hidden>

commit 23505ba77d76114cf8a0bf833f9a5bcd05bc1dd1
Author: Angie Wang <email address hidden>
Date: Tue May 25 18:49:21 2021 -0400

    Fix issue in partition data migration script

    The created partition dictonary partition_map is not
    an ordered dict so we need to sort it by its key -
    device node when iterating it to adjust the device
    nodes/paths for user created extra partitions to ensure
    the number of device node...

tags: added: in-f-centos8
OpenStack Infra (hudson-openstack) wrote : Change abandoned on config (f/centos8)

Change abandoned by "Chuck Short <email address hidden>" on branch: f/centos8
Review: https://review.opendev.org/c/starlingx/config/+/793696

OpenStack Infra (hudson-openstack) wrote :

Change abandoned by "Chuck Short <email address hidden>" on branch: f/centos8
Review: https://review.opendev.org/c/starlingx/config/+/793460
