Controller-0 showing disabled/offline in DM while it is unlocked/available in sysinv

Bug #1917781 reported by Mihnea Saracin
Affects: StarlingX
Status: Fix Released
Importance: Low
Assigned to: Mihnea Saracin

Bug Description

Brief Description
-----------------
Controller-0 is shown as disabled/offline in DM

[2021-02-03 18:52:44,489] 314 DEBUG MainThread ssh.send :: Send 'kubectl get hosts -n=deployment -o=wide'
[2021-02-03 18:52:44,602] 436 DEBUG MainThread ssh.expect :: Output:
NAME           ADMINISTRATIVE   OPERATIONAL   AVAILABILITY   PROFILE                INSYNC   RECONCILED
controller-0   unlocked         disabled      offline        controller-0-profile   false    false
controller-1                                                 controller-1-profile

But in sysinv, it is unlocked/available

[2021-02-03 18:12:39,506] 314 DEBUG MainThread ssh.send :: Send 'system --os-endpoint-type internalURL --os-region-name RegionOne host-show controller-0'
[2021-02-03 18:12:40,497] 436 DEBUG MainThread ssh.expect :: Output:
+-----------------------+----------------------------------------------------------------------+
| Property              | Value                                                                |
+-----------------------+----------------------------------------------------------------------+
| action                | none                                                                 |
| administrative        | unlocked                                                             |
| availability          | available                                                            |

Steps to Reproduce
------------------
Do a fresh install of the system

Expected Behavior
-----------------
DM shows controller-0 unlocked and available

Actual Behavior
---------------
DM shows controller-0 disabled and offline

Reproducibility
---------------
Intermittent

System Configuration
--------------------
Distributed Cloud - System Controller

Branch/Pull Time/Commit
-----------------------
stx master build on "2020-02-01"

Timestamp/Logs
--------------
[sysadmin@controller-0 ~(keystone_admin)]$ [2021-02-03 18:52:03,257] 1785 INFO MainThread fresh_install_helper.wait_for_deploy_mgr_controller_config:: Waiting for controller-0 to become available and true: []
[2021-02-03 18:52:23,266] 69 INFO MainThread kube_helper.exec_kube_cmd:: exec_kube_cmd:kubectl get hosts -n=deployment -o=wide
[2021-02-03 18:52:23,267] 479 DEBUG MainThread ssh.exec_cmd:: Executing command...
[2021-02-03 18:52:23,267] 314 DEBUG MainThread ssh.send :: Send 'kubectl get hosts -n=deployment -o=wide'
[2021-02-03 18:52:23,382] 436 DEBUG MainThread ssh.expect :: Output:
NAME           ADMINISTRATIVE   OPERATIONAL   AVAILABILITY   PROFILE                INSYNC   RECONCILED
controller-0   unlocked         disabled      offline        controller-0-profile   false    false
controller-1                                                 controller-1-profile

Test Activity
-------------
Regression Testing

Changed in starlingx:
assignee: nobody → Mihnea Saracin (msaracin)
Bob Church (rchurch) wrote :

After a node reboot the deployment manager pod was left in an Unknown state. Adding the deployment manager's namespace to the pod recovery service will allow recovery of the deployment manager's functionality.

Fix proposed here: https://review.opendev.org/c/starlingx/integ/+/778737
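
For context, the recovery amounts to cleaning up pods stuck in an Unknown state so their controllers reschedule them. A minimal shell sketch of that idea (not the actual pod recovery change from the review above), assuming the deployment manager runs in the "deployment" namespace seen in the kubectl output:

    # Illustrative only: force-delete pods reported as Unknown in the DM namespace
    # so their owning controller recreates them on a healthy node.
    NS=deployment
    kubectl get pods -n "$NS" --no-headers | awk '$3 == "Unknown" {print $1}' |
    while read -r pod; do
        kubectl delete pod -n "$NS" "$pod" --force --grace-period=0
    done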

Ghada Khalil (gkhalil)
tags: added: stx.containers
Changed in starlingx:
importance: Undecided → Low
status: New → Triaged
Bob Church (rchurch) wrote :
Changed in starlingx:
status: Triaged → Fix Released
OpenStack Infra (hudson-openstack) wrote : Fix proposed to integ (f/centos8)

Fix proposed to branch: f/centos8
Review: https://review.opendev.org/c/starlingx/integ/+/793754

OpenStack Infra (hudson-openstack) wrote : Fix merged to integ (f/centos8)

Reviewed: https://review.opendev.org/c/starlingx/integ/+/793754
Committed: https://opendev.org/starlingx/integ/commit/a13966754d4e19423874ca31bf1533f057380c52
Submitter: "Zuul (22348)"
Branch: f/centos8

commit b310077093fd567944c6a46b7d0adcabe1f2b4b9
Author: Mihnea Saracin <email address hidden>
Date: Sat May 22 18:19:54 2021 +0300

    Fix resize of filesystems in puppet logical_volume

    After system reinstalls there is stale data on the disk
    and puppet fails when resizing, reporting some wrong filesystem
    types. In our case docker-lv was reported as drbd when
    it should have been xfs.

    This problem was solved in some cases, e.g.
    when doing a live fs resize we wipe the last 10MB
    at the end of the partition:
    https://opendev.org/starlingx/stx-puppet/src/branch/master/puppet-manifests/src/modules/platform/manifests/filesystem.pp#L146

    Our issue happened here:
    https://opendev.org/starlingx/stx-puppet/src/branch/master/puppet-manifests/src/modules/platform/manifests/filesystem.pp#L65
    Resize can happen at unlock when a bigger size is detected for the
    filesystem and the 'logical_volume' will resize it.
    To fix this we have to wipe the last 10MB of the partition after the
    'lvextend' cmd in the 'logical_volume' module.

    Tested the following scenarios:

    B&R on SX with default sizes of filesystems and cgts-vg.

    B&R on SX with docker-lv of size 50G, backup-lv also 50G and
    cgts-vg with additional physical volumes:

    - name: cgts-vg
      physicalVolumes:
      - path: /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0
        size: 50
        type: partition
      - path: /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0
        size: 30
        type: partition
      - path: /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0
        type: disk

    B&R on DX system with backup of size 70G and cgts-vg
    with additional physical volumes:

    physicalVolumes:
    - path: /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0
      size: 50
      type: partition
    - path: /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0
      size: 30
      type: partition
    - path: /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0
      type: disk

    Closes-Bug: 1926591
    Change-Id: I55ae6954d24ba32e40c2e5e276ec17015d9bba44
    Signed-off-by: Mihnea Saracin <email address hidden>
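
For illustration, a hedged sketch of the wipe described in the commit message above (this is not the puppet 'logical_volume' provider change; the LV path and target size are placeholders):

    # Sketch: after growing the LV, zero its last 10MB so stale filesystem
    # signatures from a previous install are not detected during the fs resize.
    LV=/dev/cgts-vg/docker-lv          # placeholder logical volume
    lvextend -L 60G "$LV"              # placeholder target size
    SIZE_MB=$(( $(blockdev --getsize64 "$LV") / 1048576 ))
    dd if=/dev/zero of="$LV" bs=1M count=10 seek=$(( SIZE_MB - 10 ))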

commit 3225570530458956fd642fa06b83360a7e4e2e61
Author: Mihnea Saracin <email address hidden>
Date: Thu May 20 14:33:58 2021 +0300

    Execute once the ceph services script on AIO

    The MTC client manages ceph services via ceph.sh which
    is installed on all node types in
    /etc/service.d/{controller,worker,storage}/ceph.sh

    Since the AIO controllers have both controller and worker
    personalities, the MTC client will execute the ceph script
    twice (/etc/service.d/worker/ceph.sh,
    /etc/service.d/controller/ceph.sh).
    This behavior will generate some issues.

    We fix this by exiting the ceph script if it is the one from
    /etc/services.d/worker on AIO systems.

    Closes-Bug: 1928934
    Change-Id: I3e4dc313cc3764f870b8f6c640a60338...
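
For illustration, a sketch of the early-exit guard described above (assumptions: AIO nodes list both personalities in the subfunction field of /etc/platform/platform.conf, and the script can identify itself via $0; this is not the merged change):

    # Illustrative guard at the top of the worker copy of ceph.sh:
    # on AIO nodes, let only the controller copy manage the ceph services.
    source /etc/platform/platform.conf
    if [[ "${subfunction}" == *controller* && "${subfunction}" == *worker* ]]; then
        case "$0" in
            */worker/ceph.sh) exit 0 ;;
        esac
    fi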

tags: added: in-f-centos8