Comment 2 for bug 2043412

OpenStack Infra (hudson-openstack) wrote : Fix merged to ansible-playbooks (master)

Reviewed: https://review.opendev.org/c/starlingx/ansible-playbooks/+/900820
Committed: https://opendev.org/starlingx/ansible-playbooks/commit/7a4aad2fdd5e5b62300a2e37b560a027c703e80c
Submitter: "Zuul (22348)"
Branch: master

commit 7a4aad2fdd5e5b62300a2e37b560a027c703e80c
Author: Felipe Sanches Zanoni <email address hidden>
Date: Mon Nov 13 15:39:43 2023 -0300

    Do not restore Ceph crush map when wiping Ceph OSD disks

    When running the restore playbook, the Ceph crush map was being
    restored even if the flag wipe_ceph_osds was set to true.

    The restore playbook was not checking the status of the wipe_ceph_osds
    flag, so a check is now added around the whole block. The Ceph crush
    map is only restored if the flag is set to false and a Ceph backend is
    configured.

    Ceph will be reconfigured during the unlock process of each node.

    Test Plan:
      PASS: B&R AIO-DX with wipe_ceph_osds=false
      PASS: B&R AIO-DX with wipe_ceph_osds=true
      PASS: B&R Standard with wipe_ceph_osds=false
      PASS: B&R Standard with wipe_ceph_osds=true
      PASS: B&R Storage with wipe_ceph_osds=false
      PASS: B&R Storage with wipe_ceph_osds=true

    Closes-bug: 2043412

    Change-Id: Ib4e4c6933bf7ff6b8f23d994f19a6e79c01dd2b1
    Signed-off-by: Felipe Sanches Zanoni <email address hidden>
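
For reference, below is a minimal sketch of the kind of guarded block the fix describes. It is not the actual playbook code merged in the review above: the task names, file paths, and the ceph_backend_configured variable are assumptions for illustration, while wipe_ceph_osds is the real flag discussed in the commit.

# Illustrative sketch only -- not the code merged in the review above.
# wipe_ceph_osds is the real flag; ceph_backend_configured, staging_dir,
# and the file paths below are hypothetical names used for illustration.
- name: Restore Ceph crush map from the platform backup
  block:
    - name: Copy the saved crush map into place
      copy:
        src: "{{ staging_dir }}/crushmap.bin.backup"   # hypothetical backup location
        dest: /tmp/crushmap.bin.backup
        remote_src: yes

    - name: Load the saved crush map into the Ceph cluster
      command: ceph osd setcrushmap -i /tmp/crushmap.bin.backup

  # The fix: condition the whole block so the crush map is restored only
  # when the OSD disks are not being wiped and a Ceph backend exists.
  when:
    - not wipe_ceph_osds | bool
    - ceph_backend_configured | default(false) | bool

With the condition applied at the block level, none of the restore tasks run when wipe_ceph_osds is true; in that case Ceph is reconfigured during the unlock of each node, as noted in the commit message.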