Activity log for bug #2021887

Date Who What changed Old value New value Message
2023-05-30 17:53:38 Luan Nunes Utimura bug added bug
2023-05-30 17:56:03 Luan Nunes Utimura description

Old value:

Brief Description
-----------------
After applying stx-openstack, it has been observed that a Ceph-related alarm is triggered shortly afterwards, due to pools not being associated with the applications using them.

Severity
--------
Major.

Steps to Reproduce
------------------
1) Upload/apply stx-openstack;
2) Verify that a Ceph-related alarm was triggered.

Expected Behavior
-----------------
Ceph should be healthy before/after the app is applied.

Actual Behavior
---------------
Ceph is unhealthy after the app is applied.

Reproducibility
---------------
Reproducible.

System Configuration
--------------------
AIO-SX, but should be observable in all configurations.

Branch/Pull Time/Commit
-----------------------
Branch `master`.

Last Pass
---------
N/A.

Timestamp/Logs
--------------
Attach the logs for debugging (use attachments in Launchpad or, for large collect files, use: https://files.starlingx.kube.cengn.ca/)
Provide a snippet of logs here and the timestamp when the issue was seen.
Please indicate the unique identifier in the logs to highlight the problem.

Test Activity
-------------
Developer Testing.

Workaround
----------
To work around this, one must manually enable the applications on the pools:
* ceph osd pool application enable [...];
(Check the output of `ceph health detail` for more information.)

New value: identical to the old value, except that the Timestamp/Logs section now contains the observed alarm and cluster status:

Timestamp/Logs
--------------
sysadmin@controller-0:~$ fm alarm-list
+--------+----------+--------------------------------------------------------------------------------+------------------------+----------------------+----------+----------------------------+
| UUID   | Alarm ID | Reason Text                                                                    | Entity ID              | Management Affecting | Severity | Time Stamp                 |
+--------+----------+--------------------------------------------------------------------------------+------------------------+----------------------+----------+----------------------------+
| <uuid> | 800.001  | Storage Alarm Condition: HEALTH_WARN. Please check 'ceph -s' for more details. | cluster=<cluster-uuid> | True                 | warning  | 2023-05-11T07:05:29.285634 |
+--------+----------+--------------------------------------------------------------------------------+------------------------+----------------------+----------+----------------------------+

sysadmin@controller-0:~$ ceph -s
  cluster:
    id:     <uuid>
    health: HEALTH_WARN
            application not enabled on 2 pool(s)

  services:
    mon: 3 daemons, quorum controller-0,controller-1,compute-0 (age 12h)
    mgr: controller-0(active, since 20h), standbys: controller-1
    mds: kube-cephfs:1 {0=controller-0=up:active} 2 up:standby
    osd: 2 osds: 2 up (since 20h), 2 in (since 22h)

  data:
    pools:   7 pools, 704 pgs
    objects: 634 objects, 2.5 GiB
    usage:   28 GiB used, 3.2 TiB / 3.3 TiB avail
    pgs:     704 active+clean

  io:
    client: 440 KiB/s wr, 0 op/s rd, 49 op/s wr
2023-05-30 17:56:12 Luan Nunes Utimura starlingx: assignee Luan Nunes Utimura (lutimura)
2023-05-30 17:59:15 OpenStack Infra starlingx: status New In Progress
2023-05-31 18:08:07 Luan Nunes Utimura description

Old value: the description as set in the previous entry (2023-05-30 17:56:03).

New value:

Brief Description
-----------------
After applying stx-openstack and creating images/volumes, it has been observed that a Ceph alarm is triggered shortly afterwards, due to pools not being associated with the applications using them.

Severity
--------
Major.

Steps to Reproduce
------------------
1) Upload/apply stx-openstack;
2) Create images/volumes;
3) Verify that a Ceph alarm was triggered.

Expected Behavior
-----------------
Ceph should be healthy before/after the app is applied and the images/volumes are created.

Actual Behavior
---------------
Ceph is unhealthy after the app is applied and the images/volumes are created.

Reproducibility
---------------
Reproducible.

System Configuration
--------------------
AIO-SX, but should be observable in all configurations.

Branch/Pull Time/Commit
-----------------------
Branch `master`.

Last Pass
---------
N/A.

Timestamp/Logs
--------------
(Unchanged from the previous revision; see the fm alarm-list and ceph -s output above.)

Test Activity
-------------
Developer Testing.

Workaround
----------
To work around this, one must manually enable the applications on the pools:
* ceph osd pool application enable [...];
(Check the output of `ceph health detail` for more information.)
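For reference, the workaround amounts to tagging each flagged pool with the application that uses it, via `ceph osd pool application enable <pool-name> <app-name>`. A minimal sketch follows; the pool names are hypothetical examples, and the authoritative list of affected pools comes from `ceph health detail`:

    # Identify the pools flagged by the POOL_APP_NOT_ENABLED health warning.
    ceph health detail
    # Tag each flagged pool with the application that uses it; the pool
    # names below are examples only ('rbd' is the stock application name
    # for RBD-backed pools).
    ceph osd pool application enable images rbd
    ceph osd pool application enable cinder-volumes rbd

Once every pool carries an application tag, `ceph -s` should report HEALTH_OK again and the 800.001 alarm should clear.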
2023-05-31 19:59:57 OpenStack Infra starlingx: status In Progress Fix Released
2023-08-05 00:05:16 Ghada Khalil starlingx: importance Undecided Medium
2023-08-05 00:05:29 Ghada Khalil tags stx.9.0 stx.distro.openstack