Storage Alarm Condition: HEALTH_WARN on Virtual Standard configuration

Bug #1927824 reported by Alexandru Dimofte
Affects: StarlingX
Status: New
Importance: Low
Assigned to: Unassigned

Bug Description

Brief Description
-----------------
800.001 Storage Alarm Condition: HEALTH_WARN. Please check 'ceph -s' for more details.
This alarm was observed on a Virtual Standard configuration.
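
For more detail than the alarm text gives, the warning can be inspected directly. A minimal check, assuming the standard Ceph CLI and the StarlingX fm client are available on the active controller (the --uuid flag is an assumption based on recent fm client releases):

[sysadmin@controller-1 ~(keystone_admin)]$ ceph health detail     # lists each condition behind HEALTH_WARN
[sysadmin@controller-1 ~(keystone_admin)]$ fm alarm-list --uuid   # adds alarm UUIDs; pass one to 'fm alarm-show <uuid>' for full details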

Severity
--------
Major: System/Feature is usable but degraded

Steps to Reproduce
------------------
On our side, this issue was observed during sanity execution, after installation.

Expected Behavior
------------------
This alarm should not be raised.

Actual Behavior
----------------
[sysadmin@controller-1 ~(keystone_admin)]$ fm alarm-list
+-------+------------------------------------------------------------------------+--------------------------------------+----------+----------------+
| Alarm | Reason Text | Entity ID | Severity | Time Stamp |
| ID | | | | |
+-------+------------------------------------------------------------------------+--------------------------------------+----------+----------------+
| 800. | Storage Alarm Condition: HEALTH_WARN. Please check 'ceph -s' for more | cluster=bd3e9c46-34f2-4050-acda- | warning | 2021-05-08T11: |
| 001 | details. | 0aa3e6231620 | | 19:09.384929 |
| | | | | |
| 200. | compute-0 was administratively locked to take it out-of-service. | host=compute-0 | warning | 2021-05-08T11: |
| 001 | | | | 13:08.700490 |
| | | | | |
+-------+------------------------------------------------------------------------+--------------------------------------+----------+----------------+
[sysadmin@controller-1 ~(keystone_admin)]$ ceph -s
  cluster:
    id: bd3e9c46-34f2-4050-acda-0aa3e6231620
    health: HEALTH_WARN
            1/3 mons down, quorum controller-0,controller-1

  services:
    mon: 3 daemons, quorum controller-0,controller-1, out of quorum: compute-0
    mgr: controller-1(active), standbys: controller-0
    mds: kube-cephfs-1/1/1 up {0=controller-1=up:active}, 1 up:standby
    osd: 2 osds: 2 up, 2 in

  data:
    pools: 7 pools, 1472 pgs
    objects: 369 objects, 1.0 GiB
    usage: 2.2 GiB used, 496 GiB / 498 GiB avail
    pgs: 1472 active+clean

  io:
    client: 3.8 KiB/s rd, 458 KiB/s wr, 4 op/s rd, 85 op/s wr
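
Note that the ceph -s output already points at the likely cause: the monitor on compute-0 is out of quorum, and the 200.001 alarm shows compute-0 was administratively locked at 11:13, about six minutes before the storage alarm was raised at 11:19. A quick confirmation, assuming the same Ceph CLI as above:

[sysadmin@controller-1 ~(keystone_admin)]$ ceph health detail   # expected to report MON_DOWN: 1/3 mons down
[sysadmin@controller-1 ~(keystone_admin)]$ ceph mon stat        # shows which monitors are in and out of quorum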

Reproducibility
---------------
So far this has been seen only once, on a Virtual Standard configuration.

System Configuration
--------------------
Multi-node system

Branch/Pull Time/Commit
-----------------------
master

Last Pass
---------
20210505T161307Z

Timestamp/Logs
--------------
will be attached

Test Activity
-------------
Sanity

Workaround
----------
-
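
If the HEALTH_WARN is in fact driven by the locked compute-0 monitor (an assumption based on the output above, not confirmed in this report), a plausible recovery is to unlock the host so the monitor rejoins quorum:

[sysadmin@controller-1 ~(keystone_admin)]$ system host-unlock compute-0   # return the host to service; its mon should rejoin quorum
[sysadmin@controller-1 ~(keystone_admin)]$ ceph -s                        # health should return to HEALTH_OK once quorum is restored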

Tags: stx.storage
Ghada Khalil (gkhalil):
tags: added: stx.storage

Ghada Khalil (gkhalil) wrote:

screening: marking as low priority due to lack of activity

Changed in starlingx:
importance: Undecided → Low