Juju status incorrectly notes the number of units for OSDs

Bug #1987548 reported by Ponnuvel Palaniyappan
Affects: Ceph OSD Charm
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

Here's a simple bundle:
```
series: focal
applications:
  ceph-mon:
    charm: ceph-mon
    channel: pacific/stable
    revision: 113
    num_units: 1
    to:
    - "0"
    options:
      expected-osd-count: 3
      loglevel: 1
      monitor-count: 1
      monitor-secret: AQCXrnZQwI7KGBAAiPofmKEXKxu5bUzoYLVkbQ==
      source: cloud:focal-xena
    constraints: arch=amd64 mem=2048
  ceph-osd:
    charm: ceph-osd
    channel: pacific/stable
    revision: 537
    num_units: 3
    to:
    - "1"
    - "2"
    - "3"
    options:
      loglevel: 1
      osd-devices: ""
      source: cloud:focal-xena
    constraints: arch=amd64 mem=1024
    storage:
      bluestore-db: loop,0,1024
      bluestore-wal: loop,0,1024
      osd-devices: cinder,1,10240
      osd-journals: loop,0,1024
machines:
  "0":
    constraints: arch=amd64 mem=2048
  "1":
    constraints: arch=amd64 mem=1024
  "2":
    constraints: arch=amd64 mem=1024
  "3":
    constraints: arch=amd64 mem=1024
relations:
- - ceph-mon:osd
  - ceph-osd:mon
```

When deployed, `juju status` reports an incorrect number of OSDs in the App section:
```
Model    Controller  Cloud/Region       Version  SLA          Timestamp
pacific  con282      stsstack/stsstack  2.9.31   unsupported  16:33:33Z

App       Version  Status  Scale  Charm     Channel         Rev  Exposed  Message
ceph-mon  16.2.9   active      1  ceph-mon  pacific/stable  113  no       Unit is ready and clustered
ceph-osd  16.2.9   active      3  ceph-osd  pacific/stable  537  no       Unit is ready (1 OSD)

Unit         Workload  Agent  Machine  Public address  Ports  Message
ceph-mon/0*  active    idle   0        10.5.4.21              Unit is ready and clustered
ceph-osd/0*  active    idle   1        10.5.1.228             Unit is ready (1 OSD)
ceph-osd/1   active    idle   2        10.5.2.143             Unit is ready (1 OSD)
ceph-osd/2   active    idle   3        10.5.3.0               Unit is ready (1 OSD)

Machine  State    Address     Inst id                               Series  AZ    Message
0        started  10.5.4.21   43f50f78-3dd5-4faa-9aab-ce1807602711  focal   nova  ACTIVE
1        started  10.5.1.228  89f6120b-a06e-42eb-a9a2-ee3ed3fbc0b6  focal   nova  ACTIVE
2        started  10.5.2.143  98623721-73ae-4c8a-9afa-9742c5647525  focal   nova  ACTIVE
3        started  10.5.3.0    be456861-dae1-41e9-9cbe-336b126c3730  focal   nova  ACTIVE
```

That is, the cluster has 3 OSDs (one per unit), but the App row reports:
```
ceph-osd 16.2.9 active 3 ceph-osd pacific/stable 537 no Unit is ready (1 OSD)
```

The individual units report their OSD counts correctly. I think the app-level message simply needs to drop the "(<num> OSD)" part, since a single unit's local count is misleading at application scope.
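The behaviour above is consistent with Juju surfacing one unit's workload message (typically the leader's) as the application message. A minimal Python sketch of that aggregation, and of the suggested fix, is below; the function names are hypothetical and this is not the actual ceph-osd charm or Juju code:

```python
def unit_message(local_osd_count: int) -> str:
    """Message each ceph-osd unit sets for its own workload status.

    Each unit only knows about its local OSDs, hence '(1 OSD)'.
    """
    return f"Unit is ready ({local_osd_count} OSD)"


def app_message(unit_messages: list[str]) -> str:
    """Juju-style application message: echoes one unit's message
    rather than aggregating across units, so '(1 OSD)' leaks into
    the App section even with 3 OSDs cluster-wide."""
    return unit_messages[0]


def app_message_suggested(unit_messages: list[str]) -> str:
    """Suggested fix from this report: drop the per-unit
    '(<num> OSD)' suffix at application scope."""
    return unit_messages[0].split(" (")[0]


# Three units, one OSD each (as in the bundle above):
msgs = [unit_message(1) for _ in range(3)]
```

With this sketch, `app_message(msgs)` yields "Unit is ready (1 OSD)" as seen in the report, while `app_message_suggested(msgs)` yields just "Unit is ready".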

Tags: seg