Comment 14 for bug 1828262

Tingjie Chen (silverhandy) wrote:

The duplicate LP: 1827119 has its fix released with patch https://review.opendev.org/#/c/677424/, which merged on Sep 23rd, 2019.
I tried the 20191120T023000Z image to verify the issue but cannot reproduce it: setting ceph_mon_gib=23 does not succeed, because the requested growth (23 - 20 = 3 GiB) exceeds the 1 GiB of free space available on compute-0.

@Wendy, can you re-check the issue with the latest image?

The following is the output from my evaluation:
[sysadmin@controller-0 ~(keystone_admin)]$ system ceph-mon-list
+--------------------------------------+--------------+--------------+------------+------+
| uuid                                 | ceph_mon_gib | hostname     | state      | task |
+--------------------------------------+--------------+--------------+------------+------+
| 2e78c9e8-e6b8-4f79-b498-92de29538c24 | 20           | controller-0 | configured | None |
| 372658d0-0a44-465a-9d8f-12d6dba6b38e | 20           | controller-1 | configured | None |
| 53a8dce1-e830-4514-97f2-652a9ec5ab9f | 20           | compute-0    | configured | None |
+--------------------------------------+--------------+--------------+------------+------+

$ system ceph-mon-modify controller-0 ceph_mon_gib=40
Node: compute-0 Total target growth size 20 GiB for database (doubled for upgrades), glance, scratch, backup, extension and ceph-mon exceeds growth limit of 1 GiB.

$ system ceph-mon-modify controller-0 ceph_mon_gib=23
Node: compute-0 Total target growth size 3 GiB for database (doubled for upgrades), glance, scratch, backup, extension and ceph-mon exceeds growth limit of 1 GiB.
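For reference, here is a minimal Python sketch of the growth check implied by the errors above. It is my own approximation, not the sysinv code: the constants come from the output shown here, and the real check also sums growth for database, glance, scratch, backup and extension, which happens to be zero in this scenario.

CURRENT_CEPH_MON_GIB = 20   # value from system ceph-mon-list above
GROWTH_LIMIT_GIB = 1        # free space reported for compute-0

def check_ceph_mon_resize(requested_gib):
    # Only the ceph-mon delta contributes to growth in this scenario.
    growth = requested_gib - CURRENT_CEPH_MON_GIB
    if growth > GROWTH_LIMIT_GIB:
        raise ValueError("Total target growth size %d GiB exceeds "
                         "growth limit of %d GiB." % (growth, GROWTH_LIMIT_GIB))

for requested in (40, 23, 21):
    try:
        check_ceph_mon_resize(requested)
        print("ceph_mon_gib=%d: accepted" % requested)
    except ValueError as exc:
        print("ceph_mon_gib=%d: rejected (%s)" % (requested, exc))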

$ system ceph-mon-modify controller-0 ceph_mon_gib=21
+--------------------------------------+--------------+--------------+------------+------+
| uuid                                 | ceph_mon_gib | hostname     | state      | task |
+--------------------------------------+--------------+--------------+------------+------+
| 2e78c9e8-e6b8-4f79-b498-92de29538c24 | 21           | controller-0 | configured | None |
| 372658d0-0a44-465a-9d8f-12d6dba6b38e | 21           | controller-1 | configured | None |
| 53a8dce1-e830-4514-97f2-652a9ec5ab9f | 21           | compute-0    | configured | None |
+--------------------------------------+--------------+--------------+------------+------+

NOTE: ceph_mon_gib for both controllers are changed.

System configuration has changed.
please follow the administrator guide to complete configuring system.

# After locking and unlocking compute-0, system host-list shows all hosts as normal.

[sysadmin@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | unlocked       | enabled     | available    |
| 4  | compute-1    | worker      | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+

# SSH to compute-0 and check:
compute-0:~$ lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                              8:0  0  200G  0 disk
├─sda1                           8:1  0    1M  0 part
├─sda2                           8:2  0  500M  0 part /boot
├─sda3                           8:3  0   69G  0 part
│ ├─cgts--vg-scratch--lv       253:1  0  3.9G  0 lvm  /scratch
│ ├─cgts--vg-log--lv           253:2  0  3.9G  0 lvm  /var/log
│ ├─cgts--vg-kubelet--lv       253:3  0   10G  0 lvm  /var/lib/kubelet
│ ├─cgts--vg-ceph--mon--lv     253:4  0   21G  0 lvm  /var/lib/ceph/mon
│ └─cgts--vg-docker--lv        253:5  0   30G  0 lvm  /var/lib/docker
├─sda4                           8:4  0 19.5G  0 part /
├─sda5                           8:5  0   10G  0 part
│ └─nova--local-instances_lv   253:0  0   10G  0 lvm  /var/lib/nova/instances
├─sda6                           8:6  0   30G  0 part
├─sda7                           8:7  0   20G  0 part
└─sda8                           8:8  0   20G  0 part
sdb                             8:16  0   30G  0 disk
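
As a side note, the resized ceph-mon LV (21G above, matching ceph_mon_gib=21) can also be checked programmatically. The following is a hypothetical Python helper of my own, not part of the verification above; it assumes lsblk's JSON output mode (-J) with byte sizes (-b).

import json
import subprocess

def find_mount(devices, mountpoint):
    # Depth-first search of the lsblk device tree for a mountpoint.
    for dev in devices:
        if dev.get("mountpoint") == mountpoint:
            return dev
        child = find_mount(dev.get("children", []), mountpoint)
        if child:
            return child
    return None

out = subprocess.check_output(["lsblk", "-J", "-b", "-o", "NAME,SIZE,MOUNTPOINT"])
lv = find_mount(json.loads(out)["blockdevices"], "/var/lib/ceph/mon")
if lv:
    print("%s: %.0f GiB" % (lv["name"], int(lv["size"]) / 2**30))  # expect 21 GiB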