Comment 6 for bug 1922138

Mario Chirinos (mario-chirinos) wrote:

@Aurelien, thanks. I didn't know I had to set the bug back to New; I will do that from now on.

I didn't change the osd-devices configuration; I used the default configuration and added another ceph-osd unit (130 TB) after deployment with:

juju add-unit ceph-osd
juju run-action --wait ceph-osd/3 add-disk osd-devices=/dev/sdc
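
For reference, I believe the new disk can be double-checked with the ceph-osd charm's list-disks action and standard Ceph tooling (assuming that action is available in this charm revision), e.g.:

juju run-action --wait ceph-osd/3 list-disks
juju ssh ceph-mon/2 sudo ceph osd tree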

The space available on each unit is:
geoint@maas:~$ juju ssh 0
ubuntu@PowerEdge-9R0DH13:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             63G     0   63G   0% /dev
tmpfs            13G  3.0M   13G   1% /run
/dev/sda2       274G   49G  212G  19% /
tmpfs            63G     0   63G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            63G     0   63G   0% /sys/fs/cgroup
/dev/sda1       511M  7.9M  504M   2% /boot/efi
/dev/loop2       56M   56M     0 100% /snap/core18/1988
tmpfs           1.0M     0  1.0M   0% /var/snap/lxd/common/ns
/dev/loop7       73M   73M     0 100% /snap/lxd/19766
/dev/loop4       69M   69M     0 100% /snap/lxd/19823
/dev/loop6       33M   33M     0 100% /snap/snapd/11402
/dev/loop1       56M   56M     0 100% /snap/core18/1997
/dev/loop8       33M   33M     0 100% /snap/snapd/11588
tmpfs            13G     0   13G   0% /run/user/1000

geoint@maas:~$ juju ssh 1
ubuntu@PowerEdge-9R0FH13:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             63G     0   63G   0% /dev
tmpfs            13G  3.1M   13G   1% /run
/dev/sda2       274G   45G  215G  18% /
tmpfs            63G     0   63G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            63G     0   63G   0% /sys/fs/cgroup
/dev/sda1       511M  7.9M  504M   2% /boot/efi
/dev/loop0       56M   56M     0 100% /snap/core18/1988
tmpfs           1.0M     0  1.0M   0% /var/snap/lxd/common/ns
/dev/loop7       73M   73M     0 100% /snap/lxd/19766
/dev/loop8       69M   69M     0 100% /snap/lxd/19823
/dev/loop6       33M   33M     0 100% /snap/snapd/11402
/dev/loop1       56M   56M     0 100% /snap/core18/1997
/dev/loop9       33M   33M     0 100% /snap/snapd/11588
tmpfs            13G     0   13G   0% /run/user/1000

geoint@maas:~$ juju ssh 2
ubuntu@PowerEdge-9R0CH13:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             63G     0   63G   0% /dev
tmpfs            13G  3.1M   13G   1% /run
/dev/sda2       274G   52G  208G  20% /
tmpfs            63G     0   63G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            63G     0   63G   0% /sys/fs/cgroup
/dev/sda1       511M  7.9M  504M   2% /boot/efi
/dev/loop0       56M   56M     0 100% /snap/core18/1988
tmpfs           1.0M     0  1.0M   0% /var/snap/lxd/common/ns
/dev/loop7       73M   73M     0 100% /snap/lxd/19766
/dev/loop4       69M   69M     0 100% /snap/lxd/19823
/dev/loop6       33M   33M     0 100% /snap/snapd/11402
/dev/loop1       56M   56M     0 100% /snap/core18/1997
/dev/loop8       33M   33M     0 100% /snap/snapd/11588
tmpfs            13G     0   13G   0% /run/user/1000

geoint@maas:~$ juju ssh 3
ubuntu@NX3240-4XF2613:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.6G     0  7.6G   0% /dev
tmpfs           1.6G  1.6M  1.6G   1% /run
/dev/sda2       137G  8.2G  122G   7% /
tmpfs           7.6G     0  7.6G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.6G     0  7.6G   0% /sys/fs/cgroup
/dev/sda1       511M  7.9M  504M   2% /boot/efi
/dev/loop0       56M   56M     0 100% /snap/core18/1988
/dev/loop1       70M   70M     0 100% /snap/lxd/19188
tmpfs           1.0M     0  1.0M   0% /var/snap/lxd/common/ns
/dev/loop4       71M   71M     0 100% /snap/lxd/19647
/dev/loop5       33M   33M     0 100% /snap/snapd/11402
/dev/loop2       56M   56M     0 100% /snap/core18/1997
/dev/loop6       33M   33M     0 100% /snap/snapd/11588
tmpfs           1.6G     0  1.6G   0% /run/user/1000

And the Ceph status is:
geoint@maas:~$ juju ssh ceph-mon/2 sudo ceph status
  cluster:
    id: 457c2212-7412-11eb-bb51-ffc0149f2d99
    health: HEALTH_ERR
            1 full osd(s)
            18 pool(s) full

  services:
    mon: 3 daemons, quorum juju-191a8d-1-lxd-0,juju-191a8d-0-lxd-0,juju-191a8d-2-lxd-0 (age 5w)
    mgr: juju-191a8d-0-lxd-0(active, since 6w), standbys: juju-191a8d-1-lxd-0, juju-191a8d-2-lxd-0
    osd: 5 osds: 5 up (since 6w), 5 in (since 6w); 51 remapped pgs
    rgw: 1 daemon active (juju-191a8d-0-lxd-1)

  task status:

  data:
    pools: 18 pools, 99 pgs
    objects: 488.92k objects, 1.8 TiB
    usage: 5.3 TiB used, 118 TiB / 123 TiB avail
    pgs: 290276/1466763 objects misplaced (19.790%)
             51 active+clean+remapped
             48 active+clean

Connection to 192.168.221.12 closed.
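
If more detail would help, I can also pull the per-OSD utilization and the full health message with standard Ceph commands, e.g.:

juju ssh ceph-mon/2 sudo ceph osd df tree
juju ssh ceph-mon/2 sudo ceph health detail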