openstack-ansible-ops: "shrinking" machines00 fails due to "insufficient free space"

Bug #1781823 reported by Corey Wright
Affects: OpenStack-Ansible
Status: Fix Released
Importance: Undecided
Assigned to: Corey Wright

Bug Description

tl;dr commit 875fa96fb871fc0061215cafa093f35cab01c4f3 / change Ief0040f638f0d3570557ac76fd5e0a8aee80df8d overlooked the default_container_tech == 'lxc' case, so tweak it to make it nspawn-only.

"Shrinking" the machines00 LV to 8 GiB fails with "insufficient free space" when default_container_tech == 'lxc'. machines00 is created between 4096 and 8192 MB (see openstack-ansible-ops/multi-node-aio/playbooks/pxe/configs/debian/vm.config.j2), so the operation is never actually a shrink but always an enlargement, and the vmvg00 VG has no free space for it to grow into until the much larger (eg 60 GiB) lxc00 LV is deleted.

<error>
TASK [Shrink machines00 mount] *************************************************
task path: /root/openstack-ansible-ops/multi-node-aio/playbooks/deploy-vms.yml:268
fatal: [cinder1]: FAILED! => {"changed": false, "err": " Insufficient free space: 1072 extents needed, but only 0 available\n", "msg": "Unable to resize machines00 to 8192m", "rc": 5}
fatal: [cinder2]: FAILED! => {"changed": false, "err": " Insufficient free space: 1072 extents needed, but only 0 available\n", "msg": "Unable to resize machines00 to 8192m", "rc": 5}
fatal: [swift1]: FAILED! => {"changed": false, "err": " Insufficient free space: 1072 extents needed, but only 0 available\n", "msg": "Unable to resize machines00 to 8192m", "rc": 5}
fatal: [swift2]: FAILED! => {"changed": false, "err": " Insufficient free space: 1072 extents needed, but only 0 available\n", "msg": "Unable to resize machines00 to 8192m", "rc": 5}
fatal: [swift3]: FAILED! => {"changed": false, "err": " Insufficient free space: 1072 extents needed, but only 0 available\n", "msg": "Unable to resize machines00 to 8192m", "rc": 5}
        to retry, use: --limit @/root/openstack-ansible-ops/multi-node-aio/playbooks/site.retry
</error>
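The zero-free-extents condition reported above can be confirmed before the resize task runs. The following diagnostic play is a hypothetical sketch (it is not part of the repo; the host name is taken from the error output):

```yaml
# Hypothetical diagnostic play: show how many free extents vmvg00 has.
# On an affected LXC host this prints 0, matching the lvol failure above.
- hosts: cinder1
  become: true
  tasks:
    - name: Check free extents in vmvg00
      command: vgs --noheadings -o vg_free_count vmvg00
      register: vg_free
      changed_when: false

    - name: Report free extent count
      debug:
        msg: "vmvg00 free extents: {{ vg_free.stdout | trim }}"
```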

Revision history for this message
Corey Wright (coreywright) wrote :

The previous behavior (prior to commit 875fa96f) didn't touch machines00 at all, so perhaps the resize of machines00 should simply be skipped except in the nspawn case.

Of course maybe the best solution is to delete machines00 instead of just ignoring it, at least when default_container_tech == 'lxc', but I'm conservative and sticking to the previous behavior (ie only delete lxc00).
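A minimal sketch of that conservative fix, assuming an lvol-based task (the task name, LV, VG, and size come from the error output and commit message; resizefs and force are illustrative assumptions, not verbatim from deploy-vms.yml):

```yaml
# Sketch only: guard the resize so it runs solely under nspawn,
# where the VG layout leaves room for machines00 to grow.
- name: Shrink machines00 mount
  lvol:
    vg: vmvg00
    lv: machines00
    size: 8192m
    resizefs: true   # assumption: grow the filesystem with the LV
    force: true      # assumption: needed when shrinking a mounted FS
  when: default_container_tech == 'nspawn'
```

With the `when:` guard, LXC deployments skip the task entirely and keep the pre-875fa96f behavior of only deleting lxc00.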

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to openstack-ansible-ops (master)

Fix proposed to branch: master
Review: https://review.openstack.org/582808

Changed in openstack-ansible:
assignee: nobody → Corey Wright (coreywright)
status: New → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to openstack-ansible-ops (master)

Reviewed: https://review.openstack.org/582808
Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-ops/commit/?id=f21bc666710fc83c1c14c616beb699a915a6ab2c
Submitter: Zuul
Branch: master

commit f21bc666710fc83c1c14c616beb699a915a6ab2c
Author: Corey Wright <email address hidden>
Date: Sun Jul 15 15:05:12 2018 -0500

    mnaio: Only resize Swift & Cinder machines00 LV when using nspawn

    Commit 875fa96f / change-id Ief0040f6 unintentionally tries to enlarge
    the "machines00" LV when LXC is the default container technology which
    fails due to the Debian automated installation having assigned all the
    space within the associated "vmvg00" VG.

    As the intention of the aforementioned commit was to apply when
    systemd-nspawn was used, codify that explicitly in a `when:` condition
    on the problematic Ansible task.

    Change-Id: I56ec1290d71d0d09db447e347d7d55432d9b81c6
    Signed-off-by: Corey Wright <email address hidden>
    Closes-Bug: #1781823

Changed in openstack-ansible:
status: In Progress → Fix Released