Comment 13 for bug 1245909

OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (master)

Reviewed: https://review.openstack.org/54833
Committed: http://github.com/openstack/cinder/commit/d72914f739b1467ad849dd47fddd321965fed928
Submitter: Jenkins
Branch: master

commit d72914f739b1467ad849dd47fddd321965fed928
Author: Jon Bernard <email address hidden>
Date: Thu Nov 21 17:58:13 2013 -0500

    LVM: Create thin pools of adequate size

    Thin pools in LVM are quite different from volume groups or logical
    volumes, and these differences must be taken into account when providing
    thin LVM support in Cinder.

    When you create a thin pool, LVM actually creates 4 block devices. You
    can see this after thin pool creation with the following command:

        $ dmsetup ls

        volumes--1-volumes--1--pool (253:4)
        volumes--1-volumes--1--pool-tpool (253:3)
        volumes--1-volumes--1--pool_tdata (253:2)
        volumes--1-volumes--1--pool_tmeta (253:1)

    In the output above, a thin pool named 'volumes-1-pool' was created in
    the 'volumes-1' volume group. Despite this, the 'lvs' command will show
    only one logical volume for the thin pool, which can be misleading if
    you aren't aware of how thin pools are implemented.
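
    For illustration, querying the same volume group with 'lvs' lists only
    the pool itself, with the leading 't' attribute marking it as a thin
    pool (the output below is hypothetical and trimmed; sizes, columns and
    attribute flags vary with LVM version):

        $ lvs -o lv_name,vg_name,lv_attr,lv_size volumes-1

          LV             VG        Attr       LSize
          volumes-1-pool volumes-1 twi-a-tz-- 95.00g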

    When you create a thin pool, you specify a size for the pool on the
    command line. LVM interprets this size as the amount of space requested
    for data blocks only. In order to allow volume sharing and snapshots,
    some amount of metadata must be reserved in addition to the data
    request. This amount is calculated internally by LVM and varies with
    volume size and chunk size. This is why one cannot simply allocate 100%
    of a volume group to a thin pool: some space must remain for metadata,
    or you will not be able to create pool-backed volumes and snapshots.
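
    For example, with the same 'volumes-1' volume group, a hypothetical
    request for every free extent as pool data leaves no extents for the
    pool's metadata device and will typically be rejected (the exact
    behaviour and error text depend on the LVM version):

        $ lvcreate --extents 100%FREE --thinpool volumes-1-pool volumes-1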

    This patch allocates 95% of a volume group's free space to the thin
    pool. By doing this, we allow LVM to successfully allocate a region for
    metadata. Additionally, any remaining free space will be dynamically
    used for either data or metadata should capacity become scarce.
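
    As a rough sketch of the resulting sizing (the commands and numbers
    below are illustrative only; the driver invokes LVM programmatically
    and its exact invocation may differ), a volume group reporting 100 GiB
    free would get a thin pool with a data area of roughly 95 GiB:

        $ vgs --noheadings --nosuffix --units g -o vg_free volumes-1
          100.00

        $ lvcreate -T -L 95g volumes-1/volumes-1-pool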

    The 95/5 split seems like a sane default. This split can easily (and
    probably should) be made user-configurable in the future for deployments
    that expect an unusually large amount of volume sharing.

    Change-Id: Id461445780c1574db316ede0c0194736e71640d0
    Closes-Bug: #1245909