Add support for erasure-coded pool backend

Bug #1863017 reported by Przemyslaw Hausman
This bug affects 3 people
Affects                         Status         Importance   Assigned to   Milestone
Ceph RADOS Gateway Charm        Fix Released   Wishlist     Unassigned
OpenStack Ceph-FS Charm         Fix Released   Wishlist     Unassigned
OpenStack Cinder-Ceph charm     Fix Released   Wishlist     Unassigned
OpenStack Glance Charm          Fix Released   Wishlist     Unassigned
OpenStack Nova Compute Charm    Fix Released   Wishlist     Unassigned

Bug Description

This is a feature request for supporting erasure-coded pools.

Erasure-coded pools require less storage space compared to replicated pools. Currently, cinder-ceph supports only replicated pools.

The goal is to set up an erasure-coded pool as a backend for Cinder. The problem is that images can only be created in a replicated pool, due to the lack of omap support in an erasure-coded pool. To overcome this issue, one must create two pools (a CLI sketch follows the list):

1. A replicated metadata pool, e.g. 'cinder-ceph-ec-metadata',
2. An erasure-coded data pool, e.g. 'cinder-ceph-ec-data'.
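
For reference, a minimal sketch of creating such a pool pair manually with the Ceph CLI (pool names are the examples above; the EC profile name, k/m values and PG counts are placeholder assumptions and need adapting to the cluster):

# Replicated metadata pool (PG counts here are examples only)
$ ceph osd pool create cinder-ceph-ec-metadata 32 32 replicated
$ ceph osd pool application enable cinder-ceph-ec-metadata rbd

# Erasure-coded data pool using an example EC profile
$ ceph osd erasure-code-profile set cinder-ec-profile k=4 m=2
$ ceph osd pool create cinder-ceph-ec-data 64 64 erasure cinder-ec-profile
$ ceph osd pool application enable cinder-ceph-ec-data rbd

# RBD on an erasure-coded pool requires overwrite support
$ ceph osd pool set cinder-ceph-ec-data allow_ec_overwrites true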

With such a setup, the following configuration should be rendered by the charm:

/etc/cinder/cinder.conf:

[cinder-ceph-ec-metadata]
volume_backend_name = cinder-ceph-ec-metadata
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder-ceph-ec-metadata
rbd_user = cinder-ceph-ec-metadata
rbd_secret_uuid = <uuid>
rbd_ceph_conf = /var/lib/charm/cinder-ceph-ec-metadata/ceph.conf
[...]

/var/lib/charm/cinder-ceph-ec-metadata/ceph.conf:

[client.cinder-ceph-ec-metadata]
rbd default data pool = cinder-ceph-ec-data

Currently, rendering configuration to ceph.conf is not implemented.

For the initial implementation, cinder-ceph could support already existing erasure-coded pools. As a next step, cinder-ceph could support creating the metadata and data pools itself.

Changed in charm-cinder-ceph:
importance: Undecided → Wishlist
status: New → Confirmed
Revision history for this message
Márton Kiss (marton-kiss) wrote :

I would rather break up the request into two separate steps:
1. Implement the missing "rbd default data pool" option in cinder-ceph's ceph.conf, because it at least allows using erasure coding with pre-created data and metadata pools (external Ceph clusters, or pools created as a post-configuration step):
- add the rbd-default-data-pool variable to config.yaml
- render the following conditional part into ceph.conf:

[client]
rbd default data pool = {{ rbd_default_data_pool }}
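
As a rough illustration of that proposal, the config.yaml option and the conditional ceph.conf template fragment could look something like the following (the option and variable names mirror the suggestion above and are assumptions, not the charm's actual implementation):

config.yaml:

options:
  rbd-default-data-pool:
    type: string
    default: ""
    description: |
      Name of an existing erasure-coded pool to use as the default RBD
      data pool. When set, RBD image metadata stays in the replicated
      pool while the data objects are written to this pool.

templates/ceph.conf (conditional fragment):

{% if rbd_default_data_pool %}
[client]
rbd default data pool = {{ rbd_default_data_pool }}
{% endif %}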

In this case, the [client.{{poolname}}] entry is unnecessary, because the cinder-ceph app defines only a single Ceph pool. Multiple pools can be used via multiple cinder-ceph applications with different configurations. A properly configured cinder-ceph can result in the following EC pool volume:

$ rbd ls cinder-ceph-ec-meta
volume-d6d12723-46f0-4e40-8941-90a754b59f51
$ rbd info cinder-ceph-ec-meta/volume-d6d12723-46f0-4e40-8941-90a754b59f51
rbd image 'volume-d6d12723-46f0-4e40-8941-90a754b59f51':
        size 1GiB in 256 objects
        order 22 (4MiB objects)
        data_pool: cinder-ceph-ec-data <--- the data pool name is here
        block_name_prefix: rbd_data.16.16da6b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool <--- the data pool flag is there
        flags:
        create_timestamp: Thu Feb 20 17:35:20 2020

2. Implement proper EC pool creation in the cinder-ceph charm. The basics seem to already be there for ceph-proxy and cinder-ceph; however, for pool creation it calls the deprecated add_op_create_pool function:
https://github.com/openstack/charm-cinder-ceph/blob/master/hooks/cinder_hooks.py#L112

The add_op_create_pool *always* creates a replicated pool:
https://github.com/openstack/charm-cinder-ceph/blob/master/charmhelpers/contrib/storage/linux/ceph.py#L1219

So for proper EC support, additional config variables must be added to the cinder-ceph charm code, which must then invoke either add_op_create_replicated_pool() or add_op_create_erasure_pool() as appropriate.
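
For illustration, a rough sketch of what that could look like in the charm's broker-request code. The config option names ('pool-type', 'ec-profile-name') are assumptions made up for this example, and the exact keyword arguments accepted by the charmhelpers methods should be checked against the charmhelpers version shipped with the charm:

# Hypothetical sketch only; not the actual charm code.
from charmhelpers.core.hookenv import config, service_name
from charmhelpers.contrib.storage.linux.ceph import CephBrokerRq


def get_create_pool_request():
    rq = CephBrokerRq()
    pool_name = config('rbd-pool-name') or service_name()
    if config('pool-type') == 'erasure-coded':
        # Replicated pool for RBD metadata (omap is not supported on EC pools)
        rq.add_op_create_replicated_pool(name=pool_name)
        # Erasure-coded pool holding the actual volume data
        rq.add_op_create_erasure_pool(
            name='{}-data'.format(pool_name),
            erasure_profile=config('ec-profile-name'))
    else:
        rq.add_op_create_replicated_pool(
            name=pool_name,
            replica_count=config('ceph-osd-replication-count'))
    return rq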

Revision history for this message
Ryan Beisner (1chb1n) wrote :

I think this is a large enough piece of work to warrant a design and specification conversation, per:
https://docs.openstack.org/charm-guide/latest/feature-specification.html

Revision history for this message
James Page (james-page) wrote :

Agree with @beisner - let's have a proper spec for this work so the design can be discussed and documented outside of a bug report.

Changed in charm-cinder-ceph:
status: Confirmed → Incomplete
Revision history for this message
James Page (james-page) wrote :

Based on <conversations>, the short-term workaround should be a 'config-flags' style configuration option in this charm; use of this for the EC pool configuration will be superseded by full EC support at a future date.
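
Purely for illustration, if such an option were added, setting the workaround might look something like this (the option name 'ceph-conf-flags' is hypothetical and does not exist in the charm today):

$ juju config cinder-ceph ceph-conf-flags="rbd default data pool=cinder-ceph-ec-data"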

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for OpenStack cinder-ceph charm because there has been no activity for 60 days.]

Changed in charm-cinder-ceph:
status: Incomplete → Expired
James Page (james-page)
Changed in charm-cinder-ceph:
status: Expired → Triaged
James Page (james-page)
Changed in charm-ceph-radosgw:
status: New → Triaged
importance: Undecided → Wishlist
Changed in charm-glance:
status: New → Triaged
Changed in charm-nova-compute:
status: New → Triaged
Changed in charm-ceph-fs:
status: New → Triaged
importance: Undecided → Wishlist
Changed in charm-nova-compute:
importance: Undecided → Wishlist
Changed in charm-glance:
importance: Undecided → Wishlist
Changed in charm-ceph-fs:
milestone: none → 20.08
Changed in charm-ceph-radosgw:
milestone: none → 20.08
Changed in charm-cinder-ceph:
milestone: none → 20.08
Changed in charm-glance:
milestone: none → 20.08
Changed in charm-nova-compute:
milestone: none → 20.08
James Page (james-page)
Changed in charm-cinder-ceph:
milestone: 20.08 → none
Changed in charm-ceph-radosgw:
milestone: 20.08 → none
Changed in charm-glance:
milestone: 20.08 → none
Changed in charm-nova-compute:
milestone: 20.08 → none
Changed in charm-ceph-fs:
milestone: 20.08 → none
James Page (james-page)
Changed in charm-ceph-fs:
status: Triaged → Fix Committed
Changed in charm-ceph-radosgw:
status: Triaged → Fix Committed
Changed in charm-cinder-ceph:
status: Triaged → Fix Committed
Changed in charm-glance:
status: Triaged → Fix Committed
Changed in charm-nova-compute:
status: Triaged → Fix Committed
Changed in charm-ceph-fs:
milestone: none → 20.10
Changed in charm-ceph-radosgw:
milestone: none → 20.10
Changed in charm-cinder-ceph:
milestone: none → 20.10
Changed in charm-glance:
milestone: none → 20.10
Changed in charm-nova-compute:
milestone: none → 20.10
Changed in charm-cinder-ceph:
status: Fix Committed → Fix Released
Changed in charm-ceph-radosgw:
status: Fix Committed → Fix Released
Changed in charm-glance:
status: Fix Committed → Fix Released
Changed in charm-nova-compute:
status: Fix Committed → Fix Released
Changed in charm-ceph-fs:
status: Fix Committed → Fix Released