RBD: Implement v2.1 replication
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Fix Released | Medium | Sofia Enriquez |
Bug Description
Dear bug triager. This bug was created because a commit was marked with DOCIMPACT.
Your project "openstack/cinder" is set up so that documentation bugs are reported directly against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to.
commit f81d8a37debce61
Author: Jon Bernard <email address hidden>
Date: Thu Aug 4 14:41:44 2016 -0400
RBD: Implement v2.1 replication
This patch implements v2.1 replication in the RBD driver. A single ceph
backend can support both replicated and non-replicated volumes using
volume types. For replicated volumes, both clusters are expected to be
configured with rbd-mirror with keys in place and image mirroring
enabled on the pool. The RBD driver will enable replication per-volume
if the volume type requests it.
On failover, each replicated volume is promoted to primary on the
secondary cluster, and new connection requests will receive connection
information for the volume on the secondary cluster. At the time of
writing, failback is not supported by Cinder and requires admin
intervention to return to the pre-failover state.
Non-replicated volumes will be set to error status to reflect that they
are not available, and the previous status will be stored in the
replication
There are two configuration pieces required to make this work:
1. A volume type that enables replication:
$ cinder type-create replicated
$ cinder type-key replicated set volume_
$ cinder type-key replicated set replication_
2. A secondary backend defined in cinder.conf:
[ceph]
...
The only required parameter is backend_id, as conf and user have
defaults: conf defaults to /etc/ceph/
and user defaults to rbd_user, or cinder if that one is None.
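Putting the two configuration pieces together, the setup might look like the sketch below. This is a reconstruction, not text from the commit: the extra-spec keys and the replication_device values (backend_id, conf, user) are assumptions based on the surrounding description, since the original commands are truncated above.

```shell
# Hypothetical: create a volume type whose extra specs enable replication.
# The exact spec keys/values are assumptions, not copied from the commit.
cinder type-create replicated
cinder type-key replicated set volume_backend_name=ceph
cinder type-key replicated set replication_enabled='<is> True'
```

And a matching secondary backend in cinder.conf could be:

```ini
# Hypothetical cinder.conf fragment; only backend_id is required,
# conf and user fall back to the defaults described above.
[ceph]
replication_device = backend_id:secondary, conf:/etc/ceph/secondary.conf, user:cinder
```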
We also have a new configuration option for the RBD driver, called
replication
the promotion/demotion of a single volume.
We try to do a clean failover when there is still connectivity with the
primary cluster, so we first attempt to demote the original images; if
one of the demotions fails, we assume that all of the other demotions
will fail as well (the cluster is not accessible) and do a forceful
promotion for those images.
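For reference, an administrator would typically trigger the failover described above with the cinder CLI's failover-host command; the host and backend_id values below are placeholders, not taken from the commit.

```shell
# Hypothetical invocation: fail the ceph backend over to the secondary
# cluster identified by the backend_id in replication_device.
cinder failover-host cinder@ceph --backend_id secondary
```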
DocImpact
Implements: blueprint rbd-replication
Co-Authored-By: Gorka Eguileor <email address hidden>
Change-Id: I58c38fe11014aa
As far as I can see we already have the documentation: https://docs.openstack.org/cinder/latest/contributor/replication.html
I'm going to close this.