Cinder Disable Cluster Configuration

Bug #2067475 reported by Yusuf Güngör
Affects: Cinder
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

Hi Cinder team, is there a way to undo the database effects of the cluster config?

After enabling the cluster config, the cluster_name column of all existing volumes is updated in the database. If the user later wants to remove the cluster config, the DB records are not reverted, and this causes problems such as failures when deleting existing volumes.
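For context, "enabling the cluster config" here means setting the cluster option in cinder.conf; a minimal sketch of what we mean (the cluster name is just an example):

  [DEFAULT]
  # any non-empty value makes this cinder-volume host join the named cluster;
  # existing volumes of its backends then get cluster_name set in the DB
  cluster = mycluster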

Even after removing the cluster config, the cluster definitions still exist, and the cinder-manage command cannot remove them; it says backend hosts exist.
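The leftover definitions can still be inspected with cinder-manage after the option is removed from cinder.conf; a quick check we used (the kolla container name is an assumption, any cinder container works):

  docker exec -it cinder_volume cinder-manage cluster list
  docker exec -it cinder_volume cinder-manage service list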

We have found a hacky way, shown below, to undo the cluster config (a concrete example of the full sequence follows the list). Should kolla-ansible consider this scenario? Do you know of any supported method to undo the cluster config?

- remove the cluster config from cinder.conf and reconfigure

- disable and remove all the cinder-volume services like below:
  - openstack volume service set --disable <host> cinder-volume (for every cinder-volume service)
  - cinder-manage service remove cinder-volume <host> (run from any cinder container)

- remove the cluster definition via cinder-manage (run from any cinder container)
  - cinder-manage cluster list
  - cinder-manage cluster remove

- connect to the DB and update the existing volume records
  - "select id,cluster_name from cinder.volumes where cluster_name is not NULL;"
  - "update cinder.volumes set cluster_name=NULL where cluster_name is not NULL;"

After these operations we are able to undo the cluster configuration.

Is that approach valid for the undo operation?

Users may need this undo operation because the cluster config applies to all backends.

We are not entirely sure, but we read a comment like the one below:

> Also, notice this turns on clustering for all backends. I.e., enabling Ceph suddenly makes other backends try active-active HA only to fail (see https://review.opendev.org/c/openstack/kolla-ansible/+/847352). Also, other backends than Ceph might benefit from this setting (netapp, pure). I suggest we only document the way to get active-active HA and not force it on users at all. WDYT?

The comment was made on an old, now-abandoned kolla-ansible review that focused on enabling the cluster config for RBD:

cinder: start using active-active for rbd : https://review.opendev.org/c/openstack/kolla-ansible/+/763011

Our understanding is that the cluster config is given under the [DEFAULT] section of cinder.conf, so it activates clustering for all backends. If not all backends support Active-Active, the user may want to remove the cluster configuration, but removing the config option alone is not enough to undo it.
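To illustrate the scope problem, a sketch of cinder.conf with made-up backend names: because cluster lives in [DEFAULT], every backend listed in enabled_backends gets clustered, whether or not its driver supports Active-Active.

  [DEFAULT]
  enabled_backends = rbd-1,other-1
  # as we understand it, this applies to all backends of this host
  cluster = mycluster

  [rbd-1]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver

  [other-1]
  # a backend whose driver does not support Active-Active gets clustered anyway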

We are also discussing this situation on the kolla-ansible side: https://review.opendev.org/c/openstack/kolla-ansible/+/909974/comments/f0bc01c7_612b7fb2?tab=comments
