Cinder backup on Ceph when the Ceph backend is not on all cinder-volume hosts

Bug #1752848 reported by kourosh vivan
This bug affects 1 person
Affects: OpenStack-Ansible
Status: Expired
Importance: Undecided
Assigned to: Unassigned

Bug Description

Hi,

I hit an issue that could be used to improve the cinder role:

Two groups of storage nodes:
 - groupA: cinder-volume + cinder-backup + local storage (LVM) backend
 - groupB: cinder-volume + cinder-backup + Ceph backend

Issue on groupA, from /var/log/cinder/cinder-backup.log:
ERROR oslo.service.loopingcall BackupDriverException: Backup driver reported an error: rados and rbd python libraries not found

Reason: /etc/ansible/roles/os_cinder/tasks/main.yml

- name: Include ceph_client role
  include_role:
    name: ceph_client
  vars:
    openstack_service_system_user: "{{ cinder_system_user_name }}"
    openstack_service_venv_bin: "{{ cinder_bin }}"
  when:
    - "'cinder_volume' in group_names"
    - "cinder_backend_rbd_inuse | bool"
  tags:
    - ceph

cinder_backend_rbd_inuse: '{{ (cinder_backends|default("")|to_json).find("cinder.volume.drivers.rbd.RBDDriver") != -1 }}'

cinder_backend_rbd_inuse evaluates to false on groupA because those hosts have no Ceph volume backend, so the ceph_client role is skipped there even though cinder-backup is configured to use the Ceph backup driver.
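
To see how the default evaluates on each host, a throwaway debug task along these lines can be run against the storage hosts (illustrative only, not part of the role; it simply re-evaluates the same expression):

- name: Show whether an RBD volume backend was detected
  debug:
    msg: "{{ (cinder_backends | default('') | to_json).find('cinder.volume.drivers.rbd.RBDDriver') != -1 }}"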

cinder_backend_rbd_inuse should also check this variable:
   cinder_service_backup_driver: cinder.backup.drivers.ceph
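
A minimal sketch of what that could look like, assuming the existing default is simply extended (this is not the actual upstream change):

cinder_backend_rbd_inuse: >-
  {{ (cinder_backends | default('') | to_json).find('cinder.volume.drivers.rbd.RBDDriver') != -1
     or cinder_service_backup_driver | default('') == 'cinder.backup.drivers.ceph' }}

With something like that in place, the ceph_client role would also be included on groupA, where cinder-backup uses the Ceph driver but the volume backends are LVM.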

An alternative fix is to set:
cinder_backend_rbd_inuse: True
in user_variables.yml

Revision history for this message
Jean-Philippe Evrard (jean-philippe-evrard) wrote :

Could you share your configuration (openstack_user_config/user_variables.yml) so we can be sure we understand the setup properly?

Changed in openstack-ansible:
status: New → Incomplete
Revision history for this message
Jean-Philippe Evrard (jean-philippe-evrard) wrote :

(And how you set your group vars)

Revision history for this message
kourosh vivan (kourosh-vivan) wrote :

openstack_user_config.yml

storage_hosts:
  cmpa:
    ip: 100.66.0.33
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        localvolume-01:
          volume_backend_name: volumes_local
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_group: localvolume-01
          iscsi_ip_address: "{{ cinder_storage_address }}"
        localvolume-02:
          volume_backend_name: volumes_local
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_group: localvolume-02
          iscsi_ip_address: "{{ cinder_storage_address }}"
      cinder_storage_availability_zone: nova
      cinder_default_availability_zone: nova
  sto:
    ip: 100.66.0.39
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        volumes_ceph:
          volume_driver: cinder.volume.drivers.rbd.RBDDriver
          rbd_pool: volumes
          rbd_ceph_conf: /etc/ceph/ceph.conf
          rbd_flatten_volume_from_snapshot: 'false'
          rbd_max_clone_depth: 5
          rbd_store_chunk_size: 4
          rados_connect_timeout: -1
          volume_backend_name: volumes_ceph
          rbd_user: "{{ cinder_ceph_client }}"
          rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
      cinder_storage_availability_zone: nova
      cinder_default_availability_zone: nova

/etc/openstack_deploy/user_variables.yml

ceph_extra_components:
  - component: gnocchi_api
    package:
      - "{{ python_ceph_package }}"
    client:
      - '{{ gnocchi_ceph_client }}'
    service: '{{ ceph_gnocchi_service_names }}'
cinder_backend_rbd_inuse: True

cinder_service_backup_program_enabled: True

/etc/openstack_deploy/conf.d/cinder.yml
---

global_overrides:
  cinder_default_volume_type: volumes_ceph
  cinder_service_backup_driver: cinder.backup.drivers.ceph
  cinder_service_backup_ceph_user: cinder-backup
  cinder_service_backup_ceph_pool: backups

/etc/openstack_deploy/conf.d/ceph.yml
---

global_overrides:
  ceph_mon_host: "{{ groups['ceph-mon'][0] }}"
  ceph_mons: "{{ groups['ceph-mon'] }}"
  ceph_origin: distro
  ceph_stable_release: luminous
  cluster_network: 100.66.6.0/23
  debian_ceph_packages:
    - ceph
    - ceph-common
    - ceph-fuse
  monitor_interface: eth2
  openstack_config: True
  osd_group_name: ceph-osd
  osd_objectstore: bluestore
  osd_scenario: collocated
  public_network: 198.19.0.0/24

cmpa groups: compute_hosts network_hosts metering-compute_hosts storage_hosts
sto groups: ceph-mon_hosts ceph-osd_hosts image_hosts storage-infra_hosts storage_hosts

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for openstack-ansible because there has been no activity for 60 days.]

Changed in openstack-ansible:
status: Incomplete → Expired