Comment 1 for bug 1578036

dob (nnex) wrote: Re: backup to 2nd ceph cluster

I tried to debug this error.

ceph.py at line 859:
        do_full_backup = False
        if self._file_is_rbd(volume_file):
            # If volume an RBD, attempt incremental backup.
            try:
                self._backup_rbd(backup_id, volume_id, volume_file,
                                 volume_name, length)
            except exception.BackupRBDOperationFailed:
                LOG.debug("Forcing full backup of volume %s.", volume_id)
                do_full_backup = True
        else:
            do_full_backup = True

        if do_full_backup:
            self._full_backup(backup_id, volume_id, volume_file,
                              volume_name, length)

But something goes wrong and the function _file_is_rbd returns False.

Let's look at the '_file_is_rbd' function in ceph.py at line 683:
    def _file_is_rbd(self, volume_file):
        """Returns True if the volume_file is actually an RBD image."""
        return hasattr(volume_file, 'rbd_image')

This means the 'rbd_image' attribute was never assigned.

The 'rbd_image' attribute belongs to the 'RBDImageIOWrapper' class in cinder/volume/drivers/rbd.py.
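To make the duck-typing check concrete, here is a minimal, self-contained sketch (FakeRBDWrapper and FakeFileWrapper are hypothetical stand-ins for illustration, not the real cinder/os_brick classes): only an object that actually carries an rbd_image attribute takes the incremental backup path.

import io


class FakeRBDWrapper(io.RawIOBase):
    """Stand-in for a wrapper that exposes the underlying RBD image."""

    def __init__(self, image):
        super(FakeRBDWrapper, self).__init__()
        self.rbd_image = image  # the attribute the backup driver looks for


class FakeFileWrapper(io.RawIOBase):
    """Stand-in for a plain file-like wrapper with no RBD attribute."""


def file_is_rbd(volume_file):
    # Same duck-typing test as cinder's _file_is_rbd().
    return hasattr(volume_file, 'rbd_image')


print(file_is_rbd(FakeRBDWrapper(image="volume-8a554a85")))  # True  -> incremental backup
print(file_is_rbd(FakeFileWrapper()))                        # False -> forced full backup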

I tried to print out the attribute list of volume_file (dir(volume_file)):
['__abstractmethods__', '__class__', '__delattr__', '__doc__', '__enter__', '__exit__', '__format__', '__getattribute__', '__hash__', '__init__', '__iter__', '__metaclass__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '_abc_cache', '_abc_negative_cache', '_abc_negative_cache_version', '_abc_registry', '_checkClosed', '_checkReadable', '_checkSeekable', '_checkWritable', '_inc_offset', 'close', 'closed', 'fileno', 'flush', 'isatty', 'next', 'read', 'readable', 'readall', 'readline', 'readlines', 'seek', 'seekable', 'tell', 'truncate', 'writable', 'write', 'writelines']

Note that 'rbd_image' is not in the list.

These attributes match the RBDVolumeIOWrapper class in os_brick/initiator/linuxrbd.py.

In sum, cinder-backup did not identify the image as RBD.
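A quick way to confirm this on a running system is to log what the backup driver actually receives right before the check (a hypothetical debug statement, not part of cinder, placed just above the _file_is_rbd() call in cinder/backup/drivers/ceph.py):

LOG.debug("volume_file is %(type)s, has rbd_image attribute: %(has)s",
          {'type': type(volume_file).__name__,
           'has': hasattr(volume_file, 'rbd_image')})

With debug = true this shows whether the wrapper handed over by the volume driver exposes the attribute at all.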

My cinder.conf on the block node:
[DEFAULT]
debug = true
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.30.17.21
enabled_backends = lvm,rbd
glance_host = controller
control_exchange = cinder
notification_driver = messagingv2
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/bak.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
[database]
connection = mysql://cinder:XXXXXXX@controller/cinder
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = XXXXXXXXX
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = XXXXXXX
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = os-volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = XXXXXXX
volume_backend_name = volume_ceph
[oslo_concurrency]
lock_path = /var/lock/cinder
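Since backup_ceph_conf points the backup driver at the second cluster, /etc/ceph/bak.conf has to describe that cluster's monitors and the cinder-backup keyring. A hypothetical minimal layout (the fsid, monitor addresses and keyring path are placeholders, not taken from my setup):

[global]
fsid = <fsid of the backup cluster>
mon_host = <monitor addresses of the backup cluster>

[client.cinder-backup]
keyring = /etc/ceph/bak.client.cinder-backup.keyring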

And my volumes:
root@controller:/var/log# cinder list
+--------------------------------------+-----------+-------+------+-------------+----------+-------------+
| ID                                   | Status    | Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------+------+-------------+----------+-------------+
| 8a554a85-a49b-45c4-9f83-a40a45f0c544 | available | test2 | 1    | rbd         | false    |             |
| dea5342c-4625-40fc-ab8f-d147b2d224b2 | available | test3 | 1    | rbd         | false    |             |
+--------------------------------------+-----------+-------+------+-------------+----------+-------------+
root@controller:/var/log# cinder type-list
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| 903bc236-f612-4253-9763-d7154987fe66 | rbd  |             | True      |
| cae9eee2-fbad-4082-9635-e48788a2f4e1 | lvm  | -           | True      |
+--------------------------------------+------+-------------+-----------+
root@controller:/var/log# cinder type-show 903bc236-f612-4253-9763-d7154987fe66
+---------------------------------+--------------------------------------+
| Property                        | Value                                |
+---------------------------------+--------------------------------------+
| description                     |                                      |
| extra_specs                     | {}                                   |
| id                              | 903bc236-f612-4253-9763-d7154987fe66 |
| is_public                       | True                                 |
| name                            | rbd                                  |
| os-volume-type-access:is_public | True                                 |
| qos_specs_id                    | None                                 |
+---------------------------------+--------------------------------------+
cinder service-list
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host        | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
| cinder-backup    | block1      | nova | enabled | up    | 2016-05-19T06:52:27.000000 | -               |
| cinder-scheduler | controller  | nova | enabled | up    | 2016-05-19T06:52:44.000000 | -               |
| cinder-volume    | block1@ceph | nova | enabled | down  | 2016-05-19T05:36:40.000000 | -               |
| cinder-volume    | block1@lvm  | nova | enabled | up    | 2016-05-19T06:52:24.000000 | -               |
| cinder-volume    | block1@rbd  | nova | enabled | up    | 2016-05-19T06:52:23.000000 | -               |
+------------------+-------------+------+---------+-------+----------------------------+-----------------+