Cinder backup error using Erasure Code Pool

Bug #1903713 reported by Joao
This bug affects 1 person
Affects: Cinder
Status: Invalid
Importance: Medium
Assigned to: Ivan Kolodyazhny

Bug Description

Error when trying to create a backup of a volume of type hdd_backup (this volume type uses an Erasure Code pool).

OpenStack Ussuri
Ceph 15.2.5 Octopus (stable)
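For reference, a backup of this kind is requested with the standard client call (the volume ID placeholder here is illustrative, not taken from the report):

$ openstack volume backup create --name hdd_backup <volume-id>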

Logs on cinder-volume

2020-11-05 10:30:56.566 10 INFO cinder.backup.manager [req-0508e63f-632c-4fea-a478-07b2ea68d5e5 b2255b93575f4f04b0246c3d2ad921e4 06be0789bc974d3ebbfedf5fa5b3f4b0 - default default] Create backup started, backup: 53ddc6d0-73a1-4d8f-8531-57ad15b0e5d0 volume: c0a52f14-60fb-4e60-af40-5f679dd46bcd.

Logs on cinder-backup

2020-11-05 10:30:57.312 594 INFO cinder.volume.manager [req-0508e63f-632c-4fea-a478-07b2ea68d5e5 b2255b93575f4f04b0246c3d2ad921e4 06be0789bc974d3ebbfedf5fa5b3f4b0 - default default] Initialize volume connection completed successfully.
2020-11-05 10:30:57.396 594 INFO cinder.volume.manager [req-0508e63f-632c-4fea-a478-07b2ea68d5e5 b2255b93575f4f04b0246c3d2ad921e4 06be0789bc974d3ebbfedf5fa5b3f4b0 - default default] Terminate volume connection completed successfully.

cinder.conf:

backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_conf = /etc/ceph/cluster01_ceph.conf
backup_ceph_user = cluster01-ceph
backup_ceph_pool = hdd_enterprise
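One note on the settings above: with a non-default cluster name, Ceph's $cluster.$name.keyring convention means the backup service would look for the keyring at a path like the following (derived from the config above; the report does not confirm where the keyring actually lives):

$ ls -l /etc/ceph/cluster01_ceph.client.cluster01-ceph.keyring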

ceph user:

[client.cluster01-ceph]
 key = my_key
 caps mon = "allow r"
 caps osd = "allow rwx pool=hdd_enterprise, allow rwx pool=hdd_standard, allow rwx pool=hdd_backup, allow rwx pool=cluster01_volume_ec_omap"
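For reference, caps like these are inspected and reapplied with the standard auth commands (a sketch; assumes admin credentials against the same cluster):

$ ceph auth get client.cluster01-ceph
$ ceph auth caps client.cluster01-ceph \
    mon 'allow r' \
    osd 'allow rwx pool=hdd_enterprise, allow rwx pool=hdd_standard, allow rwx pool=hdd_backup, allow rwx pool=cluster01_volume_ec_omap'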

pool details:

'cluster01_volume_ec_omap' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 1977 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd

'hdd_enterprise' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 2206 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd

'hdd_backup' erasure profile hdd_backup size 4 min_size 3 crush_rule 6 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 1992 flags hashpspool,ec_overwrites,selfmanaged_snaps stripe_width 12288 application rbd
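For context, an erasure-coded pool usable by RBD, like hdd_backup above, is normally set up along these lines; RBD keeps image metadata in a replicated pool and only the data in the EC pool (a sketch: the pairing with cluster01_volume_ec_omap as the metadata pool is inferred from its name, and the image name is illustrative):

$ ceph osd pool create hdd_backup 32 32 erasure hdd_backup
$ ceph osd pool set hdd_backup allow_ec_overwrites true
$ ceph osd pool application enable hdd_backup rbd
$ rbd create --size 1G --data-pool hdd_backup cluster01_volume_ec_omap/test_image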

michael-mcaleer (mmcaleer) wrote:

Hi,

It's not clear what the error is here. Can you include the exact error or exception message that you are seeing? It would also help to include debug logs that show the error and the actions leading up to it.

Thanks.
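For example, debug logging is enabled with the standard oslo.log option in cinder.conf (followed by a restart of the cinder services):

[DEFAULT]
debug = True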

tags: added: backup erasure error
Joao (jacpjr) wrote:

Hi Michael,

I cannot see any errors in the cinder logs. Only an error message appears in Horizon.

For example:

$ openstack volume backup show c47c0680-bf58-4d81-99ea-3447ef3093a3
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| availability_zone | None |
| container | None |
| created_at | 2020-11-16T12:38:47.000000 |
| data_timestamp | 2020-11-16T12:38:47.000000 |
| description | |
| fail_reason | Error connecting to ceph cluster. |
| has_dependent_backups | False |
| id | c47c0680-bf58-4d81-99ea-3447ef3093a3 |
| is_incremental | False |
| name | hdd_backup |
| object_count | 0 |
| size | 1 |
| snapshot_id | None |
| status | error |
| updated_at | 2020-11-16T12:38:48.000000 |
| volume_id | 2f1a7195-b12a-40be-8f10-d1f403eda8d8 |
+-----------------------+--------------------------------------+

Despite the "Error connecting to ceph cluster" message, I can connect to Ceph normally. The problem occurs only with volumes that use Erasure Code, and the cinder-backup log does not show any error about it.
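For example, checks like the following succeed from the backup host, using the same conf file and user that cinder-backup is configured with (commands illustrative, assembled from the cinder.conf above):

$ ceph --conf /etc/ceph/cluster01_ceph.conf --id cluster01-ceph -s
$ rbd --conf /etc/ceph/cluster01_ceph.conf --id cluster01-ceph ls hdd_enterprise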

root@master01:~# kubectl logs -n openstack cinder-backup-5df75d59f7-njdkl | grep c47c0680
2020-11-16 12:38:47.392 10 INFO cinder.backup.manager [req-e6e1090c-c037-4818-a349-9ecdf6b74140 b2255b93575f4f04b0246c3d2ad921e4 06be0789bc974d3ebbfedf5fa5b3f4b0 - default default] Create backup started, backup: c47c0680-bf58-4d81-99ea-3447ef3093a3 volume: 2f1a7195-b12a-40be-8f10-d1f403eda8d8.

Regards

Ivan Kolodyazhny (e0ne)
Changed in cinder:
assignee: nobody → Ivan Kolodyazhny (e0ne)
Changed in cinder:
status: New → Triaged
importance: Undecided → Medium
Eric Harney (eharney)
Changed in cinder:
status: Triaged → Incomplete
Eric Harney (eharney)
Changed in cinder:
status: Incomplete → Invalid