When migrating a volume, the source volume is deleted but its allocated_capacity_gb is not removed

Bug #1705611 reported by jingtao liang
This bug affects 1 person
Affects: Cinder
Status: Opinion
Importance: Undecided
Assigned to: Unassigned

Bug Description

When deleting the source or destination volume as part of a migration, the quota update is skipped. Why? It seems very strange.

The destination pool's allocated_capacity_gb increases, but the source pool's allocated_capacity_gb stays the same.
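
For context, this matches the delete path skipping the quota adjustment when the volume being removed is part of a migration. Below is a minimal sketch of that kind of check; the function and attribute names are simplified assumptions for illustration, not the actual Cinder code.

# Simplified sketch (assumed names, not the real cinder.volume.manager code).
# When a volume that is part of a migration is deleted, the quota adjustment
# is skipped on purpose: from the user's point of view the volume still
# exists on the other backend.
def delete_volume(context, volume, QUOTAS):
    is_migrating = volume.get('migration_status') is not None

    # ... driver-level deletion of the backing image/LUN happens here ...

    if not is_migrating:
        # Normal delete: return the quota to the project and commit it.
        reservations = QUOTAS.reserve(
            context,
            project_id=volume['project_id'],
            volumes=-1,
            gigabytes=-volume['size'],
        )
        QUOTAS.commit(context, reservations, project_id=volume['project_id'])
    # else: migration clean-up; the quota was already accounted for when the
    # volume was first created, so nothing changes here.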

Revision history for this message
Gorka Eguileor (gorka) wrote :

If I remember correctly, the quotas were already calculated at an earlier stage. For example, migrating one volume from one backend to another still leaves you with just 1 volume used: 1 new volume on backend X - 1 deleted volume on backend Y = 0 volume difference on the quotas.

Quotas and free space on backends are completely different things, so the decrease of storage on the pools will not need to be reflected on the quotas.

If you have a specific case where you see that the quotas after a migration are incorrect, please reopen the bug with the specific information.
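
To make the arithmetic concrete, here is a toy illustration (not Cinder code) of the distinction: project quotas track what the user owns, while per-pool allocated capacity tracks what each backend has provisioned, so a migration moves the latter but nets out to zero on the former. The starting figures are taken from the pool listings later in this report.

# Toy illustration (not Cinder code): quotas are per project, allocated
# capacity is per backend pool.  Moving a 1 GB volume between pools changes
# the per-pool bookkeeping but leaves the project's quota usage untouched.
project_usage = {'volumes': 1, 'gigabytes': 1}   # the user still owns 1 x 1 GB volume
pool_allocated_gb = {'cinder@ceph#ceph': 2, 'cinder@ceph2#ceph2': 2}

def migrate(src, dst, size_gb):
    # Backend-level bookkeeping moves the capacity...
    pool_allocated_gb[dst] += size_gb
    pool_allocated_gb[src] -= size_gb
    # ...but +1 volume on the destination and -1 volume on the source cancel
    # out, so project_usage is not touched at all.

migrate('cinder@ceph2#ceph2', 'cinder@ceph#ceph', 1)
print(project_usage)      # {'volumes': 1, 'gigabytes': 1}
print(pool_allocated_gb)  # {'cinder@ceph#ceph': 3, 'cinder@ceph2#ceph2': 1}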

Changed in cinder:
status: New → Invalid
Changed in cinder:
status: Invalid → Opinion
Revision history for this message
jingtao liang (liang-jingtao) wrote :

Thanks, Gorka.

In this case, I created a 1 GB volume in ceph2 and then used the CLI "cinder get-pools --detail" to check allocated_capacity_gb, as follows:

[root@CG_CEPH ~(keystone_admin)]# cinder get-pools --detail
+------------------------+----------------------------+
| Property               | Value                      |
+------------------------+----------------------------+
| allocated_capacity_gb  | 2                          |
| driver_version         | 1.2.0                      |
| filter_function        | None                       |
| free_capacity_gb       | 18511.51                   |
| goodness_function      | None                       |
| multiattach            | True                       |
| name                   | cinder@ceph#ceph           |
| pool_name              | ceph                       |
| reserved_percentage    | 0                          |
| storage_protocol       | ceph                       |
| timestamp              | 2017-07-22T09:52:53.998642 |
| total_capacity_gb      | 18512.04                   |
| vendor_name            | Open Source                |
| volume_backend_name    | ceph                       |
+------------------------+----------------------------+
+------------------------+----------------------------+
| Property               | Value                      |
+------------------------+----------------------------+
| allocated_capacity_gb  | 2                          |
| driver_version         | 1.2.0                      |
| filter_function        | None                       |
| free_capacity_gb       | 18511.51                   |
| goodness_function      | None                       |
| multiattach            | True                       |
| name                   | cinder@ceph2#ceph2         |
| pool_name              | ceph2                      |
| reserved_percentage    | 0                          |
| storage_protocol       | ceph                       |
| timestamp              | 2017-07-22T09:53:31.180495 |
| total_capacity_gb      | 18512.51                   |
| vendor_name            | Open Source                |
| volume_backend_name    | ceph2                      |
+------------------------+----------------------------+

Second step:

After migrating the volume to ceph, use the CLI "cinder get-pools --detail" to check allocated_capacity_gb, as follows:

[root@CG_CEPH ~(keystone_admin)]# cinder get-pools --detail
+------------------------+----------------------------+
| Property               | Value                      |
+------------------------+----------------------------+
| allocated_capacity_gb  | 3                          |
| driver_version         | 1.2.0                      |
| filter_function        | None                       |
| free_capacity_gb       | 18510.02                   |
| goodness_function      | None                       |
| multiattach            | True                       |
| name                   | cinder@ceph#ceph           |
| pool_name              | ceph                       |
| reserved_percentage    | 0                          |
| storage_protocol       | ceph                       |
| timestamp              | 2017-07-22T09:57:54.111857 |
| total_capacity_gb      | 18511.55                   |
| vend...


Revision history for this message
Gorka Eguileor (gorka) wrote :

OK, I have checked this and the problem is in the way the RBD driver is reporting the data back to the Scheduler.

I have created two new bugs [1][2] to report the issues. The problem is caused by the RBD driver not reporting allocated_capacity_gb, which would let the scheduler correct its internal calculations [1].

[1] https://bugs.launchpad.net/cinder/+bug/1706057
[2] https://bugs.launchpad.net/cinder/+bug/1706060
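
For readers following along, the direction implied by [1] is for the driver to report allocated_capacity_gb in its stats, so the scheduler can correct its internal running total on every stats refresh instead of carrying a stale value across a migration. The sketch below is illustrative only (simplified names, with capacity figures borrowed from the output above); it is not the actual RBD driver patch.

# Illustrative sketch of a driver including allocated_capacity_gb in the
# stats it reports to the scheduler (simplified; not the actual RBD patch).
# When this key is present, the scheduler's host state can be overwritten
# with the driver's figure on each update, so a delete performed during a
# migration cannot leave a stale allocation behind.
class SketchDriver(object):
    def __init__(self):
        # Assumed helper state: sizes (in GB) of the volumes on this backend.
        self._provisioned_sizes_gb = [1, 2]

    def get_volume_stats(self, refresh=False):
        return {
            'volume_backend_name': 'ceph',
            'vendor_name': 'Open Source',
            'driver_version': '1.2.0',
            'storage_protocol': 'ceph',
            'total_capacity_gb': 18512.04,
            'free_capacity_gb': 18511.51,
            # The key point: report the allocation explicitly instead of
            # leaving the scheduler to infer it from create/delete events.
            'allocated_capacity_gb': sum(self._provisioned_sizes_gb),
        }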
