Thanks, Gorka.
In this case, I created a 1GB volume in ceph2, then ran the CLI "cinder get-pools --detail" and saw allocated_capacity_gb as follows:
[root@CG_CEPH ~(keystone_admin)]# cinder get-pools --detail
+-----------------------+----------------------------+
| Property              | Value                      |
+-----------------------+----------------------------+
| allocated_capacity_gb | 2                          |
| driver_version        | 1.2.0                      |
| filter_function       | None                       |
| free_capacity_gb      | 18511.51                   |
| goodness_function     | None                       |
| multiattach           | True                       |
| name                  | cinder@ceph#ceph           |
| pool_name             | ceph                       |
| reserved_percentage   | 0                          |
| storage_protocol      | ceph                       |
| timestamp             | 2017-07-22T09:52:53.998642 |
| total_capacity_gb     | 18512.04                   |
| vendor_name           | Open Source                |
| volume_backend_name   | ceph                       |
+-----------------------+----------------------------+
+-----------------------+----------------------------+
| Property              | Value                      |
+-----------------------+----------------------------+
| allocated_capacity_gb | 2                          |
| driver_version        | 1.2.0                      |
| filter_function       | None                       |
| free_capacity_gb      | 18511.51                   |
| goodness_function     | None                       |
| multiattach           | True                       |
| name                  | cinder@ceph2#ceph2         |
| pool_name             | ceph2                      |
| reserved_percentage   | 0                          |
| storage_protocol      | ceph                       |
| timestamp             | 2017-07-22T09:53:31.180495 |
| total_capacity_gb     | 18512.51                   |
| vendor_name           | Open Source                |
| volume_backend_name   | ceph2                      |
+-----------------------+----------------------------+
Second step:
After migrating the volume to ceph, I ran "cinder get-pools --detail" again and saw allocated_capacity_gb as follows:
[root@CG_CEPH ~(keystone_admin)]# cinder get-pools --detail
+-----------------------+----------------------------+
| Property              | Value                      |
+-----------------------+----------------------------+
| allocated_capacity_gb | 3                          |
| driver_version        | 1.2.0                      |
| filter_function       | None                       |
| free_capacity_gb      | 18510.02                   |
| goodness_function     | None                       |
| multiattach           | True                       |
| name                  | cinder@ceph#ceph           |
| pool_name             | ceph                       |
| reserved_percentage   | 0                          |
| storage_protocol      | ceph                       |
| timestamp             | 2017-07-22T09:57:54.111857 |
| total_capacity_gb     | 18511.55                   |
| vendor_name           | Open Source                |
| volume_backend_name   | ceph                       |
+-----------------------+----------------------------+
+-----------------------+----------------------------+
| Property              | Value                      |
+-----------------------+----------------------------+
| allocated_capacity_gb | 2                          |
| driver_version        | 1.2.0                      |
| filter_function       | None                       |
| free_capacity_gb      | 18510.02                   |
| goodness_function     | None                       |
| multiattach           | True                       |
| name                  | cinder@ceph2#ceph2         |
| pool_name             | ceph2                      |
| reserved_percentage   | 0                          |
| storage_protocol      | ceph                       |
| timestamp             | 2017-07-22T09:58:31.185126 |
| total_capacity_gb     | 18511.02                   |
| vendor_name           | Open Source                |
| volume_backend_name   | ceph2                      |
+-----------------------+----------------------------+
The allocated_capacity_gb for ceph increased, but ceph2 remained the same. This is confusing, so it may be a bug.
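To make the suspected accounting error concrete, here is a simplified model of per-pool allocated_capacity_gb bookkeeping (this is an illustration only, not Cinder's actual code; the function and variable names are hypothetical). Under the assumption that migration increments the destination pool's counter but never decrements the source pool's, the model reproduces exactly the numbers shown above:

```python
# Hypothetical sketch of per-pool allocated_capacity_gb bookkeeping.
# Starting values taken from the first "cinder get-pools --detail" listing,
# after the 1GB volume was created in ceph2.
pools = {"ceph": 2, "ceph2": 2}

def migrate_volume(src, dst, size_gb, decrement_source=False):
    """Model of migration accounting: the destination is always
    incremented; the source decrement is the step that appears to
    be missing in the observed behavior."""
    pools[dst] += size_gb
    if decrement_source:
        pools[src] -= size_gb

# Migrate the 1GB volume from ceph2 to ceph, as in the second step.
migrate_volume("ceph2", "ceph", 1)
print(pools)  # ceph rises from 2 to 3; ceph2 stays at 2
```

This matches the second listing (ceph: 3, ceph2: 2), whereas the expected result after a completed migration would be ceph: 3 and ceph2: 1.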