local_gb in hypervisor statistics is wrong when using rbd

Bug #1429084 reported by kaka
This bug affects 1 person
Affects: OpenStack Compute (nova)
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

env:
  two compute nodes (node-1, node-2)
  node-4 as the storage node
  Ceph used as the storage backend
  OpenStack Icehouse

description:
#ceph -s

    cluster 97cbee3f-26dc-4f03-aa1c-e4af9xxxxx
     health HEALTH_OK
     monmap e5: 3 mons at {node-1=10.11.0.2:6789/0,node-2=10.11.0.6:6789/0,node-4=10.11.0.7:6789/0}, election epoch 30, quorum 0,1,2 node-1,node-2,node-4
     osdmap e153: 6 osds: 6 up, 6 in
      pgmap v2543: 1216 pgs, 6 pools, 2792 MB data, 404 objects
            5838 MB used, 11163 GB / 11168 GB avail
                1215 active+clean
                   1 active+clean+scrubbing
  client io 0 B/s rd, 2463 B/s wr, 1 op/s

The cluster-wide storage reported above is "11163 GB / 11168 GB" (available / total).
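Each compute node configured with the RBD image backend obtains this same cluster-wide figure for its own resource report. Below is a minimal sketch of that query using python-rados; nova's libvirt driver does something similar, but the helper name and configuration path here are assumptions, not nova's actual code.

    # Minimal sketch, assuming python-rados is installed and
    # /etc/ceph/ceph.conf points at the cluster above.
    import rados

    def get_cluster_capacity_gb(conffile='/etc/ceph/ceph.conf'):
        cluster = rados.Rados(conffile=conffile)
        cluster.connect()
        try:
            stats = cluster.get_cluster_stats()
            # 'kb' and 'kb_avail' are cluster-wide totals in kilobytes, so
            # every compute node attached to the same cluster sees the same
            # numbers.
            total_gb = stats['kb'] // (1024 * 1024)
            avail_gb = stats['kb_avail'] // (1024 * 1024)
            return total_gb, avail_gb
        finally:
            cluster.shutdown()

    print(get_cluster_capacity_gb())  # roughly (11168, 11163) for this cluster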

But when the same information is retrieved through the nova API, the result is as follows:

{"hypervisor_statistics": {"count": 2, "error_vms": 0, "vcpus_used": 6, "total_vms": 5, "run_vms": 5, "local_gb_used": 0, "memory_mb": 241915, "current_workload": 0, "vcpus": 48, "running_vms": 5, "free_disk_gb": 22336, "stop_vms": 0, "disk_available_least": 22326, "local_gb": 22336, "free_ram_mb": 234747, "memory_mb_used": 7168}}

"disk_available_least" is 22326, "local_gb" is 22336

reason:

I have two compute nodes, and every node reports its own resources to the controller node. Because each compute node gets the disk info of the Ceph cluster through the rados client, both nodes report the full cluster capacity, so "disk_available_least" and "local_gb" are counted twice (see the sketch below).
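To make the double counting concrete, here is a minimal sketch (not nova's actual aggregation code) of how the hypervisor-statistics view sums the per-node values reported above:

    # Each node backed by the same Ceph cluster reports the full cluster
    # capacity as its own local storage.
    hypervisors = [
        {'host': 'node-1', 'local_gb': 11168, 'disk_available_least': 11163},
        {'host': 'node-2', 'local_gb': 11168, 'disk_available_least': 11163},
    ]

    # Summing across hypervisors counts the shared storage once per node.
    stats = {
        'local_gb': sum(h['local_gb'] for h in hypervisors),
        'disk_available_least': sum(h['disk_available_least']
                                    for h in hypervisors),
    }
    print(stats)  # {'local_gb': 22336, 'disk_available_least': 22326}

These sums match the values returned by the API above, i.e. twice the size of the single shared cluster.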
