Empty metadata in results

Bug #1202749 reported by Frédéric FAURE
This bug affects 3 people
Affects: Ceilometer
Status: Fix Released
Importance: High
Assigned to: Feilong Wang
Milestone: 2013.2

Bug Description

I installed a new Devstack this week and I have empty metadata. I followed the tips in http://docs.openstack.org/developer/ceilometer/install/development.html for activating notifications in Nova and Cinder.

Here is an example of a sample I got when querying the cpu_util meter (memory_mb, vcpus, root_gb, ... are empty):

[
    {
        "counter_name": "cpu_util",
        "user_id": "4790fbafad2e44dab37b1d7bfc36299b",
        "resource_id": "87acaca4-ae45-43ae-ac91-846d8d96a89b",
        "timestamp": "2013-07-17T15:43:31",
        "resource_metadata": {
            "cpu_number": "1",
            "ramdisk_id": "56f7f164-2db9-48cd-bf35-f2068439729f",
            "display_name": "test-fred-001",
            "name": "instance-00000001",
            "disk_gb": "",
            "kernel_id": "ed022041-3fdc-4465-b9d3-b46640c33e5d",
            "ephemeral_gb": "",
            "host": "eb5bc97aa612df408bd4d314e79d5c2bcf29bb59af41b8e32bcdd8d5",
            "memory_mb": "",
            "instance_type": "a7820591-99f7-44ff-862b-e49f0eee3893",
            "vcpus": "",
            "root_gb": "",
            "image_ref": "a9b5d9c3-2683-49ef-8eeb-23c6e66b6db3",
            "architecture": "",
            "os_type": "",
            "OS-EXT-AZ:availability_zone": "nova",
            "reservation_id": "",
            "image_ref_url": "http://10.0.2.15:8774/09cbbbd2b0674bcca6869202bac5f668/images/a9b5d9c3-2683-49ef-8eeb-23c6e66b6db3"
        },
        "source": "openstack",
        "counter_unit": "%",
        "counter_volume": 8.57762938230384,
        "project_id": "97f9a6aaa9d842fcab73797d3abb2f53",
        "message_id": "a2074f6e-eef7-11e2-90f0-080027880ca6",
        "counter_type": "gauge"
    }
]

Revision history for this message
John Tran (jtran) wrote:

Can you clarify if you're using trunk or a particular branch?

Revision history for this message
Frédéric FAURE (frederic-faure) wrote: Re: [Bug 1202749] Re: Empty metadata in results

I am using the master branch.

Julien Danjou (jdanjou)
Changed in ceilometer:
status: New → Triaged
importance: Undecided → High
Revision history for this message
Steve Vezina (steve.vezina) wrote:

Yeah, it seems the metadata only gets filled when you destroy the instance.

Revision history for this message
Frédéric FAURE (frederic-faure) wrote:

Here is the localrc config file from my Devstack install, in case it helps:

-------------------------------------------------------------------
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-sch,n-cauth,horizon,mysql,rabbit,sysstat,cinder,c-api,c-vol,c-sch,n-cond,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-lbaas,n-novnc,n-xvnc,ceilometer-acompute,ceilometer-acentral,ceilometer-collector,ceilometer-api,swift
EXTRA_OPTS=(notification_driver=nova.openstack.common.notifier.rabbit_notifier,ceilometer.compute.nova_notifier)
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=/os-data/swift
export no_proxy="localhost,127.0.0.1"

OFFLINE=True
-------------------------------------------------------------------

After that, as I said in the bug description, I followed the tips in http://docs.openstack.org/developer/ceilometer/install/development.html for activating notifications in Nova and Cinder.

Revision history for this message
Guangyu Suo (yugsuo) wrote:

Yeah, same situation with /meters/cpu:

suo@ustack:~$ curl -H "X-Auth-Token:$AUTH_TOKEN" http://10.96.24.95:8777/v2/meters/cpu | ~/devstack/jsontool.py
[
  {
    "counter_name": "cpu",
    "user_id": "16e6e91ddbb54faf956d4a30320c3cd6",
    "resource_id": "61e466e6-840c-4e78-b2f0-85822d5299dc",
    "timestamp": "2013-07-30T08:47:01",
    "message_id": "9a0a3e6e-f8f4-11e2-ac8b-689423ac42ad",
    "source": "openstack",
    "counter_unit": "ns",
    "counter_volume": 121130000000.0,
    "project_id": "5943a2cff03a49069e7f3d6e8a7b1d92",
    "resource_metadata": {
      "cpu_number": "1",
      "ephemeral_gb": "",
      "display_name": "vm1",
      "name": "instance-00000001",
      "disk_gb": "",
      "kernel_id": "4685395a-9429-4d50-a98b-43efdb77aca8",
      "ramdisk_id": "c6c411d8-94d4-46cd-a450-5cf0b41cbe6c",
      "vcpus": "",
      "memory_mb": "",
      "instance_type": "1",
      "host": "642f859fb06c6a7cce426d4dcce168be535b1c122337550401a77dda",
      "root_gb": "",
      "image_ref": "d84b8535-4130-4198-9eb8-1f8946128b50",
      "architecture": "",
      "os_type": "",
      "OS-EXT-AZ:availability_zone": "nova",
      "reservation_id": "",
      "image_ref_url": "http://10.96.24.95:8774/ef8dea1f5af94d7680f63bcda9ce5aaa/images/d84b8535-4130-4198-9eb8-1f8946128b50"
    },
    "counter_type": "cumulative"
  }
]

Revision history for this message
Feilong Wang (flwang) wrote:

Based on the current implementation, all of these metrics are collected via Nova notifications. So it would be nice if you could post your nova.conf file, especially the notification part. I will try to reproduce this in my environment and update with the result.

Changed in ceilometer:
assignee: nobody → Fei Long Wang (flwang)
Revision history for this message
Guangyu Suo (yugsuo) wrote:

Hi, Fei Long:

The cpu and cpu_util meters use the pollster plugin to collect data through nova_client (nova-api). I think the problem here is that nova_client doesn't provide enough information via instance_get_all_by_host() or server.list(), am I right?

Thanks!
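
For illustration, here is a minimal sketch of what the pollster sees through python-novaclient; the credentials and endpoint are placeholders, and the era-appropriate novaclient.v1_1 client is assumed:

from novaclient.v1_1 import client as nova_client

# Placeholder credentials/endpoint for a Devstack-like setup.
nova = nova_client.Client('admin', 'password', 'admin',
                          'http://10.0.2.15:5000/v2.0')
for server in nova.servers.list():
    # server.flavor carries only an id and links -- no memory_mb, vcpus
    # or root_gb -- and attributes such as architecture, os_type and
    # reservation_id do not exist on the server object at all.
    print(server.name, server.flavor)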

Revision history for this message
Feilong Wang (flwang) wrote:

Thanks for the reminder. After reviewing the code, it seems both the pollster and the notification handler try to update these attributes. See: https://github.com/openstack/ceilometer/blob/master/ceilometer/compute/notifications.py and https://github.com/openstack/ceilometer/blob/master/ceilometer/compute/pollsters/util.py#L94

So I need to investigate more deeply to determine whether there is a race condition.

Revision history for this message
Frédéric FAURE (frederic-faure) wrote:

Here is my /etc/nova/nova.conf file:

[DEFAULT]
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
service_quantum_metadata_proxy = True
linuxnet_interface_driver =
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
security_group_api = quantum
quantum_url = http://10.0.2.15:9696
quantum_admin_tenant_name = service
quantum_auth_strategy = keystone
quantum_admin_auth_url = http://10.0.2.15:35357/v2.0
quantum_admin_password = password
quantum_admin_username = neutron
network_api_class = nova.network.quantumv2.api.API
glance_api_servers = 10.0.2.15:9292
rabbit_password = password
rabbit_host = localhost
rpc_backend = nova.openstack.common.rpc.impl_kombu
ec2_dmz_host = 10.0.2.15
vncserver_proxyclient_address = 127.0.0.1
vncserver_listen = 127.0.0.1
vnc_enabled = true
xvpvncproxy_base_url = http://10.0.2.15:6081/console
novncproxy_base_url = http://10.0.2.15:6080/vnc_auto.html
notification_driver = nova.openstack.common.notifier.rabbit_notifier,ceilometer.compute.nova_notifier
notification_driver = nova.openstack.common.notifier.rabbit_notifier,ceilometer.compute.nova_notifier
notify_on_any_change = True
notify_on_state_change = vm_and_task_state
instance_usage_audit_period = hour
instance_usage_audit = True
logging_exception_prefix = %(color)s%(asctime)s.%(msecs)03d TRACE %(name)s %(instance)s
logging_debug_format_suffix = from (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d
logging_default_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [-%(color)s] %(instance)s%(color)s%(message)s
logging_context_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [%(request_id)s %(user_name)s %(project_name)s%(color)s] %(instance)s%(color)s%(message)s
instances_path = /opt/stack/data/nova/instances
lock_path = /opt/stack/data/nova
state_path = /opt/stack/data/nova
volume_api_class = nova.volume.cinder.API
enabled_apis = ec2,osapi_compute,metadata
instance_name_template = instance-%08x
libvirt_cpu_mode = none
libvirt_type = qemu
sql_connection = mysql://root:password@localhost/nova?charset=utf8
my_ip = 10.0.2.15
osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
s3_port = 3333
s3_host = 10.0.2.15
default_floating_pool = public
fixed_range =
force_dhcp_release = True
dhcpbridge_flagfile = /etc/nova/nova.conf
compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
rootwrap_config = /etc/nova/rootwrap.conf
api_paste_config = /etc/nova/api-paste.ini
allow_resize_to_same_host = True
auth_strategy = keystone
debug = True
verbose = True

[osapi_v3]
enabled = True

[spice]
enabled = false
html5proxy_base_url = http://10.0.2.15:6082/spice_auto.html

Revision history for this message
Frédéric FAURE (frederic-faure) wrote:

I queried the resource corresponding to my instance and got a list of meter links that is smaller than the one in http://docs.openstack.org/developer/ceilometer/measurements.html => it lacks all the "notification" meters.

If I try http://localhost:8777/v2/meters/memory?q.field=resource_id&q.value=989615b4-aa18-4772-b9dc-2bf439664f23 (which is not in the returned list of meter links), I actually get [], and the same goes for all the "notification" meters: memory, vcpus, disk.root.size and disk.ephemeral.size.

The "pollster" and "both" meters are OK => filled with samples.

By the way, regarding "disk.read.request" & "disk.write.request": the "s" is missing at the end of "request" in the measurements.html doc.

The missing metadata correspond to the missing meters "memory_mb", "vcpus", "root_gb" and "ephemeral_gb", plus "disk_gb" (?), "architecture", "os_type" and "reservation_id".

Terminating the instance does NOT add any meters or metadata to the resource (I waited 20 min).
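
For reference, the empty-result check above can be scripted; here is a minimal python-requests sketch, where the token is a placeholder:

import requests

AUTH_TOKEN = '...'  # placeholder: a valid Keystone token
resp = requests.get(
    'http://localhost:8777/v2/meters/memory',
    params={'q.field': 'resource_id',
            'q.value': '989615b4-aa18-4772-b9dc-2bf439664f23'},
    headers={'X-Auth-Token': AUTH_TOKEN})
# [] confirms there are no samples for this notification-based meter.
print(resp.json())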

---------------------------------------------------------

Here is the result of the request on the resource corresponding to my instance "test-fred-001":

[
    {
        "resource_id": "989615b4-aa18-4772-b9dc-2bf439664f23",
        "project_id": "7da74e1b45264cc1897a0efd4e9a0acb",
        "user_id": "1c4eb0efcdf144e187339e6f8ef2e8d9",
        "links": [
            {
                "href": "http://localhost:8777/v2/resources/989615b4-aa18-4772-b9dc-2bf439664f23",
                "rel": "self"
            },
            {
                "href": "http://localhost:8777/v2/meters/cpu?q.field=resource_id&q.value=989615b4-aa18-4772-b9dc-2bf439664f23",
                "rel": "cpu"
            },
            {
                "href": "http://localhost:8777/v2/meters/instance?q.field=resource_id&q.value=989615b4-aa18-4772-b9dc-2bf439664f23",
                "rel": "instance"
            },
            {
                "href": "http://localhost:8777/v2/meters/instance:m1.nano?q.field=resource_id&q.value=989615b4-aa18-4772-b9dc-2bf439664f23",
                "rel": "instance:m1.nano"
            },
            {
                "href": "http://localhost:8777/v2/meters/disk.read.requests?q.field=resource_id&q.value=989615b4-aa18-4772-b9dc-2bf439664f23",
                "rel": "disk.read.requests"
            },
            {
                "href": "http://localhost:8777/v2/meters/disk.read.bytes?q.field=resource_id&q.value=989615b4-aa18-4772-b9dc-2bf439664f23",
                "rel": "disk.read.bytes"
            },
            {
                "href": "http://localhost:8777/v2/meters/disk.write.requests?q.field=resource_id&q.value=989615b4-aa18-4772-b9dc-2bf439664f23",
                "rel": "disk.write.requests"
            },
            {
                "href": "http://localhost:8777/v2/meters/disk.write.bytes?q.field=resource_id&q.value=989615b4-aa18-4772-b9dc-2bf439664f23",
                "rel": "disk.write.bytes"
            },
            {
                "href": "http://localhost:8777/v2/meters/cpu_util?q.field=resource_id&q.value=989615b4-aa18-4772-b9dc-2bf439664f23",
                "rel": "cpu_util"
            }
        ],
        "metadata": {
            "ephemeral_gb": "",
        ...


Feilong Wang (flwang)
Changed in ceilometer:
status: Triaged → In Progress
Revision history for this message
Feilong Wang (flwang) wrote:

I can recreate this issue, but my observation differs from Frédéric FAURE's (frederic-faure). All the meters collected by notification do get values: "memory_mb", "vcpus", "root_gb", "ephemeral_gb", "disk_gb", "architecture", "os_type" and "reservation_id". However, the meters collected by the pollster can't.

Based on my investigation, I think the root cause of why pollster-based meters can't get some attributes lies in the code below:
1. We define some properties Ceilometer is interested in:
https://github.com/openstack/ceilometer/blob/master/ceilometer/compute/pollsters/util.py#L27

2. We copy them into the metadata when we get the instance:
    for name in INSTANCE_PROPERTIES:
        metadata[name] = getattr(instance, name, u'')
https://github.com/openstack/ceilometer/blob/master/ceilometer/compute/pollsters/util.py#L95

3. However, those attributes don't exist on "instance", and we don't create them here:
https://github.com/openstack/ceilometer/blob/master/ceilometer/nova_client.py#L58
https://github.com/openstack/ceilometer/blob/master/ceilometer/nova_client.py#L72

So Frédéric FAURE, can you help confirm these findings? You can try starting only one of "ceilometer-collector" or "ceilometer-agent-compute" to pin down where the metadata values come from. Thanks.
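
To illustrate the failure mode of steps 1-3 as a self-contained sketch (FakeInstance is a hypothetical stand-in, not Ceilometer code):

INSTANCE_PROPERTIES = ['memory_mb', 'vcpus', 'root_gb', 'ephemeral_gb',
                       'disk_gb', 'architecture', 'os_type',
                       'reservation_id']

class FakeInstance(object):
    # Stand-in for the object built by ceilometer's nova_client wrapper:
    # it has a name but none of the properties listed above.
    name = 'instance-00000001'

instance = FakeInstance()
metadata = {}
for name in INSTANCE_PROPERTIES:
    # getattr silently falls back to u'' for every missing attribute --
    # exactly the empty strings seen in resource_metadata.
    metadata[name] = getattr(instance, name, u'')

print(metadata['memory_mb'])  # prints an empty string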

Revision history for this message
Feilong Wang (flwang) wrote:

Besides, I reviewed the meters you posted again and I believe all of them were collected by the POLLSTER, not notifications. If a sample were collected via notification, you would see an attribute like "event_type":"compute.instance.exists" in the metadata. I've posted my test result below for your reference.

So based on the above analysis, I think my guess is confirmed. Unfortunately, it seems there is no good way to get "architecture", "os_type" and "reservation_id". I will try to use the conductor API instead of calling the flavor REST API to get all of that info.

jd__, any comments?
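
For what it's worth, that observation gives a quick way to tell the two sample origins apart (illustrative helper, not Ceilometer code):

def from_notification(sample):
    # Notification-based samples carry the originating event type
    # (e.g. "compute.instance.exists") in their metadata;
    # pollster-based samples do not.
    return 'event_type' in sample['resource_metadata']
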
--------------------------------------------------------------------------------
[
    {
        "counter_name": "instance",
        "user_id": "77b8322b556547398e429affd64c1b62",
        "resource_id": "ffc9312d-918f-4714-9714-dc5394aa419c",
        "timestamp": "2013-08-08T14:09:51",
        "resource_metadata": {
            "ephemeral_gb": "",
            "display_name": "flwang_3",
            "name": "instance-00000001",
            "disk_gb": "",
            "kernel_id": "9f5054ea-1a5c-42dd-96eb-1f61ca238adf",
            "ramdisk_id": "46b8a394-a1e9-4f2f-905c-5f2ba7e31733",
            "host": "f284f693158ce5e67a28ca55095d7ffd76859ff2730bfd293cdf2f31",
            "memory_mb": "",
            "instance_type": "1",
            "vcpus": "",
            "root_gb": "",
            "image_ref": "db2a5bd7-570d-4a23-9cae-3ef224471850",
            "architecture": "",
            "os_type": "",
            "OS-EXT-AZ:availability_zone": "nova",
            "reservation_id": "",
            "image_ref_url": "http://127.0.0.1:8774/bcee647ff0ac408bb3f42bd94be6a5e8/images/db2a5bd7-570d-4a23-9cae-3ef224471850"
        },
        "source": "openstack",
        "counter_unit": "instance",
        "counter_volume": 1.0,
        "project_id": "f902f810b76f4fbfa0f22d75e4b1dc5a",
        "message_id": "3187fdee-0034-11e3-a37a-843a4b31d888",
        "counter_type": "gauge"
    },
    {
        "counter_name": "instance",
        "user_id": "77b8322b556547398e429affd64c1b62",
        "resource_id": "ffc9312d-918f-4714-9714-dc5394aa419c",
        "timestamp": "2013-08-08T14:00:43",
        "resource_metadata": {
            "ephemeral_gb": "",
            "display_name": "flwang_3",
            "name": "instance-00000001",
            "disk_gb": "",
            "kernel_id": "9f5054ea-1a5c-42dd-96eb-1f61ca238adf",
            "ramdisk_id": "46b8a394-a1e9-4f2f-905c-5f2ba7e31733",
            "host": "f284f693158ce5e67a28ca55095d7ffd76859ff2730bfd293cdf2f31",
            "memory_mb": "",
            "instance_type": "1",
            "vcpus": "",
            "root_gb": "",
            "image_ref": "db2a5bd7-570d-4a23-9cae-3ef224471850",
            "architecture": "",
            "os_type": "",
            "OS-EXT-AZ:availability_zone": "nova",
            "reservation_id": "",
            "image_ref_url": "http://127.0.0.1:8774/bcee647ff0ac408bb3f42bd94be6a5e8/images/db2a5bd7-570d-4a23-9cae-3ef224471850"
        },
        "source": "openstack",
        "counter_unit": "instance",
        "counter_volume": 1.0,
        "project_id": "f902f810b76f4fbfa0f22d75e4b1dc5a",
        "message_id": "ea706532-0032-11e3-92b2-843a4b31d888",
        "counter_type": "gauge"
    },
    {
        "counter_name": "instance",
        "user_id": "77b8322b556547398e429affd64c1b62",
        "resource_id": "ffc9312d-918f-4714-9714-dc5394aa419c",
        "timestamp": "2013-08-08T14:00:03.803000",
        "resource_metadata": {
            "state_description": "",
            "event_type": "compute.instance.exists",
            "availability_zone": "None",
            "ephemeral_gb": "0",
            "instance_type_id": "2",
            "deleted_at": "",
            "reservation_id": "r-3tsht3wx",
            "memory_mb": "512",
            "user_id": "77b8322b556547398e429affd64c1b62",
            "hostname": "flwang-3",
            "state": "active",
            "launched_at": "2013-08-08T08:44:06.000000",
            "ramdisk_id": "46b8a394-a1e9-4f2f-905c-5f2ba7e31733",
            "access_ip_v6": "None",
            "d...


Revision history for this message
OpenStack Infra (hudson-openstack) wrote: Fix proposed to ceilometer (master)

Fix proposed to branch: master
Review: https://review.openstack.org/41818

Revision history for this message
OpenStack Infra (hudson-openstack) wrote: Fix merged to ceilometer (master)

Reviewed: https://review.openstack.org/41818
Committed: http://github.com/openstack/ceilometer/commit/1605e6ff13aef9281d48ec46fa99481545d60ab0
Submitter: Jenkins
Branch: master

commit 1605e6ff13aef9281d48ec46fa99481545d60ab0
Author: Fei Long Wang <email address hidden>
Date: Tue Aug 13 19:44:53 2013 +0800

    Fix empty metadata issue of instance

    Based on the current implementation, some metadata can't be
    extracted by the pollster, such as architecture, reservation_id,
    ephemeral_gb, etc. This patch tries to get that metadata where
    possible and removes the metadata that can't be pulled by the
    pollster.

    Fixes bug 1202749

    Change-Id: I1f08c4eaa1cfacb612097cd0e90629d682f8acc9
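
A hedged sketch of the approach the commit message describes, not the merged patch itself: fill the flavor-derived fields the pollster can resolve, and stop emitting the ones it cannot obtain. The function name is hypothetical; the flavor key names follow the standard Nova flavor fields.

def pollster_metadata(instance, flavor):
    # "flavor" is the resolved flavor record (e.g. fetched via the Nova
    # API); "instance" is the server object the pollster already has.
    return {
        'name': instance.name,
        'instance_type': flavor['id'],
        # Fields the pollster CAN resolve through the flavor:
        'memory_mb': flavor['ram'],
        'vcpus': flavor['vcpus'],
        'root_gb': flavor['disk'],
        'ephemeral_gb': flavor.get('OS-FLV-EXT-DATA:ephemeral', 0),
        # architecture, os_type and reservation_id are omitted rather
        # than reported as empty strings, since the pollster cannot
        # obtain them.
    }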

Changed in ceilometer:
status: In Progress → Fix Committed
Thierry Carrez (ttx)
Changed in ceilometer:
milestone: none → havana-3
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in ceilometer:
milestone: havana-3 → 2013.2