Reviewed: https://review.openstack.org/203109
Committed: https://git.openstack.org/cgit/openstack/ceilometer/commit/?id=2511cfb6e48c5d03cd198ecf9f09f36db3caced8
Submitter: Jenkins
Branch: master
commit 2511cfb6e48c5d03cd198ecf9f09f36db3caced8
Author: Chris Dent <email address hidden>
Date: Mon Nov 9 16:31:45 2015 +0000
A dogpile cache of gnocchi resources
What this does is store a key value pair in oslo_cache where the key
is the resource id and the value is a hash of the frozenset of
the attributes of the resource less the defined metrics[1]. When it
is time to create or update a resource we ask the cache:
Are the resource attributes I'm about to store the same as the
last ones stored for this id?
If the answer is yes we don't need to store the resource. That's all
it does and that is all it needs to do because if the cache fails
to have the correct information that's the same as the cache not
existing in the first place.
To get this to work in the face of eventlet's eager beavering we
need to lock around create_resource and update_resource so that
we have a chance to write the cache before another *_resource is
called in this process. Superficial investigation shows that this
works out pretty well because when, for example, you start a new
instance the collector will all of a sudden try several
_create_resources, only one of which actually needs to happen.
The lock makes sure only that one happens when there is just
one collector. Where there are several collectors that won't be
the case but _some_ of them will be stopped. And that's the point
here: better, not perfect.
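The check-then-write under a lock can be sketched like this; a plain threading.Lock and the helper names here are illustrative, standing in for the dispatcher's own lock and gnocchi calls.

```python
import threading

_lock = threading.Lock()  # illustrative stand-in for the dispatcher's lock

def guarded_write(cache, resource_id, attrs_hash, write_fn):
    """Check the cache and write under the lock, so the cache entry is
    recorded before another green thread repeats the same work."""
    with _lock:
        if cache.get(resource_id) == attrs_hash:
            return False   # a concurrent caller already stored this
        write_fn()         # the real create_resource/update_resource
        cache[resource_id] = attrs_hash
        return True
```

Without the lock, two green threads could both see a cache miss and both call the write; with it, the second caller sees the hash the first one recorded.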
The cache is implemented using oslo_cache which can be configured
via oslo_config with an entry such as:
[cache]
backend_argument = url:redis://localhost:6379
backend_argument = db:0
backend_argument = distributed_lock:True
backend_argument = redis_expiration_time:600
backend = dogpile.cache.redis
The cache is exercised most for resource updates (as you might
expect) but does still sometimes get engaged for resource creates
(as described above).
A cache_key_mangler is used to ensure that keys generated by the
gnocchi dispatcher are in their own namespace.
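A key mangler in dogpile.cache is just a function applied to every key before it reaches the backend. A sketch of the idea, where the namespace prefix is an assumption rather than the exact string the dispatcher uses:

```python
import hashlib

# Hypothetical key mangler: prefixes and hashes keys so the gnocchi
# dispatcher cannot collide with other users of the same backend.
def cache_key_mangler(key):
    return 'gnocchi-resources:' + hashlib.sha1(
        key.encode('utf-8')).hexdigest()
```

Hashing also keeps keys a fixed, backend-safe length regardless of what the resource ids look like.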
[1] Metrics are not included because they are represented as
sub-dicts which are not hashable and thus cannot go in the
frozenset. Since the metrics are fairly static (coming from a yaml
file near you, soon) this shouldn't be a problem. If it is then we
can come up with a way to create a hash that can deal with
sub-dicts.
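The footnote can be demonstrated directly: a dict-valued item makes the frozenset construction raise, which is why the metrics are dropped before hashing. The attribute names below are made up for illustration.

```python
# A dict value is unhashable, so frozenset(attrs.items()) fails
# unless the dict-valued 'metrics' item is excluded first.
attrs = {'flavor': 'small', 'metrics': {'cpu': {'unit': 'ns'}}}
try:
    frozenset(attrs.items())
    hashable = True
except TypeError:  # "unhashable type: 'dict'"
    hashable = False
# hashable is False; filtering out 'metrics' makes it work:
safe = frozenset((k, v) for k, v in attrs.items() if k != 'metrics')
```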
Closes-Bug: #1483634
Change-Id: I1f2da145ca87712cd2ff5b8afecf1bca0ba53788