Redis garbage collection

Bug #797749 reported by Lars Butler
This bug affects 1 person
Affects Status Importance Assigned to Milestone
OpenQuake (deprecated)
Fix Released
Lars Butler

Bug Description

Jobs should clean up after themselves when they are complete. If the Redis holds on to too much historical data, the KVS can become really slow, which affects the performance of the OQ engine as a whole.

Changed in openquake:
importance: Undecided → High
status: New → Confirmed
description: updated
Changed in openquake:
assignee: nobody → Lars Butler (lars-butler)
Revision history for this message
Lars Butler (lars-butler) wrote :

I did some initial investigation and brainstorming for this bug. Here's what I came up with:

1) Key generation functions

There are 3 functions which generate various types of keys for a 'job':

generate_site_key(job_id, block_id)
generate_product_key(job_id, product, block_id="", site="")

These are the keys that we care about the most when it comes to garbage collection in Redis.

2) 'Garbage collected' decorator

I think that if we can capture these keys in a list somewhere, we can ask Redis to delete each key in the list when the job is complete.

One way I thought of capturing these keys is by adding a @garbage_collected decorator to each of the 3 functions mentioned above. For example:

@garbage_collected
def generate_site_key(job_id, block_id):
    ...

Assuming that job_id will always be the first argument for these 3 key generation functions, the @garbage_collected decorator would do a 'rpush JOB_KEYS!<job_id> <generated_key>', where <job_id> is the value of the job_id arg and <generated_key> is the value returned by the decorated function. Thus, each time any kind of job key is generated, that key will be stored in a list (in Redis).

This doesn't guarantee that we keep track of ALL job data, but we should capture at least 99%. There is a risk that we might miss something, but what I've proposed here is--I think--a good first step.
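A minimal sketch of the decorator idea, assuming job_id is always the first positional argument. FakeRedis and the key format here are illustrative stand-ins for testing the logic, not the engine's real client or key scheme:

```python
import functools

class FakeRedis:
    """Minimal in-memory stand-in for a redis client (illustration only)."""
    def __init__(self):
        self.lists = {}
    def rpush(self, key, value):
        self.lists.setdefault(key, []).append(value)
        return len(self.lists[key])
    def lrange(self, key, start, end):
        vals = self.lists.get(key, [])
        return vals[start:] if end == -1 else vals[start:end + 1]

client = FakeRedis()

def garbage_collected(func):
    """Record every key the wrapped function generates in a per-job list."""
    @functools.wraps(func)
    def wrapper(job_id, *args, **kwargs):
        key = func(job_id, *args, **kwargs)
        # Equivalent of: rpush JOB_KEYS!<job_id> <generated_key>
        client.rpush('JOB_KEYS!%s' % job_id, key)
        return key
    return wrapper

@garbage_collected
def generate_site_key(job_id, block_id):
    # Illustrative key format only.
    return '%s!%s' % (job_id, block_id)
```

With a real redis-py client, rpush/lrange would hit the store directly; the decorator itself is unchanged.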

3) Doing the garbage collection

When a job is finished, we should be able to call a KVS gc() function to clean everything up. Here's how I think this can be implemented:

In openquake.kvs.__init__:

def gc(job_id):
    # Keys for this job are stored in the KVS under 'JOB_KEYS!<job_id>';
    # `client` is assumed to be the module's Redis connection.
    job_key = 'JOB_KEYS!%s' % job_id
    keys = client.lrange(job_key, 0, -1)
    # Delete each key, then the list itself; we don't need it anymore.
    for key in keys:
        client.delete(key)
    client.delete(job_key)
    # Return the number of items deleted.
    return len(keys)

Revision history for this message
Lars Butler (lars-butler) wrote :

Open item: This could be difficult to test and may require some clever mocking of the OQ engine.

Revision history for this message
Gabriele Favalessa (favalex) wrote :

I trust your analysis of which are the most important keys to gc and I like
the approach with decorators you suggested.

I looked at the redis documentation and I couldn't find a way of partitioning the
key space that could avoid keeping track by hand of the keys belonging to a
particular job (the docs mention not to use 'KEYS' for that purpose though).

Your solution of using a redis list to keep note of the keys is reasonable (the
docs seem to suggest using a set, but I don't think we will generate duplicate
keys and I don't see any other reason for using sets).

Revision history for this message
Lars Butler (lars-butler) wrote :

I just had a discussion about this with Muharem. Here's what we talked about:

There are a few issues with my initial solution. For starters, my solution for reducing Redis bloat actually contributes (if only temporarily) to Redis bloat. I think technically my solution works, but there has to be a better way.

One suggestion was to make the Redis garbage collection an asynchronous process (which is either invoked 1 time by the OQ engine when a job completes, or runs periodically as a daemon). This also involves refactoring our key generation scheme a bit; instead of generating GUIDs to guarantee uniqueness, we could simply have a 'next key' function (kind of like a SQL autoincrement). We would use 'incr' to handle this. In our Redis store, we would simply have to store a single integer under the key "JOB_KEY"--or something like that--and each time we needed a new job ID we would increment and grab the new value.
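The 'next key' idea could be sketched like this. FakeRedis is an in-memory stand-in for illustration; with redis-py the same call would be client.incr('JOB_KEY') against a real connection:

```python
class FakeRedis:
    """In-memory stand-in for a redis client (illustration only)."""
    def __init__(self):
        self.counters = {}
    def incr(self, key):
        # Redis INCR atomically increments an integer key and returns
        # the new value, much like a SQL autoincrement column.
        self.counters[key] = self.counters.get(key, 0) + 1
        return self.counters[key]

def next_job_id(client):
    return client.incr('JOB_KEY')

client = FakeRedis()
```

Because INCR is atomic on the server, concurrent engine processes can safely share the one "JOB_KEY" counter without extra locking.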

With this solution we might also have to store a list of 'completed job IDs', which the GC daemon can query so it knows what to clean up. If we can guarantee uniqueness of job ID tokens through convention in our software (perhaps by surrounding the job ID integer with || || or something), we can simply query Redis to get all keys matching a wildcard.
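The delimiter convention could look something like this; fnmatch stands in for Redis's glob-style key matching here, and the key names are purely illustrative:

```python
import fnmatch

def job_key(job_id, suffix):
    # Surround the integer job ID with delimiters so a wildcard for
    # job 1 can't accidentally match keys belonging to job 10.
    return '||%d||%s' % (job_id, suffix)

keys = [job_key(1, 'sites'), job_key(10, 'sites'), job_key(1, 'results')]
# Glob-match all keys for job 1 only.
job1_keys = fnmatch.filter(keys, '||1||*')
```

Without the closing delimiter, the pattern '1*' would match job 10's keys as well, which is exactly the ambiguity the convention avoids.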

I like the idea of running the garbage collection asynchronously. However, I would argue that this is a crucial piece of 'job completion' and therefore could conceivably be part of the OQ engine. (But it doesn't have to be.)

If this is implemented as a separate process/daemon, one part about this I don't like is that it will require additional system configuration for an OpenQuake software deployment (for example, we may have to set up a cron job or something equivalent).

Revision history for this message
Lars Butler (lars-butler) wrote :

Today I thought of another way we could approach this.

I don't really like the idea of having to maintain a cron configuration for a separate redis GC daemon. Other than that, I like the concept of having a separate dedicated process for this. So, it's possible we may be able to accomplish the same thing with a scheduled celery task (which would be loosely managed by the OQ engine). I haven't used scheduled tasks much in celery but I'll do some research to see if it's an appropriate solution.

Revision history for this message
Lars Butler (lars-butler) wrote :

Or it could even be a single task that is invoked when the engine completes a calculation. This would satisfy the requirement of asynchronous-ness (so the engine can just finish, write results to the db, and let a remote worker do the GC).

Changed in openquake:
status: Confirmed → In Progress
milestone: none → 0.4.1
Changed in openquake:
milestone: 0.4.1 → none
tags: added: current performance
tags: added: current-cycle
removed: current
Revision history for this message
Mattia Barbon (mattia.barbon) wrote :

Note (you probably know that, but just in case): if switching to integer job keys, the key match code in openquake/hazard/ (and elsewhere) needs to be stricter.

John Tarter (toh2)
Changed in openquake:
milestone: none → 0.4.3
Changed in openquake:
milestone: 0.4.3 → 0.4.1
tags: added: nosql
removed: current-cycle
tags: added: sys-quality
Revision history for this message
Lars Butler (lars-butler) wrote :

I have submitted two patches for this: <-- landed in master <-- still in review

Once 323 lands, this bug will be resolved.

Changed in openquake:
status: In Progress → Fix Committed
Changed in openquake:
status: Fix Committed → Fix Released