Job reduction reduces against jobs being worked on.
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Gearman | New | Wishlist | Brian Aker |
Bug Description
Not necessarily a bug, but a semantically questionable detail.
Job reduction (using the unique id) checks against the queue. However, if a matching job is actively being worked on, the new job still gets reduced against it.
Depending on the worker's semantics, that might be troublesome. Example:
* We transfer a DB primary key as the job parameter to a worker that reads the row and updates a cache.
* We set unique='-'.
* The worker has gotten the job and read the data, but has not yet written to the cache...
* ...when a table update happens, causing a new job to be issued with the same parameters.
* The second job gets reduced because its unique matches.
* However, the cache now contains stale data.
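The race above can be sketched in a few lines of Python. This is an illustrative model of the coalescing behavior described here, not gearmand's actual code; the `submit`/`grab` names and the two dicts are assumptions for the sketch.

```python
# Sketch of the race: dedup by unique key matches queued AND in-progress jobs.
queued = {}       # unique -> payload, waiting for a worker
in_progress = {}  # unique -> payload, currently held by a worker

def submit(unique, payload):
    """Current semantics: reduce against both queued and active jobs."""
    if unique in queued or unique in in_progress:
        return "coalesced"          # the new submission is silently dropped
    queued[unique] = payload
    return "queued"

def grab(unique):
    """A worker takes a job off the queue."""
    in_progress[unique] = queued.pop(unique)

# 1. A row update enqueues a cache-refresh job.
submit("-", {"pk": 42})
# 2. A worker grabs it and reads the (old) row, but hasn't written the cache yet.
grab("-")
# 3. The row changes again; the re-issued job is coalesced against the active one...
result = submit("-", {"pk": 42})
# ...so nothing runs after the worker's stale read, and the cache stays stale.
```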
We could set a unique that is a hash of the data, so it would differ between updates, but computing that hash would be expensive and cause user-facing slowdowns, which is exactly why we use gearmand to do the heavy lifting asynchronously.
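For concreteness, the rejected alternative would look roughly like this (illustrative only; the caller would still have to read and serialize the row in the request path, which is the expensive part):

```python
import hashlib

def content_unique(row_bytes):
    # A content hash as the unique key: it differs whenever the row data
    # differs, so a stale job never blocks a fresh one. The cost is that
    # the client must fetch and serialize the row synchronously, which is
    # precisely the work gearmand was meant to offload.
    return hashlib.sha1(row_bytes).hexdigest()
```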
Instead, I propose this change, which optionally lets you run gearmand in a mode where job reduction is only done against jobs in the queue, and NOT against ones already held by workers.
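The proposed mode, continuing the same toy model (names and the `reduce_active` flag are illustrative, not gearmand's API):

```python
# Sketch of the proposed mode: reduce only against queued jobs, ignoring
# jobs that a worker already holds.
queued = {}
in_progress = {}

def submit(unique, payload, reduce_active=False):
    if unique in queued:
        return "coalesced"          # still coalesce identical queued jobs
    if reduce_active and unique in in_progress:
        return "coalesced"          # old behavior, now optional
    queued[unique] = payload
    return "queued"

def grab(unique):
    in_progress[unique] = queued.pop(unique)

submit("-", {"pk": 42})
grab("-")
# With reduce_active off, the re-issued job is accepted; it will re-read the
# row after the second update, so the cache converges to fresh data.
result = submit("-", {"pk": 42}, reduce_active=False)
```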
Patch attached and at http://
Changed in gearmand:
status: New → Fix Committed
status: Fix Committed → New
One quick note, are you aware that '-' has a special value as a unique?