Hi, Mistral team.
There are some problems with the caching of workflow specs when we have multiple instances of Mistral Engine.
For example, we execute a workflow with 25 serial tasks. With only one Mistral Engine, the workflow spec is transformed into a Python object only once. But with, for example, 10 Mistral Engines, the workflow spec is transformed into a Python object 10 times! https://github.com/openstack/mistral/blob/master/mistral/lang/parser.py#L214
Each transformation takes 1 to 1.5 seconds, which adds roughly 12 seconds to the execution time of every workflow.
There are some possible solutions:
* To use a distributed cache
* To serialize wf_v2.WorkflowSpec to the database as a BLOB
* To bind the execution of a workflow to one Mistral Engine
* To cache by workflow id instead of execution id. But that doesn't help in my case: I generate a new workflow for every execution.
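A rough sketch of the BLOB-serialization option, combined with keying the cache by the definition's content hash so that identically generated workflows share one entry (all names here are hypothetical, not Mistral's actual API or schema):

```python
import hashlib
import pickle

def spec_cache_key(definition_text):
    # Key by the definition's content hash rather than execution id,
    # so generated-but-identical workflows share one cache entry.
    return hashlib.sha256(definition_text.encode("utf-8")).hexdigest()

def serialize_spec(spec_obj):
    # Could be stored in a BLOB column next to the workflow row
    # (hypothetical schema); unpickling is far cheaper than
    # re-parsing the YAML definition on every engine.
    return pickle.dumps(spec_obj)

def deserialize_spec(blob):
    return pickle.loads(blob)

spec = {"name": "my_wf", "tasks": ["t1", "t2"]}  # stand-in spec object
blob = serialize_spec(spec)
assert deserialize_spec(blob) == spec
```

Pickling real spec objects across engine versions has its own pitfalls (class layout changes break old blobs), which is part of the trade-off.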
Vitalii, this is not a bug. It's a trade-off we settled on: between using a distributed cache (which has plenty of downsides of its own) and not caching at all.
A spec will be parsed 10 times, yes, but only once per engine, which is OK. For big workflows the cache still helps: it warms up quickly on any operation an engine performs (such as "on_action_complete"), and the rest of the workflow runs faster.