Enforce uniqueness of flow/atom details uuids/keys
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
taskflow | Confirmed | High | Timofey Durakov |
Bug Description
In the current *persistence* backends there is an implied rule that flow/atom details objects have unique uuids and should never conflict on these with any other flow/atom details. Having an implied rule is not the best approach; we should have checks in place that raise exceptions when a duplicate uuid is saved, so that this kind of conflict blows up at run-time/save-time. This makes the rule explicit and validated, reducing bugs and issues and making the persistence behaviors that much more predictable.
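As a rough illustration of the idea (the class and method names here are assumptions for the sketch, not taskflow's real API), a backend's save path could make the uniqueness rule explicit by raising at save-time instead of silently overwriting:

```python
# Hypothetical sketch: an in-memory backend that raises when a flow
# detail with a duplicate uuid is saved, rather than clobbering the
# existing entry. Names are illustrative, not taskflow's actual API.

class DuplicateKeyError(Exception):
    """Raised when a flow/atom detail uuid already exists."""


class InMemoryBackend(object):
    def __init__(self):
        self._flow_details = {}

    def save_flow_detail(self, flow_detail_uuid, data):
        # Make the implied uniqueness rule explicit: blow up at
        # save-time when a conflicting uuid is found.
        if flow_detail_uuid in self._flow_details:
            raise DuplicateKeyError(
                "Flow detail with uuid %r already exists" % flow_detail_uuid)
        self._flow_details[flow_detail_uuid] = data
```

With a check like this, a duplicate save fails loudly at the point of the conflict instead of surfacing later as mysteriously corrupted/overwritten details.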
Some solutions/ideas:
-> Rename uuid -> key (since it's really a unique 'key') [this can be a later change if needed]
-> Have backends able to influence what is a 'valid' key/uuid (but have all implementations raise errors when duplicates are found)
-> Have backends validate uniqueness *before* saving any results (this is more applicable to backends that are not transactional, such as the filesystem backend; the sqlalchemy and zookeeper backends use transactions, so checking later should be fine, since the transaction will not be committed until all validations have occurred).
Changed in taskflow:
importance: Undecided → High

Changed in taskflow:
status: New → Confirmed

Changed in taskflow:
assignee: nobody → Timofey Durakov (tdurakov)