[6.0] osv.memory objects may be created with wrong field values for dynamic fields in some cases
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Odoo Server (MOVED TO GITHUB) | Fix Released | Medium | Olivier Dony (Odoo) | |
Bug Description
Hello,
this is on trunk server rev #2635 using addons rev #4734
osv.memory objects are not properly written from the client into memory, and are also read wrongly when there are several instances of the same osv.memory class. This is very annoying, as it leads to totally wrong behavior when, for instance, the same action is performed twice in the same server session.
A simple way to reproduce the bug is with the sale and stock modules plus demo data:
1) Create a new sale order, add one product line with, say, product "ATX" and 12 items, then validate the order.
2) Go to the related expedition picking and force the expedition.
3) Click "return picking" on the right side of the expedition picking to return products, and specify that you want to return only 1 product.
Check that you get a return with only 1 product; the first time, it works fine.
Now do the exact same process again without restarting the server: this time your return picking will have 12 products, not just 1, and you'll also get a strange popup after you do the picking.
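For context, the return-picking wizard builds one quantity field per stock move, with names derived from the move ids (these are the 'return133'/'return135' keys visible in the server logs further down). The following is only an illustrative sketch of that dynamic-field pattern, assuming a helper named build_dynamic_fields; it is not the actual addon code.

```python
# Hypothetical sketch: build one 'return<move_id>' float field per stock
# move, plus the fixed 'invoice_state' field. Names mirror the keys seen
# in the logs below ('return133', 'return135'); the helper is illustrative.

def build_dynamic_fields(move_ids):
    """Return a field-definition dict with one 'return<id>' float per move."""
    fields = {'invoice_state': {'type': 'selection'}}
    for move_id in move_ids:
        fields['return%d' % move_id] = {'type': 'float'}
    return fields

# The first wizard run sees move 133; a later run sees move 135, so the
# two runs define *different* dynamic fields on the same osv.memory model.
print(build_dynamic_fields([133]))  # keys: invoice_state, return133
print(build_dynamic_fields([135]))  # keys: invoice_state, return135
```

Because the field set changes from one wizard run to the next, any state shared between runs can leak a stale 'return<id>' key into a later record, which matches the symptom described below.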
What is happening?
To find out, I instrumented stock/wizard/ a bit.
Here is my instrumentation patch:
=== modified file 'stock/
--- stock/wizard/
+++ stock/wizard/
@@ -148,6 +148,11 @@
date_cur = time.strftime(
move_ids = [m.id for m in [line for line in pick.move_lines]]
+ print "self", self
+ print "ids", ids
+ print "picking", pick.id, pick.name
+ print "move_ids", move_ids
+ print "self.read(cr, uid, ids[0])", data
for move in move_obj.browse(cr, uid, move_ids):
if not new_picking:
Now let's run the described scenario again while watching the logs:
===== 1st iteration:
client-side XML/RPC:
DEBUG_RPC:
...
server side:
self <stock.
ids [1]
picking 94 OUT/00032
move_ids [133]
self.read(cr, uid, ids[0]) {'return133': 1.0, 'id': 1, 'invoice_state': 'none'}
===== 2nd iteration:
client-side XML/RPC:
DEBUG_RPC:
...
server side:
self <stock.
ids [2]
picking 96 OUT/00033
move_ids [135]
self.read(cr, uid, ids[0]) {'return133': False, 'invoice_state': 'none', 'id': 2, 'return135': 12.0}
So as you can see, my second picking (id 96) has only one move (id 135). On the GTK client side, things are all right.
But when doing self.read(cr, uid, [2]), the server reads totally wrong values: it remembers the 'return133' key (which is now False), and 'return135' is left at 12.0 when it should have been set to 1.0, just like in the previous iteration.
So it looks like create/write/read behave wrongly when there are several instances of an osv.memory model. This leads to critical bugs, as you can't do a return here, for instance!
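To illustrate the kind of shared-state bug that could produce exactly these symptoms, here is a minimal, self-contained sketch: a mutable mapping stored on the class (rather than per record) makes every instance share and pollute the same values. This is only an analogy for the observed behavior, not the actual server code.

```python
class BuggyMemoryModel:
    # BUG: this dict lives on the *class*, so every instance shares (and
    # pollutes) the same mapping -- analogous to dynamic field values
    # leaking between osv.memory records across wizard runs.
    _values = {}

    def write(self, values):
        self._values.update(values)

    def read(self):
        return dict(self._values)

w1 = BuggyMemoryModel()
w1.write({'return133': 1.0})   # first wizard run
w2 = BuggyMemoryModel()
w2.write({'return135': 12.0})  # second wizard run
print(w2.read())  # {'return133': 1.0, 'return135': 12.0} -- stale key leaks in
```

The fix for such a bug is to initialize the mapping in `__init__` (per instance / per record) instead of on the class, which matches the eventual bug summary: records "created with wrong field values for dynamic fields".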
Hope this helps, again this is critical.
Changed in openobject-server:
milestone: none → 6.0

Changed in openobject-server:
assignee: nobody → Olivier Dony (OpenERP) (odo)

summary: [6.0] osv.memory objects critically broken when several instances → [6.0] osv.memory objects may write/create wrong field values for dynamically created fields

Changed in openobject-server:
status: New → In Progress
importance: High → Medium

summary: [6.0] osv.memory objects may write/create wrong field values for dynamically created fields → [6.0] osv.memory objects may be created with wrong field values for dynamic fields in some cases

Changed in openobject-server:
status: In Progress → Fix Released
I can confirm this bug for 5.2dev (yeah, it is that old).
When we were testing/developing the account_chart_update module and the pxgo_account_admin_tools module (extra-addons), we made them compatible with 5.2dev back then, and we noticed some problems like this one: https://bugs.launchpad.net/openobject-client/+bug/586252
But aside from that bug, we also noticed that, for wizards that created osv_memory records linked to the main wizard, running the wizard again would show duplicated/unexpected records. To prevent this from happening, we had to redefine the search method of those osv_memory objects and filter the records by the correct 'wizard_id' by hand.
I don't know if the problem is in the way the ORM maps/filters the osv_memory records, or whether it fails to clean some parameters (remember the context={} problem?), or something like that, but in the end osv_memory wizards didn't work properly on 5.2dev and still look half-broken on 6.0dev.
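The workaround mentioned above (re-filtering by 'wizard_id' in an overridden search) can be sketched as follows. This is a runnable, self-contained analogue using a toy in-memory model; the class and field names are illustrative, not the actual module's code.

```python
class MemoryModel(object):
    """Toy in-memory model whose search() returns every record id --
    mimicking the buggy behavior where one wizard run could see
    another run's line records."""
    def __init__(self):
        self._records = {}
        self._next_id = 1

    def create(self, values):
        rec_id, self._next_id = self._next_id, self._next_id + 1
        self._records[rec_id] = dict(values, id=rec_id)
        return rec_id

    def search(self):
        return sorted(self._records)

class WizardLine(MemoryModel):
    """Workaround: re-filter the results by 'wizard_id' by hand, so a
    wizard only ever sees its own line records."""
    def search(self, wizard_id=None):
        ids = super(WizardLine, self).search()
        if wizard_id is None:
            return ids
        return [i for i in ids
                if self._records[i]['wizard_id'] == wizard_id]

lines = WizardLine()
lines.create({'wizard_id': 1, 'return_qty': 1.0})   # first run's line
lines.create({'wizard_id': 2, 'return_qty': 12.0})  # second run's line
print(lines.search(wizard_id=2))  # -> [2], only the second wizard's line
```

In the real 5.2dev/6.0 addon code this would be an override of osv_memory's search() adding a `('wizard_id', '=', ...)` domain term, but the filtering idea is the same.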