Concurrent updates to a server's metadata are handled badly

Bug #1650188 reported by Zhenyu Zheng on 2016-12-15
Affects: OpenStack Compute (nova)
Importance: Medium
Assigned to: Unassigned

Bug Description

Currently, we have two APIs to update a server's metadata:
1. update: only updates/adds the key=value pairs the user provided in this call
2. update_all: replaces all previously added key=value pairs with the key=value pairs the user provided in this call
Both use the same _update_instance_metadata method, differing only in one boolean flag:
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/server_metadata.py#n78
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/server_metadata.py#n95
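As a minimal sketch of the two semantics (hypothetical function names, plain dicts, not the actual Nova code):

```python
def update(old_meta, new_meta):
    """update: merge the user-provided pairs into the existing metadata."""
    merged = dict(old_meta)
    merged.update(new_meta)
    return merged


def update_all(old_meta, new_meta):
    """update_all: the user-provided pairs replace everything that was there."""
    return dict(new_meta)
```

So `update({'a': '1'}, {'b': '2'})` keeps both keys, while `update_all({'a': '1'}, {'b': '2'})` drops `'a'`.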

The request is then handled in the following workflow:
I. get the server object
   http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/server_metadata.py#n108
II. handled in compute/api.py update_instance_metadata:
   http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py#n3689
   the difference between update_all and update is handled here:
   i) if we are using update_all, the target metadata is set to the metadata passed in, say new_meta
   ii) if we are using update, the target metadata is set to old_meta + new_meta
III. then we simply set instance.metadata to the target metadata and call instance.save() to do the job.
IV. it is finally handled in the DB:
    http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n2694
    here the metadata read from the DB is compared with the target metadata we passed in:
    i) if we called update_all, the target metadata only contains new_meta, so key=value pairs that are in old_meta but
    not in new_meta will be deleted.
    ii) if we called update, the target metadata contains new_meta + old_meta, so old_meta will not be deleted.
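The DB-side comparison in step IV can be sketched as follows (a hypothetical helper over plain dicts, not the actual sqlalchemy/api.py code): any key present in the DB but absent from the target metadata is deleted, and the rest is upserted.

```python
def apply_target_metadata(db_meta, target_meta):
    """Sketch of step IV: make db_meta match target_meta exactly."""
    # Keys in the DB that are missing from the target get deleted.
    for key in [k for k in db_meta if k not in target_meta]:
        del db_meta[key]
    # Remaining target pairs are added or updated.
    db_meta.update(target_meta)
    return db_meta
```

This is why the correctness of the whole operation hinges on the target metadata having been built from a fresh baseline.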

The above process works well for serial use, but under concurrent use a problem can occur:

For example, suppose we have two concurrent metadata update calls, meta_A and meta_B, issued at the same time.
Since the target metadata is generated at the API level (in steps I and II above), if the two calls arrive at the
same time, the instance.metadata read by both calls will be the same (old_meta). Here comes the problem: the target
metadata for call A will be old_meta + A, the target metadata for call B will be old_meta + B, and both then go
through the rest of the process;

When they reach the DB layer in step IV, there is only one DB, so writes land first come, first served. Say call A
is handled successfully: the metadata in the DB is now old_meta + A. Then call B is handled; since its target
metadata is old_meta + B, A's changes will be removed.
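Modeled as a minimal sketch (plain dicts and a hypothetical write helper standing in for step IV, not Nova code), the lost update plays out like this:

```python
old_meta = {'base': '1'}

# Steps I-II: both requests read the same snapshot before either has written.
target_a = {**old_meta, 'a': 'A'}
target_b = {**old_meta, 'b': 'B'}

db = dict(old_meta)

def write(db_meta, target):
    """Step IV: keys not present in the target metadata are deleted."""
    for key in [k for k in db_meta if k not in target]:
        del db_meta[key]
    db_meta.update(target)

write(db, target_a)   # DB is now {'base': '1', 'a': 'A'}
write(db, target_b)   # 'a' is not in target_b, so A's update is lost
```

After both writes, `db` is `{'base': '1', 'b': 'B'}`: call A's key has silently disappeared even though both clients used the "merge" semantics of update.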

Tags: db
Changed in nova:
assignee: nobody → Zhenyu Zheng (zhengzhenyu)
Roman Podoliaka (rpodolyaka) wrote :

I'm not sure we can solve this easily without introducing optimistic concurrency control to the metadata REST API, e.g. something similar to ETags, so that the caller has to provide an ETag on update and will get 409 Conflict if a concurrent request completed first.
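The ETag-style optimistic concurrency control suggested here could look roughly like this (a hedged sketch with hypothetical names, not Nova's API): the client sends back the tag it received when it read the metadata, and a stale tag means another writer got there first, so the update is rejected instead of silently clobbering it.

```python
import hashlib
import json


class ConflictError(Exception):
    """Would map to HTTP 409 Conflict at the REST layer."""


def etag_of(meta):
    # Deterministic hash of the metadata contents serves as the ETag.
    return hashlib.sha256(json.dumps(meta, sort_keys=True).encode()).hexdigest()


def update_with_etag(store, client_etag, new_meta):
    """Apply new_meta only if the caller saw the current state."""
    if etag_of(store['meta']) != client_etag:
        raise ConflictError('metadata changed since it was read')
    store['meta'] = dict(new_meta)
    return etag_of(store['meta'])
```

The second of two concurrent writers presents an ETag computed against the pre-update state and gets a conflict, so no update is ever lost silently; the client must re-read and retry.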

Changed in nova:
status: New → Confirmed
importance: Undecided → Medium
tags: added: db

Fix proposed to branch: master
Review: https://review.openstack.org/411327

Changed in nova:
status: Confirmed → In Progress
Zhenyu Zheng (zhengzhenyu) wrote :

@Roman, what do you mean by ETag? I'm not familiar with that. I'm now fixing it by adding a UUID lock.

Matt Riedemann (mriedem) wrote :

cdent and/or jaypipes might have input on the etag idea here too:

https://en.wikipedia.org/wiki/HTTP_ETag

Or I'm wondering if we can move the lock to the database layer somehow rather than lock in the REST API.

Change abandoned by Sean Dague (<email address hidden>) on branch: master
Review: https://review.openstack.org/411327
Reason: This review is > 4 weeks without comment and is not mergeable in its current state. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and leaving a 'recheck' comment to get fresh test results.

Fix proposed to branch: master
Review: https://review.openstack.org/534682

Changed in nova:
assignee: Zhenyu Zheng (zhengzhenyu) → Yikun Jiang (yikunkero)
Yikun Jiang (yikunkero) wrote :

I tried to work around this problem, as I explained in the comments on https://review.openstack.org/#/c/534682.

The root cause of this error is:
1. we get the original meta at the API level, use it as the baseline, and apply the incremental change to it, see ref[1].
2. then we read the DB record again in the DB layer and use that record as the baseline, see ref[2].
Obviously, the baseline used for the incremental change and the baseline used in the DB are not the same, so we get the wrong result when updating concurrently.

[1] https://github.com/openstack/nova/blob/0fc702cc081bab09424f3d63f242ff8c8f1215ce/nova/compute/api.py#L4019
[2] https://github.com/openstack/nova/blob/0fc702cc081bab09424f3d63f242ff8c8f1215ce/nova/db/sqlalchemy/api.py#L2819

We tried to use the original meta or sys-meta as the baseline for the update, refreshing the object by putting the original metadata back into the instance object, but it seems SQLAlchemy cannot detect these changes. So that is NOT a way to solve this problem. :(

On the other hand, an ETag is a good way to solve the API-level problem, but it is NOT a good choice for system metadata, because system metadata can be updated directly by the compute service or the virt driver.

I think we need to add a CAS (compare-and-swap) mechanism, like [3], which raises a conflict error when the update fails.
[3] https://review.openstack.org/#/c/202593
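A version-counter CAS of the kind proposed in [3] could be sketched like this (hypothetical table and column names, using SQLite purely for illustration): the UPDATE only succeeds if the row still carries the version the writer originally read, so a concurrent update surfaces as a retryable failure instead of a silent overwrite.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE instance_metadata (id INTEGER PRIMARY KEY, meta TEXT, version INTEGER)')
conn.execute("INSERT INTO instance_metadata VALUES (1, '{}', 0)")


def cas_update(conn, row_id, expected_version, new_meta_json):
    """Write new_meta_json only if the row version is still what we read."""
    cur = conn.execute(
        'UPDATE instance_metadata SET meta = ?, version = version + 1 '
        'WHERE id = ? AND version = ?',
        (new_meta_json, row_id, expected_version))
    if cur.rowcount == 0:
        # Someone else bumped the version first: retry or return 409 upstream.
        raise RuntimeError('concurrent update detected')
```

A second writer holding the stale version number fails the WHERE clause, touches zero rows, and must re-read before retrying; this works for system metadata too, since the check lives in the DB layer rather than the REST API.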

Yikun Jiang (yikunkero) on 2018-01-22
Changed in nova:
assignee: Yikun Jiang (yikunkero) → nobody

Change abandoned by Yikun Jiang (Kero) (<email address hidden>) on branch: master
Review: https://review.openstack.org/534682

Matt Riedemann (mriedem) wrote :

Bug 1836439 is related for updating flavor extra specs concurrently.
