ptr record set fails with "Duplicate Record" when ptrdname is not changed

Bug #1930411 reported by Tim Evers

Affects: Designate
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

If test.name.invalid. is already set as ptrdname for RegionOne:dace4205-c7a5-4c40-ad6a-25727460924b, the call to

openstack --os-project-id=cbda1cca966f46bc05a657afg2fe18e ptr record set RegionOne:dace4205-c7a5-4c40-ad6a-25727460924b test.name.invalid. --ttl 4800

fails with

"Duplicate Record"

(RESP BODY: {"code": 409, "type": "duplicate_record", "message": "Duplicate Record", "request_id": "req-ba1d83db-4541-4ceb-bfba-192628938c94"})

As far as I could chase it down, this comes from designate-central, which tries to create a record in a database table that has a unique key on hash = md5(record.recordset_id + record.data). Since neither of those values changes, we get the conflict while updating the DB.
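
To make the collision concrete, a minimal sketch (the recordset id below is made up; only the shape of the hash matters):

import hashlib

def record_hash(recordset_id, data):
    # unique key described above: hash = md5(record.recordset_id + record.data)
    return hashlib.md5((recordset_id + data).encode('utf-8')).hexdigest()

# neither input changes when the same ptrdname is set again
old = record_hash('11111111-2222-3333-4444-555555555555', 'test.name.invalid.')
new = record_hash('11111111-2222-3333-4444-555555555555', 'test.name.invalid.')
assert old == new  # same hash -> unique key violation -> 409 "Duplicate Record"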

The old records are supposed to be removed before the new one is created in central/service.py:_set_floatingip_reverse. However, because this removal happens inside a transaction, it is not visible to the API that the worker calls in turn to update/delete the record status after actually removing it, so the old record stays visible to the process creating the new record.
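
To illustrate the visibility problem in isolation (generic SQLAlchemy against a toy table, not actual Designate code): a DELETE that is still inside an uncommitted transaction on one connection is not visible from a second connection, which therefore still sees the old row.

from sqlalchemy import create_engine, text

engine = create_engine('sqlite:///ptr_demo.db')

with engine.begin() as setup:
    setup.execute(text("DROP TABLE IF EXISTS records"))
    setup.execute(text("CREATE TABLE records (hash TEXT UNIQUE, data TEXT)"))
    setup.execute(text("INSERT INTO records VALUES ('abc', 'test.name.invalid.')"))

with engine.connect() as central, engine.connect() as worker:
    central.execute(text("DELETE FROM records WHERE hash = 'abc'"))
    # 'central' has not committed yet ...
    rows = worker.execute(text("SELECT * FROM records WHERE hash = 'abc'")).fetchall()
    print(rows)  # the old row is still visible here, so inserting another
                 # record with the same hash from this side is a duplicate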

It seems to me that the transaction around update_floatingip() needs to be committed once the old records are removed, so that this change becomes visible to the worker updating the status.
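
Sketched with the same toy table as above (again generic SQLAlchemy, not Designate code), the suggested ordering would be: commit the removal in its own transaction first, then create the replacement record.

from sqlalchemy import create_engine, text

engine = create_engine('sqlite:///ptr_demo.db')  # same toy DB as in the sketch above

# first transaction: remove the old record and commit
with engine.begin() as conn:
    conn.execute(text("DELETE FROM records WHERE hash = 'abc'"))

# the delete is now committed and visible to every other connection,
# so re-creating a record with the same hash no longer conflicts
with engine.begin() as conn:
    conn.execute(text("INSERT INTO records VALUES ('abc', 'test.name.invalid.')"))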
