This can still cause problems.
When the return codes are [204, 404, 404] we currently return 404:
https://github.com/openstack/swift/blob/bcd0eb70afacae1483e9e53d5a4082536770aed8/test/unit/proxy/controllers/test_obj.py#L308
However, with a 2 region 3 replica geo cluster using write affinity - there are always some partitions where two out of three replicas are on handoffs.
That cluster topology isn't ideal for a number of reasons, but we *could* do better at the proxy later.
idea #1) use timestamps (or the lack thereof) to distinguish whether the 204 response "missed" an earlier delete - e.g.
[204 ts2, 404 ts1, 404 ts1] => 404
[204 ts2, 404 ?, 404 ?] => 204
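A minimal sketch of idea #1 (the function name `pick_delete_status` and the `(status, timestamp)` pair shape are invented for illustration - the real proxy's response objects look different):

```python
def pick_delete_status(responses):
    """Resolve a DELETE's final status from (status, timestamp) pairs.

    A 404 that carries a timestamp saw an earlier tombstone, so the
    object was already deleted; a 404 with no timestamp (None) likely
    just missed the object, e.g. a handoff, so the 204 should win.
    """
    statuses = [s for s, _ in responses]
    if 204 not in statuses:
        return 404
    # Any timestamped 404 means a tombstone predates this DELETE:
    # the object was already gone, so report 404.
    if any(s == 404 and ts is not None for s, ts in responses):
        return 404
    # All 404s are "blank" - trust the 204.
    return 204
```

On the examples above: `[204 ts2, 404 ts1, 404 ts1]` resolves to 404, while `[204 ts2, 404 ?, 404 ?]` resolves to 204.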
idea #2) if *any* response is 204, make additional DELETE requests to handoffs - i.e.
[404, 404, 404] => 404
[404, 404, 204] => *second round to replica count handoffs*
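A rough sketch of idea #2, with the second round modeled as an injected callable (`send_delete_to_handoff`, `resolve_with_handoffs`, and `replica_count` as a plain int are all assumptions standing in for the proxy's real request machinery):

```python
def resolve_with_handoffs(primary_statuses, replica_count,
                          send_delete_to_handoff):
    """If any primary returned 204 but some missed, fire a second
    round of DELETEs at replica-count handoffs so tombstones land
    somewhere durable, then report the 204."""
    if 204 not in primary_statuses:
        return 404
    if all(s == 204 for s in primary_statuses):
        return 204
    # Mixed results: best-effort second round to handoffs.
    for i in range(replica_count):
        send_delete_to_handoff(i)
    return 204
```

In the common all-204 (or all-404) case no extra requests are made, which is the property we'd want to preserve.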
The second idea could also help when we have full disks with expired error limiting:
[204, 507, 507] => *second round to throw tombstones on handoffs, which has a fair chance to reap .data files and free up space*
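The 507 case could share idea #2's trigger - a hypothetical predicate (name and exact status set are assumptions) that decides when the second round is worth it:

```python
def needs_handoff_round(statuses):
    """Trigger the handoff round whenever a 204 is mixed with
    404s (missed replicas) or 507s (full disks), so tombstones
    land on handoffs either way."""
    return 204 in statuses and any(s in (404, 507) for s in statuses)
```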
/me shrugs
A combination of both might be good - if we can find an elegant way to model it that isn't overly complicated and doesn't cause annoying additional requests in the common case.