Comment 2 for bug 1934142

OpenStack Infra (hudson-openstack) wrote: Fix merged to swift (master)

Reviewed: https://review.opendev.org/c/openstack/swift/+/798849
Committed: https://opendev.org/openstack/swift/commit/574897ae275ce94257096c56a9bdc494bc0a39ba
Submitter: "Zuul (22348)"
Branch: master

commit 574897ae275ce94257096c56a9bdc494bc0a39ba
Author: Alistair Coles <email address hidden>
Date: Wed Jun 30 14:05:23 2021 +0100

    relinker: tolerate existing tombstone with same timestamp

    It is possible for the current and next part power locations to
    both have existing tombstones with different inodes when the
    relinker tries to relink. This can be caused, for example, by
    concurrent reconciler DELETEs that specify the same timestamp.
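
    To make the failure mode concrete, here is a minimal sketch (the
    directory layout and timestamp below are hypothetical, not taken
    from the change) of how two tombstones with the same
    timestamp-derived filename but different inodes defeat a hard link:

        import errno
        import os
        import tempfile

        base = tempfile.mkdtemp()
        # Stand-ins for an object's hash dir in the current and next
        # part power locations.
        cur_loc = os.path.join(base, 'objects', '1024')
        next_loc = os.path.join(base, 'objects', '2048')
        os.makedirs(cur_loc)
        os.makedirs(next_loc)

        # Concurrent DELETEs with the same timestamp leave a tombstone
        # in each location; the names match but the inodes differ.
        ts_name = '1625058323.00000.ts'
        open(os.path.join(cur_loc, ts_name), 'w').close()
        open(os.path.join(next_loc, ts_name), 'w').close()

        try:
            os.link(os.path.join(cur_loc, ts_name),
                    os.path.join(next_loc, ts_name))
        except OSError as err:
            assert err.errno == errno.EEXIST  # what the relinker hits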

    The relinker previously failed to relink and reported an error when
    encountering this situation. With this patch, the relinker will
    tolerate an existing tombstone with the same filename but a
    different inode in the next part power location.

    Since [1], the relinker has had special-case handling for EEXIST
    errors caused by a tombstone with a different inode already
    existing in the next part power location: the relinker would check
    whether the existing next part power tombstone linked to a
    tombstone in a previous part power (i.e. < current part power)
    location, and if so tolerate the EEXIST.

    This special-case handling is no longer necessary because the
    relinker now tolerates an EEXIST when linking a tombstone, provided
    that the two files have the same timestamp. There is therefore no
    need to search previous part power locations for a tombstone that
    is linked with the one in the next part power location.
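
    A minimal sketch of the new tolerance (the helper name and checks
    here are illustrative, not Swift's actual implementation):

        import errno
        import os

        def relink_tombstone(target_path, new_target_path):
            try:
                os.link(target_path, new_target_path)
            except OSError as err:
                if err.errno != errno.EEXIST:
                    raise
                if os.path.samefile(target_path, new_target_path):
                    return  # already hard-linked; nothing to do
                if not new_target_path.endswith('.ts'):
                    raise  # only tombstones are interchangeable
                # The filename encodes the timestamp, so an existing
                # tombstone with the same name is equivalent even
                # though its inode differs; treat the link as done.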

    The link_check_limit is no longer used, but the --link-check-limit
    command-line option is still accepted (although ignored) for
    backward compatibility.
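
    A sketch of how such a retired option can remain parseable (the
    parser wiring here is illustrative, not the relinker's actual
    code):

        import argparse

        parser = argparse.ArgumentParser(prog='swift-object-relinker')
        # Still accepted so existing invocations do not break, but the
        # parsed value is never consulted.
        parser.add_argument('--link-check-limit', type=int,
                            help=argparse.SUPPRESS)
        args = parser.parse_args(['--link-check-limit', '2'])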

    [1] Related-Change-Id: If9beb9efabdad64e81d92708f862146d5fafb16c

    Change-Id: I07ffee3b4ba6c7ff6c206beaf6b8f746fe365c2b
    Closes-Bug: #1934142