Some thoughts as to how this bug might be addressed (I don't like all of these options, if any; I'm just writing them down for the record):

1. Sync what we have on disk regardless of its "application level" state. This means ssync needs to be able to open a disk file that has expired, so that capability would be added to diskfile. Then the ssync sender can replicate the expired diskfile to the receiver. There are at least two problems with this: (a) the receiving object server will reject an attempt to PUT an object with a delete-at older than the present time, and (b) in the case of EC sync jobs, the ssync sender will be unable to reconstruct a frag for an expired object because other nodes will not serve up their expired fragments to the reconstructor. We would need object server support to selectively GET and PUT expired objects. Plus we would sync potentially large data content that will never be served to a client, which seems wasteful. (This might get more feasible if the only thing needing to be sync'd is metadata: in the EC case we shouldn't need to GET other frags, and we'd just need the receiving POST to be forced to accept the older x-delete-at.)

(1a. To avoid moving large amounts of data unnecessarily, PUT an empty object to the receiver in order to achieve hash consistency. This has the same blockers as (1), i.e. we can't GET or PUT expired objects.)

(1) seems like a lot of work/change to replicate data that clients will never read.

2. ssync performs expiry deletion: ssync_sender, on getting a DiskFileExpired during send_put, gets the delete-at time from the object metadata and sends a DELETE to the receiver with the delete-at time and an X-If-Delete-At header, effectively doing the job of the expirer (which is why I dislike this option: we'd end up duplicating expirer logic in ssync). This attempt to expire the remote object will cause a 412 for the DELETE subrequest with X-If-Delete-At if the receiving node has a diskfile with a different x-delete-at (including an older x-delete-at!), which is quite possible given that it's out of sync but not necessarily missing altogether. That's bad because it will contribute to the ssync receiver reporting failure for the entire ssync job, and possibly terminating early. Then ssync would keep retrying the same thing, until... IDK, forever?? :/ The ssync sender could also delete the local object file for good measure, but whilst that makes the sender and receiver consistent, it creates inconsistency with other nodes. That inconsistency would be fixed by subsequent expirer or replication activity, so maybe that's ok. Note that the object server does NOT delete an object that has expired during GET handling, so there is no precedent for a process other than the expirer performing deletion of expired objects. Maybe there is a good reason not to do that?? Barring any gotchas with ssync doing the deletion, (2) might be the safest option in terms of always progressing towards the ultimately correct consistent state. A rough sketch follows below.
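To make (2) concrete, here is a rough sketch, not a patch: the send() callable stands in for the sender's real subrequest plumbing (invented for illustration), and DiskFileExpired is assumed to carry the expired object's metadata, as it does when diskfile raises it:

    # Sketch of option (2): ssync does the expirer's job remotely.
    from swift.common.exceptions import DiskFileExpired

    def expire_remote(send, url_path, metadata):
        # Stamp the DELETE with the object's x-delete-at and guard it
        # with X-If-Delete-At so we only remove the generation we know
        # has expired. If the receiver holds a diskfile with a
        # *different* x-delete-at, this subrequest gets a 412 and the
        # whole ssync job is reported as failed - the retry-forever
        # problem described above.
        delete_at = metadata['X-Delete-At']
        send('DELETE %s\r\n'
             'X-Timestamp: %s\r\n'
             'X-If-Delete-At: %s\r\n' % (url_path, delete_at, delete_at))

    # From the sender's updates() loop, roughly:
    #     try:
    #         df.open()
    #     except DiskFileExpired as err:
    #         expire_remote(send, url_path, err.metadata)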
3. Ignore the expired object for sync, i.e. do not sync anything for the expired object and assume that the expirer will eventually expire all replicas/fragments, after which ssync will be able to sync a tombstone for the object and bring it to consistency. In the meantime, suffix hashes will remain inconsistent between nodes and at least one node will be in a stale state, and durability of state is temporarily compromised (arguably not a big deal for an expired object, unless we lose all replicas of a newer but expired object and an older, non-expiring replica remains somewhere). (3) is obviously the simplest "solution". A sketch follows below.
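For comparison, a minimal sketch of (3) in the sender's updates() loop; names are simplified from swift/obj/ssync_sender.py (send_list, device, etc. are abbreviations of the real job plumbing, and send_put's signature varies between Swift versions):

    # Sketch of option (3): skip expired objects entirely and leave
    # the cleanup to the expirer.
    from swift.common.exceptions import DiskFileExpired

    for object_hash, url_path in send_list:
        try:
            df = self.df_mgr.get_diskfile_from_hash(
                device, partition, object_hash, policy)
            df.open()
        except DiskFileExpired:
            # Send nothing for this object: the expirer should
            # eventually remove every replica/fragment, after which a
            # tombstone can be sync'd normally. Until then this suffix
            # hashes differently on each node, so replication keeps
            # re-examining it.
            continue
        self.send_put(url_path, df)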