A progress update on this: to help clarify some issues, I've started by refactoring the current copy-increasing code. There was still a lot of code that was difficult to unit test (and sometimes lacked unit tests), so I've been breaking things down into simpler units of functionality.

I'd like to attack this bug from more than one angle, so that eventually more than one behavior can address this scenario. For example, the current downgrading behavior has two independent avenues by which files get downgraded. The quicker avenue is based on the store atime, but if that happened to fail for whatever reason, files would eventually be downgraded based on their verification timestamp. Each of these mechanisms should work in a vacuum, and should never count on another mechanism taking up the slack (illustrated in the sketch below).

One thing Dmedia doesn't do yet is speculatively create new copies based on recent access patterns. For example, if you've often been working with a certain set of files on your laptop, and those files don't yet exist on your workstation, it would be reasonable for Dmedia to create copies of those files on your workstation before you try to use them there (which would otherwise trigger an on-demand download from your laptop). Because Dmedia can quickly respond to low-drive-space events (even when the space is being used up by applications other than Dmedia), it's entirely reasonable for Dmedia to keep your drives nearly full at all times: this provides better availability between devices, and gives Dmedia the wiggle room to do whatever shuffling is needed.

So I'm thinking of this in terms of two complementary actions, one provided as a tweak to the current copy-increasing behavior, the other implemented in our new 5th "shuffling" behavior:

1) Preemptively create a 4th copy based on access patterns, assuming space is available.

2) Reactively create a 4th copy when the very specific scenario in this bug is detected.

I'm strongly leaning toward doing (1) first, because I always prefer to improve or fix existing functionality before adding new functionality. I'm also a bit leery of adding a new behavior without a lot of thought and care. The current 4 behaviors were a *long* time in the making, and they're quite foolproof because they're driven by the data model and the CouchDB view functions. It's a design that has proven extremely robust, even in the face of multiple peers all simultaneously updating the metadata and creating untold conflicts. Not only that, but these 4 behaviors all run simultaneously even on a single node, so it was no easy feat to get them interacting well together and to make sure the overall Dmedia behavior always moves in the correct direction over time.

This 5th behavior also crosses some data boundaries in a way the others don't (it must consider the available free space on multiple stores, plus the files present on each), so it's not easy to build it using the same tried-and-true patterns as the other 4. The 5th behavior is important, and I'm certain we need to add it, but it's complex new territory and should be regarded with some suspicion for a while. So I think it's more prudent to first address this bug with a modification to the existing copy-increasing behavior, where I feel it's reasonable for us to be very confident about its correctness.
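To make the "work in a vacuum" point concrete, here's a minimal sketch of two fully independent downgrade checks. The function names, thresholds, and doc layout are hypothetical, purely for illustration (they're not the actual Dmedia code):

    import time

    # Hypothetical thresholds, not the real Dmedia constants:
    DOWNGRADE_BY_ATIME = 7 * 24 * 3600      # one week, in seconds
    DOWNGRADE_BY_VERIFIED = 28 * 24 * 3600  # four weeks, in seconds

    def downgrade_by_store_atime(entry, now=None):
        # Quicker avenue: downgrade a copy whose store atime is stale.
        now = int(time.time()) if now is None else now
        return now - entry.get('atime', 0) > DOWNGRADE_BY_ATIME

    def downgrade_by_verified(entry, now=None):
        # Fallback avenue: even if the atime check failed for whatever
        # reason, the copy is eventually downgraded based on when it
        # was last verified.  Note it never consults the check above.
        now = int(time.time()) if now is None else now
        return now - entry.get('verified', 0) > DOWNGRADE_BY_VERIFIED

Each check stands alone; neither counts on the other taking up the slack.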
For the copy-increasing tweak, I'm thinking it would go something like this (roughly sketched below):

1) As currently, process the backlog of fragile files (rank 0 through 5), creating new copies up to MIN_FREE_SPACE (16 GiB) of available space on a drive; when done, get the ending update_seq from the dmedia-1 DB.

2) Create a 4th copy of files at rank=6, but this time only up to RECLAIM_BYTES (64 GiB) of available space on a drive; we probably want to time-limit this step so we move back to truly fragile files before too long.

3) Enter the event-based loop using the _changes feed, but starting at the update_seq from (1), so we pick up any new fragile files that appeared while we were doing step (2).

It's taken some refactoring to make this possible, but I'm close to done now. I like this approach because it requires no new views and is built on tried-and-true designs.
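The shape I have in mind is roughly the sketch below. To be clear, process_rank() and follow_changes() are hypothetical stand-ins for the real helpers, the rank=6 time limit is just an example value, and db.get() assumes a microfiber-style Database.get() that returns the CouchDB database info dict:

    import time

    MIN_FREE_SPACE = 16 * 1024 ** 3  # 16 GiB
    RECLAIM_BYTES = 64 * 1024 ** 3   # 64 GiB
    RANK6_TIME_LIMIT = 300           # hypothetical: at most 5 minutes on rank=6

    def copy_increasing(db, process_rank, follow_changes):
        # (1) Process the backlog of truly fragile files (rank 0
        # through 5), creating new copies up to MIN_FREE_SPACE of
        # available space, then note the ending update_seq:
        for rank in range(6):
            process_rank(rank, free_threshold=MIN_FREE_SPACE)
        update_seq = db.get()['update_seq']

        # (2) Create a 4th copy at rank=6, but only up to
        # RECLAIM_BYTES of available space, and only until the
        # deadline, so we move back to truly fragile files soon:
        deadline = time.monotonic() + RANK6_TIME_LIMIT
        process_rank(6, free_threshold=RECLAIM_BYTES, deadline=deadline)

        # (3) Enter the event-based loop on the _changes feed,
        # starting at the update_seq from (1), so fragile files that
        # appeared during (2) are picked up rather than missed:
        follow_changes(since=update_seq)

The key detail is capturing update_seq before step (2): anything that becomes fragile while we're busy with rank=6 still shows up in the _changes feed when we start from that sequence number.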