Basically, stop trusting 'content_summary[...]' to be accurate about the committed text size.
This means that for files with content filtering, we will read them off disk, apply the filtering, and then see that nothing has changed.
I expect performance will suffer a bit, since we'll be re-reading every file that has a content filter. But at least it won't insert bogus data into the repository on every merge commit.
The associated branch has a potential fix.
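To illustrate the idea, here is a minimal sketch of the check described above. Everything here is hypothetical: `text_changed`, the filter callables, and the byte-string comparison are illustrative stand-ins, not the actual bzrlib API or the code on the branch.

```python
# Hypothetical sketch: instead of trusting content_summary for the
# committed text size, re-read the filtered file from disk, apply its
# content filters, and compare against the text already stored.

def text_changed(stored_text, disk_bytes, filters):
    """Return True if the filtered on-disk content differs from the
    stored text (illustrative helper, not the real implementation)."""
    filtered = disk_bytes
    for f in filters:
        filtered = f(filtered)
    return filtered != stored_text

# Example: a strip-trailing-whitespace filter normalizes the on-disk
# bytes back to the stored text, so nothing new gets committed.
stored = b"hello\n"
on_disk = b"hello   \n"
filters = [lambda b: b"\n".join(line.rstrip() for line in b.split(b"\n"))]
print(text_changed(stored, on_disk, filters))  # prints False
```

The point is that the comparison happens only after filtering, so a file whose raw bytes differ from the stored text can still be recognized as unchanged.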