Container DBs don't get VACUUMed
Bug #1541149 reported by A.G. Nienhuis
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Object Storage (swift) | Confirmed | Undecided | Paul Dardeau |
Bug Description
After deleting many objects, the DB file size remains large. Running VACUUM reclaims the space:
```
# cp $LARGE_CONTAINER ~/test.db
# du -sh test.db
629M    test.db
# sqlite3 test.db VACUUM
# du -sh test.db
1.2M    test.db
```
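The same effect is easy to reproduce in isolation with SQLite's pragmas. The sketch below uses a made-up `object` table as a stand-in (not Swift's actual container schema): deleted pages land on the freelist and the file keeps its size until a VACUUM rewrites it.

```python
import os
import sqlite3
import tempfile

# Standalone demo: "object" is a stand-in table, not Swift's real
# container schema.
path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE object (name TEXT, payload TEXT)")
conn.executemany("INSERT INTO object VALUES (?, ?)",
                 (("obj-%d" % i, "x" * 500) for i in range(20000)))
conn.commit()
size_before = os.path.getsize(path)

conn.execute("DELETE FROM object")
conn.commit()
# The deleted pages sit on the freelist; the file has not shrunk.
free_pages = conn.execute("PRAGMA freelist_count").fetchone()[0]
size_after_delete = os.path.getsize(path)

conn.execute("VACUUM")  # rewrites the file, returning free pages to the OS
size_after_vacuum = os.path.getsize(path)
```

Comparing `PRAGMA freelist_count` against `PRAGMA page_count` is also a cheap way to estimate how much a given container DB would shrink before committing to a full VACUUM.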
Changed in swift:
assignee: nobody → Paul Dardeau (paul-dardeau)
In my operational experience, containers that *used* to have a lot of data in them tend to get filled back up at some point.
My understanding is that once SQLite has free pages in a drained database, the file won't start growing again until all of that un-vacuumed space has been reused - we should double-check that this still holds for newer versions of SQLite.
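That reuse behavior can be checked directly with the freelist pragmas. A minimal sketch (again with a stand-in `object` table, not Swift's schema): after a mass delete the freelist is large, and re-inserting the same rows pulls pages back off the freelist rather than growing the file.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "reuse.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE object (name TEXT, payload TEXT)")
rows = [("obj-%d" % i, "x" * 500) for i in range(10000)]
conn.executemany("INSERT INTO object VALUES (?, ?)", rows)
conn.commit()
peak_pages = conn.execute("PRAGMA page_count").fetchone()[0]

conn.execute("DELETE FROM object")
conn.commit()
freed_pages = conn.execute("PRAGMA freelist_count").fetchone()[0]

# Re-inserting pulls pages back off the freelist instead of extending
# the file, so page_count should not exceed its previous peak.
conn.executemany("INSERT INTO object VALUES (?, ?)", rows)
conn.commit()
reused_free = conn.execute("PRAGMA freelist_count").fetchone()[0]
final_pages = conn.execute("PRAGMA page_count").fetchone()[0]
```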
Anyway, this is definitely known behavior of the container storage servers. If we could figure out a way to reclaim space without a bunch of extra load, or any availability risk to the client or the consistency engine while the VACUUM is running - that'd be pretty freaking sweet.
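One possible direction for bounding that load: SQLite's incremental auto-vacuum returns free pages to the OS in small batches, so each step's I/O (and lock hold time) is capped, unlike a monolithic VACUUM. Caveats: it doesn't defragment, and the pragma must be set before the first table is created (or applied via a one-time full VACUUM). A sketch under those assumptions, with a stand-in schema:

```python
import sqlite3
import tempfile
import os

path = os.path.join(tempfile.mkdtemp(), "incr.db")
conn = sqlite3.connect(path)
# Must be set before the first table exists (or applied via a full VACUUM).
conn.execute("PRAGMA auto_vacuum = INCREMENTAL")
conn.execute("CREATE TABLE object (name TEXT, payload TEXT)")
conn.executemany("INSERT INTO object VALUES (?, ?)",
                 (("obj-%d" % i, "x" * 500) for i in range(10000)))
conn.commit()

conn.execute("DELETE FROM object")
conn.commit()
free_before = conn.execute("PRAGMA freelist_count").fetchone()[0]

# Reclaim at most 200 free pages per call; small batches bound the
# per-step I/O instead of rewriting the whole file at once.
while conn.execute("PRAGMA freelist_count").fetchone()[0] > 0:
    conn.execute("PRAGMA incremental_vacuum(200)").fetchall()
remaining = conn.execute("PRAGMA freelist_count").fetchone()[0]
```

In principle a replicator or auditor pass could interleave such batches with normal request handling, though whether that's acceptable for existing containers (which would need a one-time VACUUM to switch modes) is an open question.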
There's some chance we could make progress during replication by avoiding an rsync of database files that are half full of empty space?