Make chunk size configurable for swift-backed glance
Bug #1436647 reported by
Jordan Callicoat
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack-Ansible | Fix Released | Medium | Andy McCrae | |
| Icehouse | Fix Released | Medium | Andy McCrae | |
| Juno | Fix Released | Medium | Andy McCrae | |
| Trunk | Fix Released | Medium | Andy McCrae | |
Bug Description
We are currently hard-coding the chunk size to 200MB when glance is backed by swift. The result is that customers with large images get a ton of segments created for each image. Once the number of objects gets into the thousands, you notice lag on swift operations. Once it gets into the tens of thousands, the impact becomes very noticeable: timeouts booting instances, etc. I propose making the chunk size configurable, and using the same 5G (5120M) chunk size used by Cloud Files as the default value.
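For reference, a minimal sketch of what such an override could look like in glance's configuration. The option names below are the large-object settings from glance's swift store driver; the section they live in varies by release (older releases used `[DEFAULT]`), so verify both against your installed version.

```ini
# glance-api.conf -- sketch only; check option names/section for your release.
[glance_store]
# Objects larger than this size (in MB) are uploaded as segments.
swift_store_large_object_size = 5120
# Size (in MB) of each segment when an image is segmented.
swift_store_large_object_chunk_size = 5120
```

With a 5120M chunk size, a 10G image would produce two segments instead of the fifty-plus created at 200M.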
Changed in openstack-ansible: | |
status: | Triaged → In Progress |
I've added the 2 large object settings as ansible variables so they can be overridden if you want.

I haven't changed the default chunk size, though. The chunk is buffered in memory, so I'm not sure that adjusting the default is a sensible idea. If your host has a large amount of memory then it may be fine, but I don't think we should mandate that as a one-size-fits-all best value.

Additionally, if swift is performing poorly, that should probably be investigated; swift shouldn't have an issue handling large numbers of objects.
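An override for those two variables would then be a one-liner in the deployment's user variables. The comment above does not name the variables, so the names below are illustrative only and should be checked against the openstack-ansible glance role defaults:

```yaml
# /etc/openstack_deploy/user_variables.yml -- sketch; variable names are
# hypothetical and must match the glance role's actual defaults.
glance_swift_store_large_object_size: 5120
glance_swift_store_large_object_chunk_size: 5120
```

Keeping the override in user variables (rather than editing role defaults) means it survives upgrades of the openstack-ansible repository.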