Comment 11 for bug 1626675

Zane Bitter (zaneb) wrote:

I ran a test creating approximately 150 nested stacks - 25 in parallel at the first level, and 6 in series at the next level, with each stack taking ~10s - with a large (~800KB) files dict. I had hoped that this could form the basis of an automated test that we could use to (a) bisect the repo now, and (b) perhaps incorporate in the gate in the future.
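For reference, the shape of that test could be sketched as a pair of HOT templates. This is an illustrative reconstruction, not the actual test templates; the file names and resource names are hypothetical:

```yaml
# parent.yaml - 25 nested stacks created in parallel via a ResourceGroup
heat_template_version: 2016-04-08
resources:
  parallel:
    type: OS::Heat::ResourceGroup
    properties:
      count: 25
      resource_def:
        type: chain.yaml   # each group member is itself a nested stack

# chain.yaml - 6 nested stacks in series, ordered with depends_on
# (each leaf stack contains a resource taking ~10s to create)
heat_template_version: 2016-04-08
resources:
  step_1: {type: leaf.yaml}
  step_2: {type: leaf.yaml, depends_on: step_1}
  # ... and so on through step_6
```

The large files dict would then come from the nested template files themselves plus whatever extra files are attached to the stack.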

The results were as you might hope - with the patch https://git.openstack.org/cgit/openstack/heat/commit/?id=fef94d0d7366576313883d9cfd59d775ea9e9907 the increase in memory is only around 16MB per worker during stack create, vs. more like 145MB immediately prior to that patch. The results on the newton-rc2 build are comparable, so unfortunately whatever regression has occurred is not triggered by my test.
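For anyone wanting to reproduce the measurement: the per-worker figures come from comparing the resident set size of the engine workers before and after the operation. A rough way to sample that (assuming the worker processes are named heat-engine, and dividing the growth by the worker count afterwards) is:

```shell
# Sum the resident-set size (KiB) of all processes matching a name.
# Sample once before the stack create and once after; the difference
# is the total growth across all workers.
rss_total() {
  ps -o rss= -C "$1" | awk '{s += $1} END {print s + 0}'
}

rss_total heat-engine
```

This prints 0 when no matching process is running, so it is safe to call at any point.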

I can try adding features to exercise in the test, but there'd be a lot of guesswork involved. I think our best chance at narrowing this down is probably to bisect the Heat repo testing against tripleo-heat-templates from the Mitaka release or early Newton. It's also possible that some change in tripleo-heat-templates could have triggered a large increase in memory use - if that proves to be the case (which should be obvious from the first test of the old templates against newton-rc Heat) then it'd be the reverse: bisect t-h-t running on newton-rc Heat until we find what made it jump.
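If it does come down to bisecting, `git bisect run` can automate the whole search given a check script that exits non-zero when the regression is present (for Heat, that script would deploy the t-h-t templates and fail if memory growth exceeds some threshold; no such script exists yet, so it is hypothetical here). The mechanics, demonstrated on a throwaway repo where the "regression" is faked as a marker file reaching 4:

```shell
# Self-contained demo of `git bisect run`; in practice the repo would be
# openstack/heat (or tripleo-heat-templates) and the check command would
# be the memory-threshold script described above.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email bisect@example.com
git config user.name bisect
for i in 1 2 3 4 5; do
  echo "$i" > marker
  git add marker
  git commit -qm "commit $i"
done
git bisect start HEAD HEAD~4              # bad = newest, good = oldest
git bisect run sh -c 'test "$(cat marker)" -lt 4'
```

Bisect tests the middle commits automatically and reports "commit 4" as the first bad commit; with ~6 months of Heat history the same approach needs only on the order of 10 test runs.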

(Incidentally, for completeness I also tried the same test with convergence enabled, and the increase was slightly higher - around 23MB - but not vastly so. Also, the delete phase appears to pile on a higher memory increase than the create phase - it grows the memory by an additional 91MB for the legacy path and 63MB for the convergence path. This might be partly because deletion happens faster: the deepest nested resources take ~10s to create but not to delete.)