Asset issue with multiple horizon instances
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack-Ansible | Won't Fix | Critical | Matt Thompson |
Juno | Won't Fix | Critical | Unassigned |
Trunk | Won't Fix | Critical | Matt Thompson |
Bug Description
On a customer deployment, we have 5 horizon containers across 5 nodes. In the horizon_common role we run manage.py collectstatic --noinput followed by manage.py compress --force. Since the assets should be identical across all of the machines, the compressor *should* generate files with exactly the same names. We've run into instances where that's not the case: if the page is rendered by the horizon app on node0 but the assets are served from node2, and node2's assets are named differently, the page appears unstyled because the browser receives a 404 when it tries to retrieve them.
The current in-production solution is to set COMPRESS_OFFLINE = True [1] in local_settings.py. This, however, may not be an appropriate solution, since this setting will prevent django_compressor from dynamically creating missing asset files [2]. The problem we're seeing may also be an upstream one [3]. Also, for what it's worth, it seems that Horizon [4] sets COMPRESS_CSS_HASHING_METHOD to 'hash'.
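For reference, the workaround looks something like the following in local_settings.py (a minimal sketch; COMPRESS_OFFLINE is the documented django_compressor setting, and the caveat is the one described in [2]):

```python
# local_settings.py -- sketch of the current in-production workaround

# Serve only assets pre-generated at deploy time by 'manage.py compress
# --force' instead of compressing on the fly during requests.
COMPRESS_OFFLINE = True

# Caveat per [2]: with offline compression enabled, django_compressor will
# not dynamically create a missing asset file, so every container must end
# up with an identical set of generated files or requests will 404.
```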
There are a few problems I can think of:
- mtime is involved in the hashing of the files, so the asset names will differ if there is an mtime delta between nodes
- the files themselves are different, and the response is served from a different container than the one the assets are served from
- we're running the commands in the wrong order (we should be running compress, then collectstatic; read the linked bugs for others doing this)
- we should be setting COMPRESS_OFFLINE = True in our local_settings.py to prevent re-compression on the fly
- we should be setting a documented value for COMPRESS_CSS_HASHING_METHOD (see the sketch after this list)
Which of these possibilities (or combination thereof) is actually causing this problem is unclear at the moment.
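If the hashing method turns out to be the culprit, the corresponding fix would be to pin it to the documented 'content' value. A minimal sketch (the setting name is taken from the Horizon defaults discussed below):

```python
# local_settings.py -- sketch

# Derive compressed CSS filenames from a hash of file *contents* rather
# than mtimes, so containers holding byte-identical assets also produce
# identically-named assets regardless of when the files were written.
COMPRESS_CSS_HASHING_METHOD = 'content'
```

'content' hashes what is in the file, so byte-identical assets yield identical names on every node; 'mtime' folds the modification time into the hash, and that differs between independently built containers.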
[1]: http://
[2]: https://
[3]: https://
[4]: https://
[5]: http://
Changed in openstack-ansible:
assignee: nobody → Matt Thompson (mattt416)
tags: added: juno-backport-potential
Horizon defaults COMPRESS_CSS_HASHING_METHOD to 'hash' (not documented, but it is treated the same as 'content'):
https://github.com/openstack/horizon/blob/stable/icehouse/openstack_dashboard/settings.py#L145
https://github.com/openstack/horizon/blob/stable/juno/openstack_dashboard/settings.py#L212
In my lab running os-ansible-deployment master (deployed 15/12/2014), I uninstalled all horizon-related pip packages, removed /usr/local/lib/python2.7/dist-packages/static, and re-ran 'horizon-manage.py collectstatic --noinput; horizon-manage.py compress --force' on a single container. I then verified that all files recreated in /usr/local/lib/python2.7/dist-packages/static were identical to those in another horizon container. When I changed COMPRESS_CSS_HASHING_METHOD to 'mtime' and re-ran the above, the contents of /usr/local/lib/python2.7/dist-packages/static were no longer identical.
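For anyone wanting to reproduce that comparison, here is a minimal sketch; the script is mine rather than part of the deployment, and it assumes you have copied each container's static tree somewhere reachable (e.g. via rsync):

```python
#!/usr/bin/env python
# Compare two copies of /usr/local/lib/python2.7/dist-packages/static
# fetched from different horizon containers.
import hashlib
import os
import sys

def tree_digest(root):
    """Map each relative file path under root to a sha256 of its contents."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, 'rb') as f:
                digests[os.path.relpath(path, root)] = \
                    hashlib.sha256(f.read()).hexdigest()
    return digests

if __name__ == '__main__':
    left, right = sys.argv[1], sys.argv[2]
    dl, dr = tree_digest(left), tree_digest(right)
    # Filenames present on only one node are the symptom described above:
    # the compressor embedded a different hash in the name on each node.
    for rel in sorted(set(dl) ^ set(dr)):
        print('only on one node: %s' % rel)
    for rel in sorted(p for p in set(dl) & set(dr) if dl[p] != dr[p]):
        print('content differs: %s' % rel)
```

Run it as 'python compare_static.py static_node0 static_node2'; identical trees print nothing.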
Regarding the order in which we run collectstatic/compress, I believe this should be fine. There is actually a pending review [1] to add some documentation to horizon clarifying how this should be done, and the order it outlines matches the order in which we do it.
[1]: https://review.openstack.org/#/c/141885/