Horizon randomly fails to connect to the service APIs

Bug #1624791 reported by Paulo Matias
This bug affects 10 people
Affects                         Status         Importance   Assigned to             Milestone
OpenStack Dashboard (Horizon)   Invalid        Undecided    Unassigned
OpenStack-Ansible               Fix Released   High         Jean-Philippe Evrard

Bug Description

This started occurring after upgrading to RC1. Before that, I was using Horizon's e7b4bdfe5d576766b34bf00cea3dcbcb42436420 plus cherry-picked changesets I2db4218e7351e0017a7a74114be6ac7af803476c and Idb58cebefab747f204e54ea6350db0852aec60f5.

Running nova, cinder, etc. from the CLI seems to work perfectly.

However, Horizon randomly fails with "bad handshake: SysCallError(0, None)" when connecting to other services, including Keystone.

The symptoms are:

* Login is intermittent; it randomly succeeds or fails.

* Even after being lucky enough to log in, dashboards randomly fail to display information.

I tried to isolate the problem and see whether one of my 3 infra nodes was the culprit, without success.

I also tried to destroy and reconstruct the horizon and keystone containers multiple times without success.

I haven't yet tried downgrading Horizon to the e7b4bdfe5d576766b34bf00cea3dcbcb42436420 commit; I will try it ASAP and report the results here.

[Sun Sep 18 01:42:52.285253 2016] [wsgi:error] [pid 6470:tid 139771996858112] Traceback (most recent call last):
[Sun Sep 18 01:42:52.285256 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/tables.py", line 389, in allowed
[Sun Sep 18 01:42:52.285259 2016] [wsgi:error] [pid 6470:tid 139771996858112] limits = api.nova.tenant_absolute_limits(request, reserved=True)
[Sun Sep 18 01:42:52.285261 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py", line 949, in tenant_absolute_limits
[Sun Sep 18 01:42:52.285264 2016] [wsgi:error] [pid 6470:tid 139771996858112] limits = novaclient(request).limits.get(reserved=reserved).absolute
[Sun Sep 18 01:42:52.285266 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../novaclient/v2/limits.py", line 100, in get
[Sun Sep 18 01:42:52.285269 2016] [wsgi:error] [pid 6470:tid 139771996858112] return self._get("/limits%s" % query_string, "limits")
[Sun Sep 18 01:42:52.285271 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../novaclient/base.py", line 346, in _get
[Sun Sep 18 01:42:52.285273 2016] [wsgi:error] [pid 6470:tid 139771996858112] resp, body = self.api.client.get(url)
[Sun Sep 18 01:42:52.285276 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../novaclient/client.py", line 480, in get
[Sun Sep 18 01:42:52.285278 2016] [wsgi:error] [pid 6470:tid 139771996858112] return self._cs_request(url, 'GET', **kwargs)
[Sun Sep 18 01:42:52.285280 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../novaclient/client.py", line 458, in _cs_request
[Sun Sep 18 01:42:52.285282 2016] [wsgi:error] [pid 6470:tid 139771996858112] resp, body = self._time_request(url, method, **kwargs)
[Sun Sep 18 01:42:52.285285 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../novaclient/client.py", line 431, in _time_request
[Sun Sep 18 01:42:52.285287 2016] [wsgi:error] [pid 6470:tid 139771996858112] resp, body = self.request(url, method, **kwargs)
[Sun Sep 18 01:42:52.285289 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../novaclient/client.py", line 396, in request
[Sun Sep 18 01:42:52.285299 2016] [wsgi:error] [pid 6470:tid 139771996858112] **kwargs)
[Sun Sep 18 01:42:52.285303 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../requests/api.py", line 56, in request
[Sun Sep 18 01:42:52.285307 2016] [wsgi:error] [pid 6470:tid 139771996858112] return session.request(method=method, url=url, **kwargs)
[Sun Sep 18 01:42:52.285310 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../requests/sessions.py", line 475, in request
[Sun Sep 18 01:42:52.285313 2016] [wsgi:error] [pid 6470:tid 139771996858112] resp = self.send(prep, **send_kwargs)
[Sun Sep 18 01:42:52.285317 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../requests/sessions.py", line 596, in send
[Sun Sep 18 01:42:52.285320 2016] [wsgi:error] [pid 6470:tid 139771996858112] r = adapter.send(request, **kwargs)
[Sun Sep 18 01:42:52.285324 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../requests/adapters.py", line 497, in send
[Sun Sep 18 01:42:52.285328 2016] [wsgi:error] [pid 6470:tid 139771996858112] raise SSLError(e, request=request)
[Sun Sep 18 01:42:52.285330 2016] [wsgi:error] [pid 6470:tid 139771996858112] SSLError: ('bad handshake: SysCallError(0, None)',)
[Sun Sep 18 01:42:52.308704 2016] [wsgi:error] [pid 6470:tid 139771996858112] Unable to retrieve project list.
[Sun Sep 18 01:42:52.308729 2016] [wsgi:error] [pid 6470:tid 139771996858112] Traceback (most recent call last):
[Sun Sep 18 01:42:52.308734 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_auth/user.py", line 318, in authorized_tenants
[Sun Sep 18 01:42:52.308738 2016] [wsgi:error] [pid 6470:tid 139771996858112] is_federated=self.is_federated)
[Sun Sep 18 01:42:52.308742 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_auth/utils.py", line 365, in get_project_list
[Sun Sep 18 01:42:52.308747 2016] [wsgi:error] [pid 6470:tid 139771996858112] projects = client.projects.list(user=kwargs.get('user_id'))
[Sun Sep 18 01:42:52.308751 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../positional/__init__.py", line 101, in inner
[Sun Sep 18 01:42:52.308755 2016] [wsgi:error] [pid 6470:tid 139771996858112] return wrapped(*args, **kwargs)
[Sun Sep 18 01:42:52.308759 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/v3/projects.py", line 119, in list
[Sun Sep 18 01:42:52.308763 2016] [wsgi:error] [pid 6470:tid 139771996858112] **kwargs)
[Sun Sep 18 01:42:52.308767 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/base.py", line 74, in func
[Sun Sep 18 01:42:52.308771 2016] [wsgi:error] [pid 6470:tid 139771996858112] return f(*args, **new_kwargs)
[Sun Sep 18 01:42:52.308774 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/base.py", line 386, in list
[Sun Sep 18 01:42:52.308778 2016] [wsgi:error] [pid 6470:tid 139771996858112] self.collection_key)
[Sun Sep 18 01:42:52.308782 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/base.py", line 124, in _list
[Sun Sep 18 01:42:52.308797 2016] [wsgi:error] [pid 6470:tid 139771996858112] resp, body = self.client.get(url, **kwargs)
[Sun Sep 18 01:42:52.308802 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneauth1/adapter.py", line 187, in get
[Sun Sep 18 01:42:52.308806 2016] [wsgi:error] [pid 6470:tid 139771996858112] return self.request(url, 'GET', **kwargs)
[Sun Sep 18 01:42:52.308810 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneauth1/adapter.py", line 344, in request
[Sun Sep 18 01:42:52.308814 2016] [wsgi:error] [pid 6470:tid 139771996858112] resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
[Sun Sep 18 01:42:52.308817 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneauth1/adapter.py", line 112, in request
[Sun Sep 18 01:42:52.308822 2016] [wsgi:error] [pid 6470:tid 139771996858112] return self.session.request(url, method, **kwargs)
[Sun Sep 18 01:42:52.308825 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../positional/__init__.py", line 101, in inner
[Sun Sep 18 01:42:52.308829 2016] [wsgi:error] [pid 6470:tid 139771996858112] return wrapped(*args, **kwargs)
[Sun Sep 18 01:42:52.308833 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneauth1/session.py", line 555, in request
[Sun Sep 18 01:42:52.308837 2016] [wsgi:error] [pid 6470:tid 139771996858112] resp = send(**kwargs)
[Sun Sep 18 01:42:52.308840 2016] [wsgi:error] [pid 6470:tid 139771996858112] File "/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneauth1/session.py", line 593, in _send_request
[Sun Sep 18 01:42:52.308844 2016] [wsgi:error] [pid 6470:tid 139771996858112] raise exceptions.SSLError(msg)
[Sun Sep 18 01:42:52.308848 2016] [wsgi:error] [pid 6470:tid 139771996858112] SSLError: SSL exception connecting to https://cloud.ufscar.br:5000/v3/users/f1a04d6876eb4252b78c01059ca557bd/projects: ('bad handshake: SysCallError(0, None)',)

Revision history for this message
Paulo Matias (paulo-matias) wrote :

Downgrading Horizon to e7b4bdfe5d576766b34bf00cea3dcbcb42436420 does not help. Maybe it is an issue with an upper constraint which might have been updated recently?

Revision history for this message
Paulo Matias (paulo-matias) wrote :

The issue is fixed when we remove pyopenssl from the horizon containers:

ansible -i inventory -m shell -a 'apt remove -y python-openssl; service apache2 restart' horizon_all

Root cause: https://github.com/shazow/urllib3/issues/367

This issue has been open since 2014 without a fix.

See also: https://github.com/kennethreitz/requests/issues/2543 and http://stackoverflow.com/a/38236543
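
For context, a minimal sketch of why removing the package matters (assuming the requests version shipped in the Horizon venv still bundles its own urllib3): when pyOpenSSL is installed, requests/urllib3 swap it in for the standard library ssl module, and the injection can be reversed per process like this:

    # Hedged sketch, not a tested fix: reverse the pyOpenSSL injection so
    # this interpreter falls back to the stdlib ssl implementation.
    from requests.packages.urllib3.contrib import pyopenssl

    pyopenssl.extract_from_urllib3()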

Revision history for this message
Paulo Matias (paulo-matias) wrote :

Sorry, I pasted the wrong command for removing pyopenssl. If it is removed via apt, pyOpenSSL-16.1.0 remains installed and Horizon keeps suffering from the issue. Therefore, one needs to remove pyopenssl with pip:

ansible -i inventory -m shell -a 'pip uninstall -y pyopenssl; service apache2 restart' horizon_all

Revision history for this message
Paulo Matias (paulo-matias) wrote :

With pyOpenSSL still installed in the container, here is a dissected pcap dump of the Horizon container trying to contact Keystone without success: http://paste.openstack.org/show/581191/

Please note that the connection is closed by the client (Horizon) side just after the 3-way TCP handshake completes. It is Horizon (Src: 10.0.3.121) that sends the first FIN/ACK (Frame 4).

Compare with a successful connection to Keystone: http://paste.openstack.org/show/581196/

When a successful connection occurs, Frame 4 is a TLS Client Hello sent by Horizon.

In short: closing the connection is clearly a decision of the client (Horizon) side. It is not a decision of HAProxy nor of Keystone.

Jesse suggested removing a ``horizon_endpoint_type: publicURL`` override which I had in my user vars. After doing this, dashboards accessing their respective service APIs seem to have become more reliable, but logging in to Horizon is still unreliable. This is because Horizon still uses the public endpoint to contact Keystone when the user logs in, although the config now contains ``OPENSTACK_ENDPOINT_TYPE = 'internalURL'``.
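
For reference, my understanding of the two settings involved (a hypothetical local_settings.py excerpt; the hostname is a placeholder): OPENSTACK_ENDPOINT_TYPE governs the endpoint Horizon uses for service API calls, while the login path resolves Keystone through OPENSTACK_KEYSTONE_URL, so login traffic can still leave via the public endpoint:

    # Hypothetical excerpt from Horizon's local_settings.py.
    OPENSTACK_ENDPOINT_TYPE = 'internalURL'
    # Login authenticates against this URL regardless of the endpoint type
    # above, so a public URL here keeps that traffic on the NAT'd path.
    OPENSTACK_KEYSTONE_URL = "https://cloud.example.org:5000/v3"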

The reason why the client sends the FIN/ACK more frequently when contacting a public IP address is still not very clear to me. Maybe this behaviour is related to the LXC NAT network it has to use to reach a public IP address. But then, why does it not happen when using Python's native SSL implementation in place of pyOpenSSL?

Revision history for this message
Jean-Philippe Evrard (jean-philippe-evrard) wrote :

So, just to be clear 10.0.3.121 is the IP of one of your horizon containers, not the VIP for the haproxy on the internal side, right?

I could see a few things going wrong, including haproxy backend selection and hairpinning.

But first, I agree with Jesse: having horizon directly target the VIP on the internal side seems better. This would make your system more reliable. Then proper deterministic load balancing could be the solution for you.

BUT with all of this blob of text above, I wonder why you happen to have problems now.
This is what we need to focus on.

Could you tell me whether you are using haproxy, how your LB backend balancing is defined, and which IP 10.0.3.121 is, please?

Thank you.

Revision history for this message
Paulo Matias (paulo-matias) wrote :

> So, just to be clear 10.0.3.121 is the IP of one of your horizon containers, not the VIP for the haproxy on the internal side, right?

Yes, 10.0.3.121 is the IP of the eth0 (LXC NAT) interface of one of the horizon containers.

> I could see a few things going wrong, including haproxy backend selection and hairpinning.

What confuses me is that the client has chosen to end the connection. I would understand this as indicating an external network problem only if the client was waiting for the server to answer and a timeout occurred. But from the second capture, we see that the client was the one expected to send the next packet (the TLS Client Hello).

> But first, I agree with Jesse: having horizon directly target the VIP on the internal side seems better. This would make your system more reliable. Then proper deterministic load balancing could be the solution for you.

I also agree. Another advantage is that by targeting the VIP on the internal side, we avoid doing NAT.

> Could you tell me whether you are using haproxy, how your LB backend balancing is defined, and which IP 10.0.3.121 is, please?

I'm using haproxy installed by the OSA playbook. Excerpts from the config are here: http://paste.openstack.org/show/582291/

Since Horizon does not obey my request to use internal endpoints when connecting to Keystone, the path followed by packets should look like this:

Horizon container -> [eth0 (10.0.3.x)] -> Physical host --NAT--> [br-ex (200.136.213.x)] -> Switch -> [br-ex (200.136.213.5)] -> Haproxy -> [br-mgmt (172.29.235.x)] -> Switch -> [br-mgmt (172.29.235.x)] -> Physical host --bridge--> [eth1 (172.29.235.x)] -> Keystone container

Yes, it is very suboptimal. If we could get Horizon to use the internal VIP, we would avoid NAT completely.

If I understand correctly, the hairpinning hypothesis could be tested by shutting down the horizon container hosted on the 200.136.213.5 machine. I already tried leaving only a single horizon container running at a time, and it didn't solve the issue, so it probably isn't related to hairpinning?

Thank you very much for taking the time to analyse this issue.

Revision history for this message
Jean-Philippe Evrard (jean-philippe-evrard) wrote :

Hello Paulo Matias,

It looks like a configuration issue. Would you mind dropping your variables/o_u_c files somewhere?

Thank you in advance.

tags: added: newton-rc-potential
Changed in openstack-ansible:
importance: Undecided → High
Revision history for this message
Alexandra Settle (alexandra-settle) wrote :

Awaiting confirmation.

Changed in openstack-ansible:
status: New → Incomplete
Revision history for this message
Alexandra Settle (alexandra-settle) wrote :

Requiring more info to proceed with confirmation, marking Incomplete.

Revision history for this message
Paulo Matias (paulo-matias) wrote :

I had the following settings in my user variables; they were originally put there because they were recommended by https://gist.github.com/odyssey4me/4af6a759b7ce1a4df9b36df412f57f0a#file-user_variables-yml-L38

---
# Horizon configuration
horizon_keystone_endpoint: "{{ keystone_service_publicurl }}/v3"
horizon_keystone_host: "{{ horizon_server_name }}"
---

The problem is that this causes traffic from Horizon to Keystone to pass through the LXC NAT, which causes the same issues as ``horizon_endpoint_type: publicURL`` did.

Removing these appears to have solved the problem. However, we should take note that if we observe further strange behaviour with the OpenStack SSL clients in the future, we should check whether pyOpenSSL is the culprit.

Evrard, I can still share our entire /etc/openstack_deploy with you. It is split across a few files, so I think it is easier to give you a link to a tgz or even read access to our git repository. I will ping you on IRC to ask about that.

Revision history for this message
Jean-Philippe Evrard (jean-philippe-evrard) wrote :

After analysis with Paulo Matias, we've decided to drop this issue.
Here, removing the horizon_endpoint_type: publicURL override from the user_variables fixed the issue.

Changed in openstack-ansible:
status: Incomplete → Won't Fix
Revision history for this message
Jean-Philippe Evrard (jean-philippe-evrard) wrote :

This issue can be re-opened if any other user sees the same behaviour with Horizon configured with public URLs + keepalived + haproxy. Please drop a message here.

Revision history for this message
György Szombathelyi (gyurco) wrote :

Just a note: downgrading python-cryptography and pyopenssl works.

Details:
https://github.com/DoclerLabs/openstack/wiki

Revision history for this message
Chris Martin (6-chris-z) wrote :

Affects me!
- Newton/stable pulled on Wednesday 11/2, commit 75c1384d2738cfb992064385747fca5dc23ca90d
- Greenfield deployment against three physical infrastructure hosts
- HAProxy + keepalived
- Public APIs use HTTPS
- Keystone admin API is a public URL that uses HTTPS (just like the public APIs)
- Internal APIs are left alone

It seems that Horizon only has trouble making calls to the keystone admin API, i.e. only certain pages in the dashboard trigger the bug (e.g. /identity), and only intermittently. Horizon connects to (non-SSL) internal APIs for everything else, which do not trigger the bug.

Workaround: I downgraded pyOpenSSL and Cryptography in Horizon containers to the Mitaka versions:
`ansible -i inventory -m shell -a 'pip install pyopenssl==0.15.1 cryptography==1.2.3 --ignore-installed --isolated; service apache2 restart' horizon_all`

(Newton installs pyOpenSSL 16.1.0 and cryptography 1.5. Mitaka installs pyOpenSSL 0.15.1 and Cryptography 1.2.3.)

This seems to have resolved the error; no hiccups from Horizon yet.

It would be nice if Horizon could just talk to the keystone internal API, but apparently the keystone admin API on port 35357 allows operations that the internal API on port 5000 does not [0] -- so for the moment, Horizon must share the keystone admin API with my other external application that also needs to access it, i.e. Horizon must connect to the external HTTPS endpoint.

[0] https://ask.openstack.org/en/question/67846/difference-between-keystone-port-5000-and-35357/

Revision history for this message
Chris Martin (6-chris-z) wrote :

I should note that the Ansible command for the above workaround results in a segfault on the targets, but if you run it twice, it will reinstall the pip packages. `¯\_(ツ)_/¯`

Revision history for this message
Dr. Jens Harbott (j-harbott) wrote :

We get the same issue when deploying Newton via Ubuntu UCA. Affected package versions are:

python-openssl=16.1.0-1~cloud0
python-cryptography=1.5-2~cloud0

The error goes away if, similarly to the Ansible workaround, we downgrade to the packages from Xenial:

python-openssl=0.15.1-2build1
python-cryptography=1.2.3-1

Still not sure whether this issue could/should have a workaround in Horizon or whether it is only a packaging issue.

Revision history for this message
Dr. Jens Harbott (j-harbott) wrote :

OK, it turns out the downgrade did not help in the end. After lots of further debugging, it seems that we were hitting an instance of

https://cryptography.io/en/latest/faq/#starting-cryptography-using-mod-wsgi-produces-an-internalerror-during-a-call-in-register-osrandom-engine

and setting

    WSGIApplicationGroup %{GLOBAL}

in the apache2 config for the dashboard has resolved the issue for me. To confirm whether you are hitting the same issue, check for this line in your dashboard log before the handshake error:

extern "Python": function Cryptography_rand_bytes() called, but @ffi.def_extern() was not called in the current subinterpreter. Returning 0.

Revision history for this message
Saverio Proto (zioproto) wrote :

I also fixed the problem with:

apt-get remove python-openssl

on Ubuntu Xenial.

My version was:

python-openssl 16.1.0-1~cloud0

Revision history for this message
Jean-Philippe Evrard (jean-philippe-evrard) wrote :

Could you check whether adding UCA to your horizon containers makes everything work fine?

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to openstack-ansible-pip_install (master)

Fix proposed to branch: master
Review: https://review.openstack.org/430208

Changed in openstack-ansible:
assignee: nobody → Jean-Philippe Evrard (jean-philippe-evrard)
status: Won't Fix → In Progress
Revision history for this message
Jimmy McCrory (jimmy-mccrory) wrote :

I can confirm that this fixes the issue:
https://bugs.launchpad.net/openstack-ansible/+bug/1624791/comments/17

Upgrading cryptography within the venv also seems to work, but that's probably not feasible because of stable/newton's upper constraint requirements.
https://github.com/openstack/requirements/blob/stable/newton/upper-constraints.txt#L95

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to openstack-ansible-pip_install (master)

Reviewed: https://review.openstack.org/430208
Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-pip_install/commit/?id=fe4fa1882229c22478357c82a71a4227d868b8c5
Submitter: Jenkins
Branch: master

commit fe4fa1882229c22478357c82a71a4227d868b8c5
Author: Jean-Philippe Evrard <email address hidden>
Date: Tue Feb 7 11:26:39 2017 +0000

    Use RDO/UCA everywhere

    The pip_install role installs binary dependencies, like python-openssl
    (on Ubuntu), or pyOpenSSL (on Centos).

    By default, we didn't configure extra sources (like UCA or RDO)
    in the pip_install role. Because the repo is built using UCA or RDO
    (current default), we could have inconsistency issues:
    containers that haven't UCA or RDO applied during their role execution
    would run with a venv packaged for a different python-openssl
    (or python-cryptography).

    This commit brings the consistency by installing UCA/RDO everywhere.

    Closes-Bug: 1624791

    Change-Id: I9b5cd40b2972c93af348d4ddfde21a038cf9becc
    Signed-off-by: Jean-Philippe Evrard <email address hidden>

Changed in openstack-ansible:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to openstack-ansible-pip_install (stable/ocata)

Fix proposed to branch: stable/ocata
Review: https://review.openstack.org/431608

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to openstack-ansible-pip_install (stable/ocata)

Reviewed: https://review.openstack.org/431608
Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-pip_install/commit/?id=55afc237cb253c81d135e4def7cbb230e5dbddff
Submitter: Jenkins
Branch: stable/ocata

commit 55afc237cb253c81d135e4def7cbb230e5dbddff
Author: Jean-Philippe Evrard <email address hidden>
Date: Tue Feb 7 11:26:39 2017 +0000

    Use RDO/UCA everywhere

    The pip_install role installs binary dependencies, like python-openssl
    (on Ubuntu), or pyOpenSSL (on Centos).

    By default, we didn't configure extra sources (like UCA or RDO)
    in the pip_install role. Because the repo is built using UCA or RDO
    (current default), we could have inconsistency issues:
    containers that haven't UCA or RDO applied during their role execution
    would run with a venv packaged for a different python-openssl
    (or python-cryptography).

    This commit brings the consistency by installing UCA/RDO everywhere.

    Closes-Bug: 1624791

    Change-Id: I9b5cd40b2972c93af348d4ddfde21a038cf9becc
    Signed-off-by: Jean-Philippe Evrard <email address hidden>
    (cherry picked from commit fe4fa1882229c22478357c82a71a4227d868b8c5)

tags: added: in-stable-ocata
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to openstack-ansible-pip_install (stable/newton)

Fix proposed to branch: stable/newton
Review: https://review.openstack.org/432257

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to openstack-ansible-pip_install (stable/newton)

Reviewed: https://review.openstack.org/432257
Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-pip_install/commit/?id=d189881d5472922161028e6a0f3ec1d457e2a8e2
Submitter: Jenkins
Branch: stable/newton

commit d189881d5472922161028e6a0f3ec1d457e2a8e2
Author: Jean-Philippe Evrard <email address hidden>
Date: Tue Feb 7 11:26:39 2017 +0000

    Use RDO/UCA everywhere

    The pip_install role installs binary dependencies, like python-openssl
    (on Ubuntu), or pyOpenSSL (on Centos).

    By default, we didn't configure extra sources (like UCA or RDO)
    in the pip_install role. Because the repo is built using UCA or RDO
    (current default), we could have inconsistency issues:
    containers that haven't UCA or RDO applied during their role execution
    would run with a venv packaged for a different python-openssl
    (or python-cryptography).

    This commit brings the consistency by installing UCA/RDO everywhere.

    Closes-Bug: 1624791

    Change-Id: I9b5cd40b2972c93af348d4ddfde21a038cf9becc
    Signed-off-by: Jean-Philippe Evrard <email address hidden>
    (Manually cherry picked from commit
    fe4fa1882229c22478357c82a71a4227d868b8c5)

tags: added: in-stable-newton
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/openstack-ansible-pip_install 14.1.0

This issue was fixed in the openstack/openstack-ansible-pip_install 14.1.0 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/openstack-ansible-pip_install 15.0.0.0rc2

This issue was fixed in the openstack/openstack-ansible-pip_install 15.0.0.0rc2 release candidate.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/openstack-ansible-pip_install 16.0.0.0b1

This issue was fixed in the openstack/openstack-ansible-pip_install 16.0.0.0b1 development milestone.

Revision history for this message
Michael Craft (michaelgcraft) wrote :

On a fresh 14.2.1 install I'm hitting this bug. Making this apache2 config change corrected it for now.

https://bugs.launchpad.net/openstack-ansible/+bug/1624791/comments/17

    WSGIProcessGroup horizon
    #WSGIApplicationGroup horizon
    WSGIApplicationGroup %{GLOBAL}

Revision history for this message
Gary W. Smith (gary-w-smith) wrote :

Closing the horizon bug since this was fixed without a horizon change.

Changed in horizon:
status: New → Invalid
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to openstack-ansible-os_horizon (master)

Reviewed: https://review.openstack.org/526423
Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-os_horizon/commit/?id=dfbc2a56b6fd0cf0a98fe6a2dc046a44c30a05b2
Submitter: Zuul
Branch: master

commit dfbc2a56b6fd0cf0a98fe6a2dc046a44c30a05b2
Author: Adrien Cunin <email address hidden>
Date: Thu Dec 7 15:46:01 2017 +0100

    Set WSGIApplicationGroup %{GLOBAL} as recommended

    mod_wsgi hangs trying to import the recent versions of
    python-gobject-base used by python-keyring library, which is in turn
    used by python-keystoneclient. This does not happen if the
    WSGIApplicationGroup is global.

    Change-Id: I4c7408699fddf327feb1c3b47e8e47cf2dd946f1
    Closes-Bug: #1708655
    Closes-Bug: #1624791
    Related-Bug: #1700176

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to openstack-ansible-os_horizon (stable/pike)

Fix proposed to branch: stable/pike
Review: https://review.openstack.org/529033

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to openstack-ansible-os_horizon (stable/ocata)

Fix proposed to branch: stable/ocata
Review: https://review.openstack.org/529034

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to openstack-ansible-os_horizon (stable/newton)

Fix proposed to branch: stable/newton
Review: https://review.openstack.org/529035

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to openstack-ansible-os_horizon (stable/newton)

Reviewed: https://review.openstack.org/529035
Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-os_horizon/commit/?id=9dda4788e6bc6472cff4bd1068cecb39eeba02a3
Submitter: Zuul
Branch: stable/newton

commit 9dda4788e6bc6472cff4bd1068cecb39eeba02a3
Author: Adrien Cunin <email address hidden>
Date: Thu Dec 7 15:46:01 2017 +0100

    Set WSGIApplicationGroup %{GLOBAL} as recommended

    mod_wsgi hangs trying to import the recent versions of
    python-gobject-base used by python-keyring library, which is in turn
    used by python-keystoneclient. This does not happen if the
    WSGIApplicationGroup is global.

    Change-Id: I4c7408699fddf327feb1c3b47e8e47cf2dd946f1
    Closes-Bug: #1708655
    Closes-Bug: #1624791
    Related-Bug: #1700176
    (cherry picked from commit dfbc2a56b6fd0cf0a98fe6a2dc046a44c30a05b2)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to openstack-ansible-os_horizon (stable/pike)

Reviewed: https://review.openstack.org/529033
Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-os_horizon/commit/?id=41d8d7f055c58390586c0fc6991e8ca265299b78
Submitter: Zuul
Branch: stable/pike

commit 41d8d7f055c58390586c0fc6991e8ca265299b78
Author: Adrien Cunin <email address hidden>
Date: Thu Dec 7 15:46:01 2017 +0100

    Set WSGIApplicationGroup %{GLOBAL} as recommended

    mod_wsgi hangs trying to import the recent versions of
    python-gobject-base used by python-keyring library, which is in turn
    used by python-keystoneclient. This does not happen if the
    WSGIApplicationGroup is global.

    Change-Id: I4c7408699fddf327feb1c3b47e8e47cf2dd946f1
    Closes-Bug: #1708655
    Closes-Bug: #1624791
    Related-Bug: #1700176
    (cherry picked from commit dfbc2a56b6fd0cf0a98fe6a2dc046a44c30a05b2)

tags: added: in-stable-pike
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to openstack-ansible-os_horizon (stable/ocata)

Reviewed: https://review.openstack.org/529034
Committed: https://git.openstack.org/cgit/openstack/openstack-ansible-os_horizon/commit/?id=8fe29e992509983b3a672a644b87124666656d2a
Submitter: Zuul
Branch: stable/ocata

commit 8fe29e992509983b3a672a644b87124666656d2a
Author: Adrien Cunin <email address hidden>
Date: Thu Dec 7 15:46:01 2017 +0100

    Set WSGIApplicationGroup %{GLOBAL} as recommended

    mod_wsgi hangs trying to import the recent versions of
    python-gobject-base used by python-keyring library, which is in turn
    used by python-keystoneclient. This does not happen if the
    WSGIApplicationGroup is global.

    Change-Id: I4c7408699fddf327feb1c3b47e8e47cf2dd946f1
    Closes-Bug: #1708655
    Closes-Bug: #1624791
    Related-Bug: #1700176
    (cherry picked from commit dfbc2a56b6fd0cf0a98fe6a2dc046a44c30a05b2)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/openstack-ansible-os_horizon 17.0.0.0b3

This issue was fixed in the openstack/openstack-ansible-os_horizon 17.0.0.0b3 development milestone.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/openstack-ansible-os_horizon 16.0.7

This issue was fixed in the openstack/openstack-ansible-os_horizon 16.0.7 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/openstack-ansible-os_horizon 15.1.15

This issue was fixed in the openstack/openstack-ansible-os_horizon 15.1.15 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/openstack-ansible-os_horizon 17.0.0.0rc1

This issue was fixed in the openstack/openstack-ansible-os_horizon 17.0.0.0rc1 release candidate.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/openstack-ansible-os_horizon 14.2.15

This issue was fixed in the openstack/openstack-ansible-os_horizon 14.2.15 release.
