cffi mismatch causing "cannot import name greenthread"

Bug #1289345 reported by Sergey Lukjanov
This bug affects 12 people
Affects: OpenStack Core Infrastructure
Status: Fix Released
Importance: Medium
Assigned to: Sean Dague
Milestone: icehouse

Bug Description

The tempest-dsvm jobs are failing on rax slaves.

[15:38:13] <sdague> SergeyLukjanov: so johnthetubaguy says that rax updated images yesterday

Revision history for this message
Sergey Lukjanov (slukjanov) wrote :

Attempt to disable rax slaves for devstack-precise - https://review.openstack.org/#/c/78942/

description: updated
Revision history for this message
Sean Dague (sdague) wrote :

The root of the failure is

http://logs.openstack.org/08/78808/2/check/check-tempest-dsvm-postgres-full/1d3793d/console.html

2014-03-07 08:32:31.619 | Traceback (most recent call last):
2014-03-07 08:32:31.619 | File "/usr/local/lib/python2.7/dist-packages/eventlet/__init__.py", line 5, in <module>
2014-03-07 08:32:31.619 | from eventlet import greenthread
2014-03-07 08:32:31.620 | ImportError: cannot import name greenthread

This *only* fails on RAX nodes (where it fails all the time) and *never* fails on HP nodes.

John Garbutt confirmed new RAX images were published at: <johnthetubaguy> sdague: so last updated on the 12.04 PVHVM is 2014-03-05 23:21:54 -0600

That would mean this is the first day that we would be running nodepool updates of this image.

I *cannot* replicate this issue with vanilla devstack on 12.04 PVHVM on my personal RAX account.
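
A small diagnostic sketch for this kind of comparison, assuming it is run with the same interpreter the failing job uses; the module names come from the traceback above, everything else here is illustrative rather than part of the original report:

from __future__ import print_function
import traceback

try:
    from eventlet import greenthread  # the import that fails on the RAX nodes
    print("eventlet.greenthread imported from", greenthread.__file__)
except ImportError:
    traceback.print_exc()

# Show where each package in the import chain is coming from, to spot a
# system-context vs virtualenv mismatch.
for name in ("eventlet", "OpenSSL", "cryptography", "cffi"):
    try:
        mod = __import__(name)
        print(name, getattr(mod, "__version__", "unknown"), mod.__file__)
    except Exception as exc:
        print(name, "failed to import:", exc)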

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to elastic-recheck (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/78968

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to elastic-recheck (master)

Reviewed: https://review.openstack.org/78968
Committed: https://git.openstack.org/cgit/openstack-infra/elastic-recheck/commit/?id=bdda3b3111a059b670420550dc9fe3ad7f004c94
Submitter: Jenkins
Branch: master

commit bdda3b3111a059b670420550dc9fe3ad7f004c94
Author: Sean Dague <email address hidden>
Date: Fri Mar 7 09:08:21 2014 -0500

    query for bad rax image

    Change-Id: I33d4b0947f402524d74a524a521e5fb1047ab318
    Related-Bug: #1289345

Revision history for this message
Jeremy Stanley (fungi) wrote : Re: tempest-dsvm jobs failing on rax slaves

This turns out to have been an adverse interaction between cffi 0.8.2 (just released today), which tempest was pulling into its virtualenv because it was incorrectly removing our mirror overrides, and cffi 0.8.1, which was getting installed into the system context. Tox for tempest was using site_packages=true, and as best as we could tell from the tracebacks, pycrypto was compiled against one version but attempting to use the other. This bubbled up through eventlet.greenthread, but the real problem was:

Type "help", "copyright", "credits" or "license" for more information.
>>> from eventlet import greenthread
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/eventlet/__init__.py", line 10, in <module>
    from eventlet import convenience
  File "/usr/local/lib/python2.7/dist-packages/eventlet/convenience.py", line 3, in <module>
    from eventlet import greenio
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenio.py", line 542, in <module>
    from OpenSSL import SSL
  File "/usr/local/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in <module>
    from OpenSSL import rand, crypto, SSL
  File "/usr/local/lib/python2.7/dist-packages/OpenSSL/rand.py", line 11, in <module>
    from OpenSSL._util import (
  File "/usr/local/lib/python2.7/dist-packages/OpenSSL/_util.py", line 4, in <module>
    binding = Binding()
  File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 83, in __init__
    self._ensure_ffi_initialized()
  File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 99, in _ensure_ffi_initialized
    libraries)
  File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/bindings/utils.py", line 72, in build_ffi
    ext_package="cryptography",
  File "/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/cffi/api.py", line 341, in verify
    lib = self.verifier.load_library()
  File "/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/cffi/verifier.py", line 73, in load_library
    self._write_source()
  File "/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/cffi/verifier.py", line 125, in _write_source
    file = open(self.sourcefilename, 'w')
IOError: [Errno 2] No such file or directory: '/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/bindings/__pycache__/_cffi__x5eaa210axf0ae7e21.c'

The reason we didn't seem to hit it on hpcloud workers is that their base images come preinstalled by the provider with pycrypto from a DEB package, which avoided this convoluted circumstance. The issue subsided once the pip mirror was updated to include cffi 0.8.2, so devstack began installing the same release on the system that tempest was pulling directly from pypi.python.org.

Separately, the issue causing tempest to hit PyPI directly (probably for months now) is mostly fixed by https://review.openstack.org/78987, but something in grenade didn't quite get solved by that, so investigation should probably continue there.
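
As a rough illustration of the mismatch described above, a sketch like the following could compare the cffi seen by the system interpreter with the one seen by the tempest tox virtualenv. The virtualenv path is inferred from the traceback; both interpreter paths should be treated as examples, not guaranteed locations:

from __future__ import print_function
import subprocess

# Print cffi's version and location as seen from each interpreter. With
# site_packages=true, cryptography imported from the system context can end
# up using whichever cffi wins on sys.path, so these two should agree.
SNIPPET = "import cffi; print(cffi.__version__ + ' ' + cffi.__file__)"

for python in ("/usr/bin/python",  # system context (cffi 0.8.1 in the bad case)
               "/opt/stack/new/tempest/.tox/full/bin/python"):  # tox venv (0.8.2)
    try:
        print(python, "->", subprocess.check_output([python, "-c", SNIPPET]).strip())
    except (OSError, subprocess.CalledProcessError) as exc:
        print(python, "->", exc)

Once the mirror was updated and both contexts got the same cffi release, as described above, the two lines would be expected to match.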

Changed in openstack-ci:
status: New → In Progress
summary: - tempest-dsvm jobs failing on rax slaves
+ cffi mismatch causing "cannot import name greenthread"
Changed in openstack-ci:
importance: Undecided → Medium
milestone: none → icehouse
status: In Progress → Fix Released
assignee: nobody → Sean Dague (sdague)