impossible to create nova instances after upgrading to rocky

Bug #1812672 reported by Junien F
Affects                                Status        Importance  Assigned to     Milestone
OpenStack Nova Cloud Controller Charm  Fix Released  Critical    Sahid Orentino  -
Ubuntu Cloud Archive                   Fix Released  High        Corey Bryant    -
oslo.cache                             Invalid       Undecided   Herve Beraud    -
nova (Ubuntu)                          Fix Released  Critical    Unassigned      -
python-oslo.cache (Ubuntu)             Fix Released  Critical    Unassigned      -

Bug Description

Hi,

I'm using bionic with the 18.11 charms. I recently upgraded OpenStack from Queens to Rocky. After that, I was unable to create nova instances - they were stuck in the BUILD state, with no errors in the nova-cloud-controller, neutron-api or keystone logs.

While investigating, I noticed that "openstack compute service list" was empty, and that this was generating an error in nova-api-os-compute.log (see below for the traceback).

My investigation led to https://github.com/openstack/oslo.cache/blob/master/oslo_cache/_memcache_pool.py#L48 being the problem. If I comment out this line, then "openstack service list" works fine and I can create instances without problems. However, I don't know what the consequences are in the long run.
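For reference, the class around that line is quoted in full later in this thread; a minimal excerpt follows (assuming the file on master matched what is quoted below). Later comments independently point at the __new__ override as the trigger:

class _MemcacheClient(memcache.Client):
    # Overrides that restore the object methods replaced by threading.local.
    __delattr__ = object.__delattr__
    __getattribute__ = object.__getattribute__
    __new__ = object.__new__   # the line commented out here as a workaround
    __setattr__ = object.__setattr__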

Please advise.

Thanks !

nova-api-os-compute.log traceback when running "openstack service list":

2019-01-21 10:45:43.729 87283 ERROR nova.context Traceback (most recent call last):
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/oslo_cache/_memcache_pool.py", line 163, in _get
2019-01-21 10:45:43.729 87283 ERROR nova.context conn = self.queue.pop().connection
2019-01-21 10:45:43.729 87283 ERROR nova.context IndexError: pop from an empty deque
2019-01-21 10:45:43.729 87283 ERROR nova.context
2019-01-21 10:45:43.729 87283 ERROR nova.context During handling of the above exception, another exception occurred:
2019-01-21 10:45:43.729 87283 ERROR nova.context
2019-01-21 10:45:43.729 87283 ERROR nova.context Traceback (most recent call last):
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/nova/context.py", line 438, in gather_result
2019-01-21 10:45:43.729 87283 ERROR nova.context result = fn(cctxt, *args, **kwargs)
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/oslo_versionedobjects/base.py", line 184, in wrapper
2019-01-21 10:45:43.729 87283 ERROR nova.context result = fn(cls, context, *args, **kwargs)
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/nova/objects/service.py", line 601, in get_all
2019-01-21 10:45:43.729 87283 ERROR nova.context context, db_services)
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/nova/availability_zones.py", line 88, in set_availability_zones
2019-01-21 10:45:43.729 87283 ERROR nova.context service['host'], az)
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/nova/availability_zones.py", line 108, in update_host_availability_zone_cache
2019-01-21 10:45:43.729 87283 ERROR nova.context cache.delete(cache_key)
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/nova/cache_utils.py", line 122, in delete
2019-01-21 10:45:43.729 87283 ERROR nova.context return self.region.delete(key)
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/dogpile/cache/region.py", line 1002, in delete
2019-01-21 10:45:43.729 87283 ERROR nova.context self.backend.delete(key)
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/dogpile/cache/backends/memcached.py", line 188, in delete
2019-01-21 10:45:43.729 87283 ERROR nova.context self.client.delete(key)
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/oslo_cache/backends/memcache_pool.py", line 31, in _run_method
2019-01-21 10:45:43.729 87283 ERROR nova.context with self.client_pool.acquire() as client:
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
2019-01-21 10:45:43.729 87283 ERROR nova.context return next(self.gen)
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/oslo_cache/_memcache_pool.py", line 127, in acquire
2019-01-21 10:45:43.729 87283 ERROR nova.context conn = self.get(timeout=self._connection_get_timeout)
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/eventlet/queue.py", line 295, in get
2019-01-21 10:45:43.729 87283 ERROR nova.context return self._get()
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/oslo_cache/_memcache_pool.py", line 214, in _get
2019-01-21 10:45:43.729 87283 ERROR nova.context conn = ConnectionPool._get(self)
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/oslo_cache/_memcache_pool.py", line 165, in _get
2019-01-21 10:45:43.729 87283 ERROR nova.context conn = self._create_connection()
2019-01-21 10:45:43.729 87283 ERROR nova.context File "/usr/lib/python3/dist-packages/oslo_cache/_memcache_pool.py", line 206, in _create_connection
2019-01-21 10:45:43.729 87283 ERROR nova.context return _MemcacheClient(self.urls, **self._arguments)
2019-01-21 10:45:43.729 87283 ERROR nova.context TypeError: object() takes no parameters
2019-01-21 10:45:43.729 87283 ERROR nova.context

Junien F (axino)
Changed in cloud-archive:
importance: Undecided → Critical
Revision history for this message
Junien F (axino) wrote :

Note that I have not been able to reproduce this outside of the nova-api-os-compute context. The following works fine in a python3 REPL:

$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from oslo_cache import _memcache_pool
>>> a=_memcache_pool.MemcacheClientPool(urls="foo", arguments={}, maxsize=10, unused_timeout=10)
>>> a._create_connection()
<oslo_cache._memcache_pool._MemcacheClient object at 0x7f3842588528>

Revision history for this message
Sahid Orentino (sahid-ferdjaoui) wrote :

So far I can't find any relevant detail which could help fix the issue.

The package version used for Queens is oslo.cache 1.28 and for Rocky it is oslo.cache 1.30; I did not find any relevant changes between them. On Bionic we ship python-memcache 1.57.

I may need to try to reproduce the issue locally, since it's not clear to me why commenting out [0] fixes it.

Did you try restarting the service without commenting out line 48? Also, if you have a chance, clean all *.pyc files from /usr/lib/python3/dist-packages/oslo_cache/ and then restart the service (again without commenting anything out).

In the meantime I'm continuing my investigation.

[0] https://github.com/openstack/oslo.cache/blob/master/oslo_cache/_memcache_pool.py#L48

James Page (james-page)
Changed in cloud-archive:
assignee: nobody → James Page (james-page)
assignee: James Page (james-page) → nobody
assignee: nobody → Sahid Orentino (sahid-ferdjaoui)
Revision history for this message
Junien F (axino) wrote :

Rephrasing my comment.

Yes, I did restart the services without commenting out line 48; it didn't help.

I just tried removing the .pyc files in /usr/lib/python3/dist-packages/oslo_cache/, uncommenting line 48 and restarting the services, and it still fails.

> The package version used for Queens is oslo.cache 1.28 and for Rocky it is oslo.cache 1.30; I did not find any relevant changes between them. On Bionic we ship python-memcache 1.57.

There is one BIG difference between queens and rocky: in queens, nova services were running with python2. In rocky, they're running with python3.

Revision history for this message
Sahid Orentino (sahid-ferdjaoui) wrote :

I understand your point regarding Python versions, but I still don't see any incompatible changes, and commenting out that line should not make a difference.

I noticed something strange in the logs that you shared with us. The memcache module used is coming from a package called "dogpile" (/usr/lib/python3/dist-packages/dogpile) when it should normally come from /usr/lib/python3/dist-packages/memcache.py.

Can you try opening a Python prompt, importing "memcache", and then printing the version and path?

My thinking is that there is a package conflict.

You should have something like:

(.py3) ubuntu@bug1812672:~$ python
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import memcache
>>> memcache.__version__
'1.57'
>>> memcache.__file__
'/usr/lib/python3/dist-packages/memcache.py'

(.py2) ubuntu@bug1812672:~$ python
Python 2.7.15rc1 (default, Nov 12 2018, 14:31:15)
[GCC 7.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import memcache
>>> memcache.__version__
'1.57'
>>> memcache.__file__
'/usr/lib/python2.7/dist-packages/memcache.pyc'

Revision history for this message
Junien F (axino) wrote :

Looks good:

$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import memcache
>>> memcache.__version__
'1.57'
>>> memcache.__file__
'/usr/lib/python3/dist-packages/memcache.py'

dogpile is just a caching API
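To spell that out: dogpile.cache provides the caching API that nova calls, and oslo_cache.memcache_pool is a backend registered with it, which is why dogpile frames appear in the traceback alongside oslo_cache. A rough, simplified sketch of that relationship (hypothetical example, not how nova wires it up; the memcached address and argument values are assumptions):

from dogpile.cache import make_region

# dogpile.cache is the caching API; 'oslo_cache.memcache_pool' is the backend
# that oslo.cache registers with it (the same backend selected in nova.conf).
region = make_region().configure(
    'oslo_cache.memcache_pool',
    arguments={'url': ['127.0.0.1:11211']},  # example memcached endpoint
)

# In a plain (non-monkey-patched) interpreter this works; under
# eventlet.monkey_patch() the backend's pooled client hits the TypeError
# discussed in this bug.
region.set('some_key', 'some_value')
print(region.get('some_key'))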

Revision history for this message
Sahid Orentino (sahid-ferdjaoui) wrote :

Can you share the controller's nova.conf? I would say you have configured 'memcached_servers'. Our charm (charm-nova-cloud-controller) configures the backend to use 'oslo_cache.memcache_pool', whereas based on nova/cache_utils.py the right value nowadays would be 'dogpile.cache.memcached'.

If possible, can you update nova.conf as follows (please make sure to restart the service):

[cache]
backend = dogpile.cache.memcached

I'm not sure that 'oslo_cache.memcache_pool' is still well maintained.

Revision history for this message
Corey Bryant (corey.bryant) wrote :

It seems we should be OK with oslo_cache.memcache_pool. Check the documentation (search for "backend =") at: https://docs.openstack.org/nova/rocky/configuration/sample-config.html

Let's try to recreate this and narrow down the issue. I wouldn't be surprised if it turns out to be a very simple upstream fix for py3.

Revision history for this message
Junien F (axino) wrote :
Revision history for this message
Corey Bryant (corey.bryant) wrote :

I was able to recreate this with juju by relating nova-cloud-controller to memcached.

Revision history for this message
Corey Bryant (corey.bryant) wrote :

To add to comment #8: that of course was a queens deployment; I created an instance, upgraded nova-cc to rocky, and then 'openstack server list' errored out and nova-api-os-compute.log contained the traceback: https://paste.ubuntu.com/p/nj93928wXt/ (it should be the same as what Junien posted, but I'm posting it again as the formatting is better).

Revision history for this message
Corey Bryant (corey.bryant) wrote :

This is an interesting one and it's proving difficult to debug. It seems like the overrides in _MemcacheClient are having unintended side effects with python3. I'll look some more tomorrow, but for now I'm adding upstream to the bug. I wonder if we can get Alexander Makarov to take a look, as he added this code segment way back in 2015.

class _MemcacheClient(memcache.Client):
    """Thread global memcache client

    As client is inherited from threading.local we have to restore object
    methods overloaded by threading.local so we can reuse clients in
    different threads
    """
    __delattr__ = object.__delattr__
    __getattribute__ = object.__getattribute__
    __new__ = object.__new__
    __setattr__ = object.__setattr__

    def __del__(self):
        pass

Changed in python-oslo.cache (Ubuntu):
importance: Undecided → Critical
status: New → Triaged
Changed in cloud-archive:
status: New → Triaged
Revision history for this message
Corey Bryant (corey.bryant) wrote :

Note that commenting out __new__ = object.__new__ above does seem to "fix" things, but I'm not sure if that is the correct fix.

Revision history for this message
Corey Bryant (corey.bryant) wrote :

There's a big difference in the base classes of _MemcacheClient that are resolved at run time when comparing Junien's test case in comment #1 with the code being executed via nova-api-os-compute in the failure case. I'm also showing the arguments for the failing case here since I have them handy:

Successful testcase:
--------------------
CCB inspect.getmro(_MemcacheClient)=(<class 'oslo_cache._memcache_pool._MemcacheClient'>, <class 'memcache.Client'>, <class '_thread._local'>, <class 'object'>)
<oslo_cache._memcache_pool._MemcacheClient object at 0x7f984ddd04c8>

Failure via nova-api-os-compute:
--------------------------------
self._arguments={'dead_retry': 300, 'socket_timeout': 3.0}
self.urls=['10.5.0.117:11211']
inspect.getmro(_MemcacheClient)=(<class 'oslo_cache._memcache_pool._MemcacheClient'>, <class 'memcache.Client'>, <class 'eventlet.corolocal.local'>, <class 'eventlet.corolocal._localbase'>, <class 'object'>)

Revision history for this message
Corey Bryant (corey.bryant) wrote :

I'm guessing this is something to do with eventlet monkey patching.

Revision history for this message
Corey Bryant (corey.bryant) wrote :

This recreates with Junien's testcase + eventlet.monkey_patch():

ubuntu@juju-b1ca57-coreycb2-19:~$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import eventlet
>>> eventlet.monkey_patch()
>>> from oslo_cache import _memcache_pool
>>> a=_memcache_pool.MemcacheClientPool(urls="foo", arguments={}, maxsize=10, unused_timeout=10)
>>> a._create_connection()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3/dist-packages/oslo_cache/_memcache_pool.py", line 214, in _create_connection
    return _MemcacheClient(self.urls, **self._arguments)
TypeError: object() takes no parameters

And the base classes correspond to the nova-api-os-compute failure case:
inspect.getmro(_MemcacheClient)=(<class 'oslo_cache._memcache_pool._MemcacheClient'>, <class 'memcache.Client'>, <class 'eventlet.corolocal.local'>, <class 'eventlet.corolocal._localbase'>, <class 'object'>)
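One quick way to confirm whether an interpreter is running monkey-patched (which is what swaps threading.local for eventlet.corolocal.local in the bases above) is eventlet's own patcher helper; a small check along these lines (not taken from the thread):

import eventlet
import eventlet.patcher

# False in a plain python3 REPL; True after eventlet.monkey_patch(), which is
# what nova's API services do at startup, matching the failing MRO above.
print(eventlet.patcher.is_monkey_patched('thread'))
eventlet.monkey_patch()
print(eventlet.patcher.is_monkey_patched('thread'))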

Revision history for this message
Corey Bryant (corey.bryant) wrote :

In Ubuntu we should probably be running nova-api-os-compute under apache2 with mod_wsgi anyway, and as a bonus it will bypass this issue. I just tested successfully with the attached site enabled: add /etc/apache2/sites-enabled/wsgi-openstack-api-os-compute.conf, then sudo systemctl stop nova-api-os-compute && sudo systemctl restart apache2.

Changed in charm-nova-cloud-controller:
status: New → Triaged
importance: Undecided → Critical
Changed in cloud-archive:
status: Triaged → Fix Committed
Changed in charm-nova-cloud-controller:
assignee: nobody → Sahid Orentino (sahid-ferdjaoui)
Changed in cloud-archive:
assignee: Sahid Orentino (sahid-ferdjaoui) → nobody
assignee: nobody → Corey Bryant (corey.bryant)
Changed in charm-nova-cloud-controller:
status: Triaged → In Progress
Revision history for this message
Junien F (axino) wrote :

Where's the commit? I'm curious :)

Thanks

Revision history for this message
Corey Bryant (corey.bryant) wrote :
Revision history for this message
Junien F (axino) wrote :

Oh wow, ok, pretty big changes. Thanks!

Revision history for this message
Corey Bryant (corey.bryant) wrote :

Yeah sahid is focused on it. Btw thedac is going to look at your other bug.

Changed in nova (Ubuntu):
status: New → Fix Released
importance: Undecided → Critical
Revision history for this message
Corey Bryant (corey.bryant) wrote :

Package-wise this is fix-released for disco (stein). We're only switching to apache2+mod_wsgi in stein from a package point of view because it's a pretty big switch to SRU, plus the rocky packages are py2 by default. https://launchpad.net/ubuntu/+source/nova/2:19.0.0~b1~git2018120609.c9dca64fa6-0ubuntu4

Changed in oslo.cache:
status: New → Confirmed
assignee: nobody → Herve Beraud (herveberaud)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-nova-cloud-controller (master)

Reviewed: https://review.openstack.org/633482
Committed: https://git.openstack.org/cgit/openstack/charm-nova-cloud-controller/commit/?id=131497868f442bcf06bc199a36e2962b8ac0018d
Submitter: Zuul
Branch: master

commit 131497868f442bcf06bc199a36e2962b8ac0018d
Author: Sahid Orentino Ferdjaoui <email address hidden>
Date: Mon Jan 28 12:13:57 2019 +0100

    template: update conf template for placement-api

    Currently we directly use the one provided by charmhelper, which does
    not allow reusing it for another service. In this commit we symlink
    a new template called wsgi-placement-api.conf to
    charmhelper/../wsgi-openstack-api.conf.

    The disable_package_apache2_site() call has been added in
    do_openstack_upgrade() since previously it was not necessary to have
    it during this step.

    The disable_package_apache2_site() call has been added in
    upgrade-charm to ensure that we remove the old wsgi config for users
    who are already using bionic-rocky and are upgrading their charm.

    Partial-Bug: #1812672
    Change-Id: Idc3cad9304eaf9b610db20650c32cd754f016358
    Signed-off-by: Sahid Orentino Ferdjaoui <email address hidden>

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Reviewed: https://review.openstack.org/633218
Committed: https://git.openstack.org/cgit/openstack/charm-nova-cloud-controller/commit/?id=fc68571c51d81fd3e9786faebce3946564920ab5
Submitter: Zuul
Branch: master

commit fc68571c51d81fd3e9786faebce3946564920ab5
Author: Sahid Orentino Ferdjaoui <email address hidden>
Date: Fri Jan 25 15:20:57 2019 +0100

    context: extend HAProxyContext for placement API

    Since we will have to generate a HAProxyContext for
    nova-compute-os-api, this change adds a new class,
    PlacementAPIHAProxyContext, which extends HAProxyContext.

    Partial-Bug: #1812672
    Change-Id: I56920e3d9c5216cdd5a8ea8b83714e65b777a78b
    Signed-off-by: Sahid Orentino Ferdjaoui <email address hidden>

Changed in charm-nova-cloud-controller:
status: In Progress → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Reviewed: https://review.openstack.org/633219
Committed: https://git.openstack.org/cgit/openstack/charm-nova-cloud-controller/commit/?id=13eca55803c8320a401ccf923c29b7bfeb85d0fc
Submitter: Zuul
Branch: master

commit 13eca55803c8320a401ccf923c29b7bfeb85d0fc
Author: Sahid Orentino Ferdjaoui <email address hidden>
Date: Mon Jan 28 11:37:40 2019 +0100

    service: updates nova-api-os-compute service to use apache wsgi

    Due to an issue in python3 oslo_cache+eventlet when using
    memcached, as a workaround for Rocky it has been decided to move the
    nova-api-os-compute service from systemd to apache2.

    Closes-Bug: #1812672
    Depends-On: https://review.openstack.org/#/c/633218
    Depends-On: https://review.openstack.org/#/c/633482
    Change-Id: I3bf279638c5decf1020345f3d2e876e379144997
    Signed-off-by: Sahid Orentino Ferdjaoui <email address hidden>

Revision history for this message
Herve Beraud (herveberaud) wrote :

The error also occurs on stein. I have already submitted a fix to oslo.cache and am now waiting for it to merge so I can backport it to rocky:
https://review.openstack.org/#/c/634457/

See also this other bug: https://bugs.launchpad.net/oslo.cache/+bug/1812935

Revision history for this message
Corey Bryant (corey.bryant) wrote :

@herveberaud, I'm still able to recreate the error with your change: https://paste.ubuntu.com/p/9vYwwm3Byw/

It's possible I'm missing something but it looks like the patch doesn't fix the issue.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-nova-cloud-controller (stable/18.11)

Fix proposed to branch: stable/18.11
Review: https://review.openstack.org/637148

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: stable/18.11
Review: https://review.openstack.org/637149

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-nova-cloud-controller (stable/18.11)

Reviewed: https://review.openstack.org/637148
Committed: https://git.openstack.org/cgit/openstack/charm-nova-cloud-controller/commit/?id=503fe150ed94d9b499137e60b90868eb86d82bcb
Submitter: Zuul
Branch: stable/18.11

commit 503fe150ed94d9b499137e60b90868eb86d82bcb
Author: Sahid Orentino Ferdjaoui <email address hidden>
Date: Fri Jan 25 15:20:57 2019 +0100

    context: extend HAProxyContext for placement API

    Since we will have to generate a HAProxyContext for
    nova-compute-os-api, this change adds a new class,
    PlacementAPIHAProxyContext, which extends HAProxyContext.

    Partial-Bug: #1812672
    Change-Id: I56920e3d9c5216cdd5a8ea8b83714e65b777a78b
    Signed-off-by: Sahid Orentino Ferdjaoui <email address hidden>
    (cherry picked from commit fc68571c51d81fd3e9786faebce3946564920ab5)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Reviewed: https://review.openstack.org/637149
Committed: https://git.openstack.org/cgit/openstack/charm-nova-cloud-controller/commit/?id=9092ab63678962bb942b7e0ff03438c32147d60a
Submitter: Zuul
Branch: stable/18.11

commit 9092ab63678962bb942b7e0ff03438c32147d60a
Author: Sahid Orentino Ferdjaoui <email address hidden>
Date: Mon Jan 28 11:37:40 2019 +0100

    service: updates nova-api-os-compute service to use apache wsgi

    Due to an issue in python3 oslo_cache+eventlet when using
    memcached, as a workaround for Rocky it has been decided to move the
    nova-api-os-compute service from systemd to apache2.

    Conflicts:
      hooks/nova_cc_hooks.py
        in upgrade_charm() the conflict was because I did not
        cherry-pick the template update for the placement api
        (131497868f442bcf06bc199a36e2962b8ac0018d), which does not make
        sense for the purpose of a backport to stable. I removed the
        comment.

      hooks/nova_cc_utils.py
        in disable_package_apache_site() the conflict was also related
        to (131497868f442bcf06bc199a36e2962b8ac0018d).

    Closes-Bug: #1812672
    Change-Id: I3bf279638c5decf1020345f3d2e876e379144997
    Signed-off-by: Sahid Orentino Ferdjaoui <email address hidden>
    (cherry picked from commit 13eca55803c8320a401ccf923c29b7bfeb85d0fc)

Revision history for this message
Corey Bryant (corey.bryant) wrote :

This is now fixed in the nova-cloud-controller stable/18.11 branch.

Changed in cloud-archive:
importance: Critical → High
Revision history for this message
Corey Bryant (corey.bryant) wrote :

And available in the charm store.

Changed in charm-nova-cloud-controller:
status: Fix Committed → Fix Released
James Page (james-page)
Changed in cloud-archive:
status: Fix Committed → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to oslo.cache (stable/rocky)

Reviewed: https://review.opendev.org/640500
Committed: https://git.openstack.org/cgit/openstack/oslo.cache/commit/?id=caf5443de8ff4ff353d741a125f782c67f579f52
Submitter: Zuul
Branch: stable/rocky

commit caf5443de8ff4ff353d741a125f782c67f579f52
Author: Ben Nemec <email address hidden>
Date: Tue Feb 26 22:12:23 2019 +0000

    Fix memcache pool client in monkey-patched environments

    First off, this is an ugly hack, but we're dealing with code that
    essentially monkey-patches a monkey-patch. You reap what you sow.

    Per the linked bug, our connection pool client explodes on python 3
    with eventlet monkey-patching in force:

    TypeError: object() takes no parameters

    This is due to the way __new__ is overridden in the class. We need
    to strip arguments from the call before they get to object(), which
    doesn't accept args.

    Unfortunately, when we're _not_ monkey-patched, adding the new
    override implementation fails with:

    TypeError: object.__new__(_MemcacheClient) is not safe,
    use Client.__new__()

    As such, we need different implementations depending on whether we
    are monkey-patched or not. This change passes both with and without
    monkey-patching and adds a unit test that exposes the bug.

    Note that this is a temporary, backportable fix that will ultimately
    be replaced by a switch to the pymemcache library which does not
    have the threading.local problem being worked around here.

    Change-Id: I039dffadeebd0ff4479b9c870c257772c43aba53
    Partial-Bug: 1812935
    Closes-Bug: 1812672
    (cherry picked from commit f4a25f642991a7114b86f6eb7d0bac3d599953a6)

tags: added: in-stable-rocky
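To illustrate the approach the commit message above describes (different __new__ implementations depending on whether eventlet has monkey-patched threading), here is a rough sketch of the shape of the fix; this is an approximation of the idea, not the actual oslo.cache patch:

import memcache

try:
    import eventlet
    import eventlet.patcher
except ImportError:
    eventlet = None


class _MemcacheClient(memcache.Client):
    """Thread-global memcache client (see the class quoted earlier in this bug)."""

    __delattr__ = object.__delattr__
    __getattribute__ = object.__getattribute__
    __setattr__ = object.__setattr__

    if eventlet is not None and eventlet.patcher.is_monkey_patched('thread'):
        # Monkey-patched: the base class is eventlet.corolocal.local, and the
        # constructor arguments must be stripped before reaching
        # object.__new__(), which otherwise raises
        # "TypeError: object() takes no parameters".
        def __new__(cls, *args, **kwargs):
            return object.__new__(cls)
    else:
        # Not monkey-patched: keep the original override; the stripping variant
        # above would fail with "object.__new__(_MemcacheClient) is not safe".
        __new__ = object.__new__

    def __del__(self):
        pass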
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to oslo.cache (stable/queens)

Fix proposed to branch: stable/queens
Review: https://review.opendev.org/655468

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to oslo.cache (stable/queens)

Reviewed: https://review.opendev.org/655468
Committed: https://git.openstack.org/cgit/openstack/oslo.cache/commit/?id=38dcffa904c7d4b5bd61896b972c1e16477fcc56
Submitter: Zuul
Branch: stable/queens

commit 38dcffa904c7d4b5bd61896b972c1e16477fcc56
Author: Ben Nemec <email address hidden>
Date: Tue Feb 26 22:12:23 2019 +0000

    Fix memcache pool client in monkey-patched environments

    First off, this is an ugly hack, but we're dealing with code that
    essentially monkey-patches a monkey-patch. You reap what you sow.

    Per the linked bug, our connection pool client explodes on python 3
    with eventlet monkey-patching in force:

    TypeError: object() takes no parameters

    This is due to the way __new__ is overridden in the class. We need
    to strip arguments from the call before they get to object(), which
    doesn't accept args.

    Unfortunately, when we're _not_ monkey-patched, adding the new
    override implementation fails with:

    TypeError: object.__new__(_MemcacheClient) is not safe,
    use Client.__new__()

    As such, we need different implementations depending on whether we
    are monkey-patched or not. This change passes both with and without
    monkey-patching and adds a unit test that exposes the bug.

    Note that this is a temporary, backportable fix that will ultimately
    be replaced by a switch to the pymemcache library which does not
    have the threading.local problem being worked around here.

    Change-Id: I039dffadeebd0ff4479b9c870c257772c43aba53
    Partial-Bug: 1812935
    Closes-Bug: 1812672
    (cherry picked from commit f4a25f642991a7114b86f6eb7d0bac3d599953a6)
    (cherry picked from commit caf5443de8ff4ff353d741a125f782c67f579f52)

tags: added: in-stable-queens
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/oslo.cache 1.30.4

This issue was fixed in the openstack/oslo.cache 1.30.4 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/oslo.cache 1.28.1

This issue was fixed in the openstack/oslo.cache 1.28.1 release.

tags: added: py3
Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

As the change that got all of the Cloud Archive releases marked Fix Released was backported from master, I'm marking the Ubuntu bug task as Fix Released as well.

Changed in python-oslo.cache (Ubuntu):
status: Triaged → Fix Released
Revision history for this message
Takashi Kajinami (kajinamit) wrote :

This problem was mostly marked resolved, but there is no explanation of an actual action item in oslo.cache... I'll close this now, but please feel free to reopen it in case you know of any legitimate problem in oslo.cache itself.

Changed in oslo.cache:
status: Confirmed → Invalid