Comment 5 for bug 1352745

Paul Everitt (paul-agendaless) wrote : Re: [Bug 1352745] Change buildout to not re-build libmemcached

Given that putting too much thought into it was actually the problem last time (we were too ambitious with the combination), I agree we should go with a combination you have confidence in.

Those bullets look fine. Since this changes the buildout generated by osideploy, it won't affect developer buildouts, which is good. Thanks!

--Paul

> On Mar 18, 2015, at 1:46 PM, Chris McDonough <email address hidden> wrote:
>
> On another project I used a combination of libmemcached-1.0.16 and
> pylibmc-1.3.0 successfully (on CentOS 6). There's no thought put into
> upgrading to latest and greatest here, I just happen to know both of
> these work together. I have logged in to karlstaging and both
> libmemcached-1.0.16 and pylibmc-1.3.0 seem to build OK there. I think
> what I'm going to try to do is:
>
> - Add code to osideploy which puts a version of libmemcached-1.0.16 into /srv/{whatever}/opt/libmemcached-1.0.16
> via "./configure --prefix=/srv/{whatever}/opt/libmemcached-1.0.16 --without-memcached" if it doesn't
> already exist there.
>
> - Remove the cmmi recipe for libmemcached from the buildout.cfg
> generated by osideploy app server template.
>
> - Change the recipe for the pylibmc egg in the buildout.cfg generated by osideploy app server template to use an
> -rpath of "/srv/{whatever}/opt/libmemcached-1.0.16" and update the version of pylibmc in the version pins to 1.3.0.
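
For illustration, the three steps above might come out roughly like this in the osideploy-generated buildout.cfg. This is only a sketch: the section name, the exact option spelling for zc.recipe.egg:custom, and the /srv/{whatever} placeholder are assumptions, not copied from the real template.

```ini
# Assumes osideploy has already built libmemcached once, outside buildout:
#   ./configure --prefix=/srv/{whatever}/opt/libmemcached-1.0.16 --without-memcached
#   make && make install
# The cmmi part for libmemcached is removed from this buildout entirely.

[versions]
pylibmc = 1.3.0

[pylibmc]
# Hypothetical section name; zc.recipe.egg:custom builds the egg against
# the pre-installed libmemcached instead of a buildout-compiled copy.
recipe = zc.recipe.egg:custom
egg = pylibmc
include-dirs = /srv/{whatever}/opt/libmemcached-1.0.16/include
library-dirs = /srv/{whatever}/opt/libmemcached-1.0.16/lib
rpath = /srv/{whatever}/opt/libmemcached-1.0.16/lib
```

(The email gives the -rpath as the prefix itself; the shared objects normally land in the lib/ subdirectory, so which path the template should use would need to be checked.)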
>
> --
> You received this bug notification because you are subscribed to the bug
> report.
> https://bugs.launchpad.net/bugs/1352745
>
> Title:
> Change buildout to not re-build libmemcached
>
> Status in OSF KARL4:
> Confirmed
>
> Bug description:
> We're not going to do all the work originally described below.
> Instead, for now, we're only going to take libmemcached out of the
> buildout and rely on it being in the system. This might mean an update
> to pylibmc.
>
> #####
>
> Our production (and staging) updates now take too long. It's long
> enough to warrant some initial investigation. For this task:
>
> - Watch the next karlstaging build/run
> - Identify some low-hanging fruit on what is taking a long time
> - Give some recommendations on speeding up
>
> As an example, I believe we recompile memcache (or redis?) on each of
> the four VMs, each time we update. Is there a lib that is missing
> from the gocept host environment? Or perhaps we're pointing to a
> private copy?
>
> Other ideas:
>
> - Are we going to our private package repo on github for everything?
> Can we reduce network latency by using pypi.gocept.net?
>
> Don't spend more than an hour on this.
>
> To manage notifications about this bug go to:
> https://bugs.launchpad.net/karl4/+bug/1352745/+subscriptions