Collective zc.buildout recipes

zeopack fails

Reported by René Fleschenberg on 2008-02-26
This bug affects 1 person

Affects: collective.buildout
Importance: Medium
Assigned to: Unassigned

Bug Description

On one of my machines, running e.g. bin/zeopack -p 8100 -d 7 fails with the following error:

    Unhandled exception in thread started by
    Error in sys.excepthook:

    Original exception was:
Yes, that is the full actual error message.

Plone 3.0.6, Python 2.4.4 on Debian.

If you need any more information, I'd be glad to provide it.
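(Editor's note: this truncated message shape is what CPython prints when sys.excepthook itself fails while reporting an exception, for example because the interpreter is already shutting down and module globals have been cleared. A minimal, purely illustrative sketch that reproduces the same message shape; zeopack's internals are not shown, and the snippet below is not taken from the report:)

```python
import subprocess
import sys
import textwrap

# Illustrative only: install a sys.excepthook that itself raises, which is
# one way to get CPython's truncated "Error in sys.excepthook:" report.
code = textwrap.dedent("""
    import sys

    def broken_hook(exc_type, exc_value, exc_tb):
        raise ValueError("excepthook failed too")

    sys.excepthook = broken_hook
    raise RuntimeError("original problem")
""")

result = subprocess.run(
    [sys.executable, "-c", code],
    capture_output=True,
    text=True,
)
# Both marker lines from the truncated report appear in result.stderr:
# "Error in sys.excepthook:" and "Original exception was:"
```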

René Fleschenberg (rene.f) wrote :

The bug seems to be due to a corrupted ZODB and unrelated to buildout.

Changed in collective.buildout:
status: New → Invalid
René Fleschenberg (rene.f) wrote :

Reopening. My prior conclusions were wrong. After investigating the issue a little more, with the kind help of Darryl Dixon, it seems that the error message quoted above is shown every time zeopack runs, even though the packing actually succeeds. I can now reproduce this on two different machines.

Changed in collective.buildout:
status: Invalid → New
Hanno Schlichting (hannosch) wrote :

I have seen this happening as well :(

Changed in collective.buildout:
importance: Undecided → Medium
status: New → Confirmed
peridoc (kschonrock) wrote :

I am seeing this too with Plone 3.1.5.1 and Zope 2.10.6. Is there any way to correct the issue? I get these errors reported every time I run zeopack from a cron job.
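(Editor's note: while the underlying cause is unclear, a common mitigation for noisy cron jobs is to capture the command's output to a log file rather than letting cron mail it. A hypothetical crontab entry; the paths, port, and schedule below are placeholders, not taken from this report:)

```shell
# Hypothetical crontab entry: nightly zeopack, full output kept for diagnosis.
# Paths, port, and schedule are placeholders.
0 2 * * * /opt/plone/buildout/bin/zeopack -p 8100 -d 7 >> /var/log/zeopack.log 2>&1
```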

Fred Condo (fcondo) wrote :

For what it's worth, I am seeing this error as well with this configuration:

    * Plone 3.2
    * CMF 2.1.2
    * Zope (Zope 2.10.6-final, python 2.4.5, linux2)
    * Python 2.4.5 (#1, Feb 6 2009, 10:48:55) [GCC 3.4.6 20060404 (Red Hat 3.4.6-10)]
    * PIL 1.1.6

(production mode)

Ben Liles (bliles) wrote :

I'm also running into this problem. Has anyone figured anything out?

I am seeing this too.
* Plone 3.3.5
* Python 2.4
* Zope 2.10

I ran it through pdb, and it finished without an error. I also noticed that a temporary "pack" file was being manipulated in the filestorage directory shortly after the command finished, before the new Data.fs was created.

All of which makes me think there is a multi-threaded race condition somewhere in this issue.
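(Editor's note: if the guess above is right and the message comes from a worker thread that is still running, or raising, while the interpreter shuts down, the usual fix pattern in Python is to join worker threads and capture their errors explicitly before the main thread exits. A minimal sketch of that pattern; this is not zeopack's actual code, and the names and the simulated failure are invented for illustration:)

```python
import threading

errors = []  # collected by the worker instead of escaping at interpreter teardown

def pack_worker():
    # Stand-in for the real pack work; the RuntimeError is simulated.
    try:
        raise RuntimeError("simulated pack-time failure")
    except Exception as exc:
        errors.append(str(exc))

t = threading.Thread(target=pack_worker)
t.start()
t.join()  # main thread waits, so no thread is left running at shutdown

for message in errors:
    print("pack worker reported:", message)
```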
