Traceback error on zodb on KARL OSI STAGING

Bug #554044 reported by JimPGlenn
This bug affects 1 person
Affects: KARL3
Status: Invalid
Importance: Medium
Assigned to: Chris Rossi
Milestone: (none)

Bug Description

Traceback when running evolve --latest on OSI STAGING:

Convert wiki page to folder: /communities/the-globetrotter/wiki/liberia
Convert wiki page to folder: /communities/the-globetrotter/wiki/mamba-point-hotel
No handlers could be found for logger "ZODB.Connection"
Traceback (most recent call last):
  File "bin/evolve", line 88, in <module>
    karl.scripts.evolve.main()
  File "/opt/karl3/src/karl/karl/scripts/evolve.py", line 99, in main
    evolve_to_latest(manager)
  File "/opt/karl3/eggs/repoze.evolution-0.3-py2.5.egg/repoze/evolution/__init__.py", line 90, in evolve_to_latest
    manager.evolve_to(version)
  File "/opt/karl3/eggs/repoze.evolution-0.3-py2.5.egg/repoze/evolution/__init__.py", line 62, in evolve_to
    evmodule.evolve(self.context)
  File "/opt/karl3/src/karl/karl/evolve/zodb/evolve7.py", line 33, in evolve
    catalog.reindex_doc(obj.docid, obj)
  File "/opt/karl3/src/karl/karl/models/catalog.py", line 47, in reindex_doc
    super(CachingCatalog, self).reindex_doc(*arg, **kw)
  File "/opt/karl3/eggs/repoze.catalog-0.7.0-py2.5.egg/repoze/catalog/catalog.py", line 46, in reindex_doc
    index.reindex_doc(docid, obj)
  File "/opt/karl3/eggs/repoze.catalog-0.7.0-py2.5.egg/repoze/catalog/indexes/text.py", line 23, in reindex_doc
    return self.index_doc(docid, object)
  File "/opt/karl3/eggs/repoze.catalog-0.7.0-py2.5.egg/repoze/catalog/indexes/common.py", line 33, in index_doc
    return super(CatalogIndex, self).index_doc(docid, value)
  File "/opt/karl3/eggs/zope.index-3.6.0-py2.5-linux-x86_64.egg/zope/index/text/textindex.py", line 50, in index_doc
    self.index.index_doc(docid, text)
  File "/opt/karl3/eggs/zope.index-3.6.0-py2.5-linux-x86_64.egg/zope/index/text/okapiindex.py", line 228, in index_doc
    count = BaseIndex.index_doc(self, docid, text)
  File "/opt/karl3/eggs/zope.index-3.6.0-py2.5-linux-x86_64.egg/zope/index/text/baseindex.py", line 96, in index_doc
    return self._reindex_doc(docid, text)
  File "/opt/karl3/eggs/zope.index-3.6.0-py2.5-linux-x86_64.egg/zope/index/text/okapiindex.py", line 234, in _reindex_doc
    return BaseIndex._reindex_doc(self, docid, text)
  File "/opt/karl3/eggs/zope.index-3.6.0-py2.5-linux-x86_64.egg/zope/index/text/baseindex.py", line 143, in _reindex_doc
    self._add_wordinfo(wid, newscore, docid)
  File "/opt/karl3/eggs/zope.index-3.6.0-py2.5-linux-x86_64.egg/zope/index/text/baseindex.py", line 253, in _add_wordinfo
    doc2score = self._wordinfo.get(wid)
  File "/opt/karl3/eggs/ZODB3-3.8.5-py2.5-linux-x86_64.egg/ZODB/Connection.py", line 815, in setstate
    self._setstate(obj)
  File "/opt/karl3/eggs/ZODB3-3.8.5-py2.5-linux-x86_64.egg/ZODB/Connection.py", line 874, in _setstate
    self._reader.setGhostState(obj, p)
  File "/opt/karl3/eggs/ZODB3-3.8.5-py2.5-linux-x86_64.egg/ZODB/serialize.py", line 604, in setGhostState
    state = self.getState(pickle)
  File "/opt/karl3/eggs/ZODB3-3.8.5-py2.5-linux-x86_64.egg/ZODB/serialize.py", line 597, in getState
    return unpickler.load()
  File "/opt/karl3/eggs/ZODB3-3.8.5-py2.5-linux-x86_64.egg/ZODB/serialize.py", line 469, in _persistent_load
    return self.load_persistent(*reference)
  File "/opt/karl3/eggs/ZODB3-3.8.5-py2.5-linux-x86_64.egg/ZODB/serialize.py", line 513, in load_persistent
    self._cache[oid] = obj
MemoryError

JimPGlenn (jpglenn09)
Changed in karl3:
importance: Undecided → High
importance: High → Medium
assignee: nobody → Paul Everitt (paul-agendaless)
Revision history for this message
Paul Everitt (paul-agendaless) wrote :

Another super-fun one for Chris.

Changed in karl3:
assignee: Paul Everitt (paul-agendaless) → Chris Rossi (chris-archimedeanco)
Chris Rossi (chris-archimedeanco) wrote :

I can see from the traceback that this is caused by the server running out of memory. In terms of the stack trace itself, there is nothing to fix, since it is a machine resource issue. Staging had to be restarted today because it had crashed completely due to RAM overconsumption.

I think what triggered the problem was spinning up the "experimental" OSI instance. ZODB seems to require a certain amount of RAM for each process (if not thread) that is proportional to the size of the database. Because OSI is such a large database, this means a considerable amount of RAM. I'm seeing, on staging, between 240M and 444M per process. This includes not just the mod_wsgi processes, but the various and sundry "daemon" processes. Obviously this is too much RAM, and spinning up the second OSI instance just tipped the box over the edge.

For now I am running only the two OSI instances on the machine and the "experimental" instance has its daemons turned off. This has RAM utilization hovering just over the edge of swapping. (Staging has 2G RAM and 1G of swap.) This will allow us to evaluate the pgtextindex thing if we like, for now, but something needs to be done.

If it's ok with you, Paul, on Monday I'd like to dig into why ZODB needs so much RAM just to open a connection when the database is fairly large. I'm not an expert on tuning ZODB, but something clearly needs to be done, so I might as well get my hands dirty and try to figure this out. It may also be time to rethink the daemons.
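One knob worth noting here: ZODB's per-connection object cache is bounded by a configurable object count, and shrinking it trades speed for memory. A sketch of what that looks like in a ZConfig-style database section (the path and numbers below are illustrative assumptions, not KARL's actual configuration):

```
<zodb main>
  # Maximum number of objects each connection keeps in its in-memory
  # cache. The default (on ZODB 3.8) is 5000; a database with many
  # large objects can still consume hundreds of MB per connection,
  # so lowering this caps per-process RAM at the cost of more loads.
  cache-size 2000
  <filestorage>
    path /var/karl/Data.fs
  </filestorage>
</zodb>
```

Note that cache-size counts objects, not bytes, so the actual RAM footprint depends on object sizes; that mismatch may be part of why the OSI database in particular blows up the way Chris describes.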

I'm marking this ticket as invalid. If we should create a ticket for memory management, then let's make a new ticket for that.

Changed in karl3:
status: New → Invalid