Very Large Memory usage - bug?

Bug #2002805 reported by Gene
This bug affects 1 person
Affects: dkimpy-milter
Status: Invalid
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Need some help/guidance.

For the first time ever the primary mail server went OOM. As I comb through it I find dkimpy-milter is using more memory than any other process, more than double anything else on the system. This occurred after a reboot, when it processed all the mail the firewall had been holding since the OOM killed all mail services on the primary server. The primary has 32 GB physical plus swap.

As of now, on the primary mail server (DKIM signing), dkimpy-milter is consuming 1.2 GB of memory. I am monitoring to see how it grows, but this seems insanely large for what it does. On the firewall (verification) there are 7 copies of dkimpy-milter and each is taking up 410 MB, a big chunk of the 16 GB physical.

For comparison, unbound is 250 MB, nsd is 50 MB, and imap is 100 MB.

Is this normal/expected? Given that each process handles a single email and then waits, and there are no 400 MB emails, wouldn't you expect a much, much smaller memory footprint?

Suggestions?

thanks

GeneC

Revision history for this message
Gene (gene-arch) wrote :

The memory has not changed over the day - so I do NOT see a memory leak.

Still, the memory footprint seems disproportionately large for what it's doing. Do the memory usages look sensible to you?

I am still stumped by what triggered the OOM last night - I have since created huge memory pressure on the system and seen no problems at all ... dang cosmic rays perhaps ...

Revision history for this message
Scott Kitterman (kitterman) wrote :

What version of python? What version of libmilter? What version of the pymilter bindings? What version of dkimpy and dkimpy-milter?

One limitation of the design is that to either sign or verify a message, the entire message has to be in memory, so if you're dealing with unusually large messages, that could be a factor.
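
(For anyone following along, this is roughly what that constraint looks like at the API level. A minimal sketch using dkimpy's top-level sign/verify functions; the file names, selector, and domain are placeholders, and dkimpy-milter's real code path differs.)

# Simplified sketch: dkimpy's top-level API takes the complete message as one
# bytes object, so peak memory is at least proportional to message size.
import dkim

with open("message.eml", "rb") as f:
    message = f.read()              # whole message pulled into memory

with open("private.key", "rb") as f:
    privkey = f.read()

# Signing returns a b"DKIM-Signature: ..." header to prepend to the message.
sig = dkim.sign(message, b"selector", b"example.org", privkey)

# Verification likewise works on the full message bytes.
print(sig.split(b";")[0], dkim.verify(message))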

Revision history for this message
Gene (gene-arch) wrote :

No significantly large messages that I am aware of - a good chunk of the email traffic, volume-wise, is the kernel and Arch mailing lists, and nothing there is terribly large I'd guess.

versions:

dkimpy-milter git head: 0540d3cb17bf4f43ed5a1f9fd8f41f9ef03346b7
dkim git head 0540d3cb17bf4f43ed5a1f9fd8f41f9ef03346b7

pymilter git head 7deec90a5983f524c308c923115e306e75581bdd

libmilter 8.17.1

Just restarted the milter and with 0 messages processed, the size is 156 MB.
So perhaps it grows to some steady state and never shrinks?
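
One way to check for that steady state is to log virtual vs. resident size over time. A small monitoring sketch, assuming Linux /proc and that the milter's PID is passed on the command line (the 60-second interval is arbitrary):

#!/usr/bin/env python3
# Log VmSize (virtual) and VmRSS (resident) of a process once a minute.
import sys, time

def vm_stats(pid):
    """Return (VmSize, VmRSS) in kB, read from /proc/<pid>/status."""
    stats = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split()[:2]
                stats[key.rstrip(":")] = int(value)
    return stats["VmSize"], stats["VmRSS"]

if __name__ == "__main__":
    pid = int(sys.argv[1])
    while True:
        size, rss = vm_stats(pid)
        print(f"{time.strftime('%H:%M:%S')} VmSize={size} kB VmRSS={rss} kB",
              flush=True)
        time.sleep(60)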

Revision history for this message
Gene (gene-arch) wrote :

Checked again - the size has grown to 410 MB, but only 35 MB is resident.

Scott, do you expect the memory to only grow, or would you expect it to eventually trickle back down through free() and return some memory to the system?

Revision history for this message
Gene (gene-arch) wrote :

Sorry, forgot:

Python 3.10.9

Arch is working on 3.11 but it's not quite ready ... hopefully soon :)

Revision history for this message
Scott Kitterman (kitterman) wrote :

I checked, and my memory allocations are similar on one system and less on another, both on Debian stable. I'm not sure what happens over time, as I don't recall looking into it before. If it grows over time, that's almost certainly an issue in Python garbage collection or something in pymilter/libmilter, nothing specific to dkimpy-milter. I don't have any obvious suggestions here.
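
If it does turn out to grow, one way to narrow it down is the stdlib tracemalloc module, which only counts memory still referenced by Python objects and so separates Python-level retention from allocator-level growth. A rough sketch; where exactly to hook the start and the dump into the milter is an assumption:

# Rough sketch: tracemalloc counts only memory still referenced by Python
# objects, so it distinguishes GC-level retention from allocator-level growth.
import tracemalloc

tracemalloc.start()

# ... let the process handle mail for a while ...

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:10]:
    print(stat)

# If the traced total stays small while VmSize/VmRSS keep climbing, the growth
# is below Python (pymalloc arenas, glibc malloc, or libmilter), not objects
# kept alive by the garbage collector.
print("traced (current, peak):", tracemalloc.get_traced_memory())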

Revision history for this message
Gene (gene-arch) wrote :

Thanks for the thoughts - as of this morning it's still about 413 MB with 21 MB resident. My guess is that after the mail problems, when the firewall flushed its queue, the milter got a larger than usual volume, driving memory up to 1.2 GB.

This definitely suggests that the memory is not going back to the OS - the cause is likely as you suggest.
Not sure what to say, but in rough terms:

1) The Python code (pymilter) would need to be intentionally holding a cache or something to be the source of the problem.

2) libmilter is old sendmail C code (very old, unless someone rewrote it) and, while possible, seems unlikely to be the cause.

3) CPython - while not all memory can be returned by moving the brk() (from my own experience), with 400-odd MB a good part of it should be returnable to the system. I would be a bit surprised if CPython itself were the problem, but it's certainly possible. I've not read the code or run any tests.

4) glibc free() - seems unlikely.

I have not looked at dkimpy-milter, but if you're confident the code is not holding large chunks of memory for some reason, then it's likely pymilter or possibly CPython.

Absent intentional caching in the code, the problem does seem likely to be in CPython itself - which seems strange.

I suppose we could write a small test daemon which chews some memory, then lets CPython give it back, and see if the memory is ever returned to the system - that test is at least quite doable, right? Something like the sketch below.
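
A rough sketch of that test (illustrative only, not a faithful model of the milter's allocation pattern; the malloc_trim() call via ctypes is a glibc-specific extra to show how much free-but-unreturned heap remains):

#!/usr/bin/env python3
# Chew roughly 400 MB of byte strings, drop them, force a collection, and
# report RSS at each step to see whether memory goes back to the kernel.
import ctypes, ctypes.util, gc, os, time

def rss_kb():
    with open(f"/proc/{os.getpid()}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

print("baseline RSS:", rss_kb(), "kB")

# Many 64 kB strings fragment the heap more like real mail traffic than one
# huge allocation would.
blobs = [b"x" * 65536 for _ in range(6400)]   # ~400 MB total
print("after alloc RSS:", rss_kb(), "kB")

del blobs
gc.collect()
time.sleep(1)
print("after del + gc RSS:", rss_kb(), "kB")

# Ask glibc to hand trimmed heap back to the kernel (glibc-specific).
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.malloc_trim(0)
print("after malloc_trim RSS:", rss_kb(), "kB")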

Thanks Scott, and feel free to close this - happy 2023 :)

Revision history for this message
Scott Kitterman (kitterman) wrote :

Thanks. Closing then.

Changed in dkimpy-milter:
status: New → Invalid