Comment 2 for bug 723480

Marco Romeny (marco-mimecom) wrote :

I am thinking that soon there will be many more virtual servers running in the wild than physical ones, and I have a hunch that a lot of them are configured with 1 GB or even 512 MB of RAM. If the defaults are too high, the result is a server that crashes after some time (mine took about a day), and at least for me it was not apparent where to find the error. I know I should have done the math, but I somehow assumed the defaults were pre-calculated to fit a really small machine. In fact, one of the first mistakes I made was to increase the limit to 100 children under that very assumption, and it definitely maxed my server out. If I hadn't made that mistake, I might simply have rebooted the server once a week and never found the problem.
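The math I should have done is simple: worst case, each child can grow up to PHP's memory_limit, so the pool can reach max_children times that. A rough sketch of the arithmetic (the 100-children figure is my mistake above; the 1 GB / 128 MB numbers are just the small-VPS scenario, not measured values):

```python
def worst_case_mb(max_children, memory_limit_mb):
    """Upper bound on pool memory in MB: every child at its limit."""
    return max_children * memory_limit_mb

ram_mb = 1024     # a small 1 GB virtual server
limit_mb = 128    # PHP's recent default memory_limit

print(worst_case_mb(100, limit_mb))  # 12800 MB -- over 12x the RAM
print(worst_case_mb(4, limit_mb))    # 512 MB -- half the RAM
```

Even the smaller default of 4 children can, in the worst case, eat half of a 1 GB box before the OS and web server get anything.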

I could blame this on PHP too: before 5.2.0 the default memory_limit was 8 MB, in 5.2.0 it became 16 MB, and then recently it jumped to 128 MB. That's a large step to take.

Now, I'm not sure 128 MB per PHP process is really necessary (although I know that WordPress loves memory), but in any case I'd rather have an underutilized server by default than one that slowly dives into dementia by default.

I even think 4 might be too many for max_children when the minimum requirement for Ubuntu Server is listed as 256 MB. That number pretty much says "you can run most stuff comfortably with 512 MB".
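Going the other way, you can derive a defensible max_children from available RAM: leave headroom for the OS and the web server, then divide what's left by what one PHP child actually uses. A sketch under stated assumptions (the 192 MB headroom and 30 MB per-child figures are guesses for illustration, not measurements):

```python
def safe_max_children(ram_mb, headroom_mb, per_child_mb):
    # RAM left after OS/web-server headroom, divided by the
    # resident size of one PHP child; never go below 1.
    return max(1, (ram_mb - headroom_mb) // per_child_mb)

# A 512 MB server, ~192 MB reserved, children averaging ~30 MB:
print(safe_max_children(512, 192, 30))   # 10
# Sized against the 128 MB memory_limit worst case instead:
print(safe_max_children(512, 192, 128))  # 2
```

Sizing against typical usage versus sizing against the memory_limit worst case gives very different answers, which is exactly why a default picked for one class of machine can sink another.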

Whether it's better to have many concurrent PHP instances or a lot of memory allocated to each one probably depends on the application.