php-fcgi: max_execution_time causes memory leaks

Bug #1677578 reported by Vasya Pupkin on 2017-03-30
Affects               Status     Importance  Assigned to
php-imagick (Ubuntu)  Confirmed  Undecided   Unassigned
php7.0 (Ubuntu)       Invalid    High        Unassigned

Bug Description

I noticed that php processes do not free memory after killing scripts that run longer than allowed by the max_execution_time directive.

My setup is lighttpd with php configured as fcgi. Sample script (requires php-imagick) that can easily hit the default 30 second time limit and uses a lot of memory, so leaks are easy to notice:

<?php
    define('__BASE__', 'images');
    $photos = [];
    if ($album_root = opendir(__BASE__)) {
        while (false !== ($entry = readdir($album_root))) {
            if (preg_match('/\.(jpe?g|gif|png)$/i', $entry)) {
                $photos[] = $entry;
            }
        }
        closedir($album_root);
        sort($photos);
        if (!file_exists('thumbs')) {
            mkdir('thumbs');
        }
        foreach($photos as $photo) {
            $thumb = 'thumbs/'.$photo;
            if (!file_exists($thumb)) {
                $image = new \Imagick(realpath(__BASE__.'/'.$photo));
                $image->thumbnailImage(64, 48, true, false);
                $image->writeImage($thumb);
                $image->clear();
                $image = null;
            }
        }
    }
?>

Just put many big JPGs into the images directory.

ProblemType: Bug
DistroRelease: Ubuntu 16.04
Package: php7.0-cgi 7.0.15-0ubuntu0.16.04.4
ProcVersionSignature: Ubuntu 4.8.0-45.48~16.04.1-generic 4.8.17
Uname: Linux 4.8.0-45-generic x86_64
ApportVersion: 2.20.1-0ubuntu2.5
Architecture: amd64
Date: Thu Mar 30 14:37:16 2017
InstallationDate: Installed on 2011-04-14 (2177 days ago)
InstallationMedia: Ubuntu-Server 10.04.2 LTS "Lucid Lynx" - Release amd64 (20110211.1)
SourcePackage: php7.0
UpgradeStatus: Upgraded to xenial on 2016-07-30 (242 days ago)

Vasya Pupkin (shadowlmd) on 2017-03-30
description: updated

apt-get install lighttpd php7.0-cgi
sudo lighttpd-enable-mod fastcgi fastcgi-php

Repro: php leaks ~50MB each time; place this at /var/www/html/index.php:
<?php
   header("Content-Type: text/plain");
   ini_set('max_execution_time', 3);
   for ($i = 0; $i<100; $i++){
      $a[$i] = array_fill(0, 16384, '1234567890-foobar');
   }
   echo "Leaking " . memory_get_usage() . "\n";
   #busy wait until killed, and consume execution time (so no sleep)
   $st_tm = time();
   $diff=0;
   while (1){
     if ((time() - $st_tm) > $diff) {
             $diff=(time() - $st_tm);
             echo "Waiting to Die " . date('h:i:s') . "\n";
             flush();
     }
   }
?>

Track consumption and trigger it 5 times:
$ apt install smem
$ smem | grep www
$ for i in $(seq 1 5); do wget http://127.0.0.1/index.php; done
$ smem | grep www

Pre:
19338 www-data /usr/bin/php-cgi 0 44 969 5164
19339 www-data /usr/bin/php-cgi 0 44 969 5164
19340 www-data /usr/bin/php-cgi 0 44 969 5164
19341 www-data /usr/bin/php-cgi 0 44 969 5164
19336 www-data /usr/sbin/lighttpd -f /etc/ 0 1244 1309 2400
19337 www-data /usr/bin/php-cgi 0 16548 17772 23392

Post:
19336 www-data /usr/sbin/lighttpd -f /etc/ 0 1544 1601 2764
19337 www-data /usr/bin/php-cgi 0 15564 17113 23216
19339 www-data /usr/bin/php-cgi 0 40432 41522 47184
19340 www-data /usr/bin/php-cgi 0 40432 41522 47184
19341 www-data /usr/bin/php-cgi 0 40432 41522 47184
19338 www-data /usr/bin/php-cgi 0 40456 41886 47872

Ok, that is a rise, but still the same processes.
Let's speed that up a bit and modify the wget loop to be async:
for i in $(seq 1 100); do (wget http://127.0.0.1/index.php &); done
And run a few rounds of it.

Post:
19336 www-data /usr/sbin/lighttpd -f /etc/ 0 1908 1965 3128
19337 www-data /usr/bin/php-cgi 0 13204 14371 19800
19339 www-data /usr/bin/php-cgi 0 53884 54746 59540
19340 www-data /usr/bin/php-cgi 0 53900 54813 59752
19338 www-data /usr/bin/php-cgi 0 53900 54985 60268
19341 www-data /usr/bin/php-cgi 0 53888 55006 60324

No matter what I do it doesn't rise over ~60MB per worker process.
I'd assume there is some smart caching/heap reuse in place.
The same is true on Ubuntu 16.04 and 17.04, so this is no recent fix or regression; it has always behaved this way.

Does your case exhaust the system's memory to the point of crashing?
How many cgi-bin processes do you have and what memory do they consume?
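For repeated rounds of this, the smem snapshots above can be reduced to a single number. A sketch using only procps `ps` and awk (the `php-cgi` process name is taken from the output above; the helper name `rss_kb` is made up here):

```shell
#!/bin/sh
# Sum the resident set size (KiB) of all php-cgi workers, so growth
# between test rounds shows up as a single delta.
rss_kb() {
    # -C selects by command name; no matching process sums to 0
    ps -C php-cgi -o rss= | awk '{ s += $1 } END { print s + 0 }'
}

before=$(rss_kb)
# ... drive requests here, e.g. the wget loop from above ...
after=$(rss_kb)
echo "php-cgi RSS delta: $((after - before)) KiB"
```

A steadily growing delta across identical rounds would indicate leak-like behavior; a delta that plateaus matches the heap-reuse interpretation above.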

Changed in php7.0 (Ubuntu):
status: New → Incomplete
Vasya Pupkin (shadowlmd) wrote :

Why didn't you try my example instead? I believe it doesn't free memory used by objects or something like that. I noticed the issue when php processes ate up all free memory on the server, reaching about 500 MB each and growing.

Vasya Pupkin (shadowlmd) wrote :

And yes, the system went a bit unresponsive at this point with 100% swap usage.

Vasya Pupkin (shadowlmd) wrote :

> How many cgi-bin processes do you have and what memory do they consume?

8 php-cgi processes: 2 backend, 3 workers per backend. And here's the memory usage after I set max_execution_time to 600:

 4567 www-data /usr/bin/php-cgi 0 48 1587 6496
 4569 www-data /usr/bin/php-cgi 0 48 1587 6496
 4571 www-data /usr/bin/php-cgi 0 48 1587 6496
 4568 www-data /usr/bin/php-cgi 0 4848 6760 14488
 4572 www-data /usr/bin/php-cgi 0 5488 7042 13996
 4570 www-data /usr/bin/php-cgi 0 9012 10894 18592
 4565 www-data /usr/bin/php-cgi 0 8988 11695 21976
 4566 www-data /usr/bin/php-cgi 0 10368 12776 21860

No matter how many times I run the script, memory usage of a single process never gets above 30 MB. If I let php kill the script because of max_execution_time, it leaks 30-100 MB and never frees it up.

I didn't try your example because "Just put many big JPGs to images directory." was too vague for everyone trying to help on this bug to reproduce it the same way.

Following your comment #3 that in the faulty case you had up to 500M per process, the limit that I found does seem to get exceeded; now we need to find a way to reproduce that with fewer uncertainties.

Let's assume it is down to php objects or even something imagemagick specific; the critical part we then need to integrate into my test is:
                $image = new \Imagick(realpath(__BASE__.'/'.$photo));
                $image->thumbnailImage(64, 48, true, false);
                $image->writeImage($thumb);
                $image->clear();
                $image = null;

I had no time today, but if you could check whether modifying my case gives one that still triggers, that would be great.
Maybe one can use Imagick::newImage instead of files: create a new single-color image, create a thumbnail, and then go to be killed.

I'll subscribe myself and check back if I find some time in the next days.

Vasya Pupkin (shadowlmd) wrote :

I tried with this script:

<?php

$image = new \Imagick();
$image->newImage(4096, 4096, new ImagickPixel('red'), 'jpg');
while (1) {
    sleep(1);
}

?>

It's even worse, because the script never gets killed; the php-cgi process becomes unresponsive and can only be killed with SIGKILL.

"Never" is too much of an extreme, it does not consume execution time and is thereby not killed by the max_execution_time limit as I outlined before.
But that is works-as-designed and not a bug, at some point later on the request itself will timeout - I don't know what the limits are thou.

tags: added: server-next

Also thanks Vasya for your active participation!

# mem check with
smem | grep www
# driving some requests via
for j in $(seq 1 10); do for i in $(seq 1 100); do (wget http://10.0.4.156/index.php &); done; sleep 11s; done

# getting the php imagick dependencies
$ apt-get install php-imagick
$ sudo service lighttpd restart

And I modified the test code according to our discussion to:
<?php
   header("Content-Type: text/plain");
   ini_set('max_execution_time', 10);
   $image = new \Imagick();
   $image->newImage(8192, 8192, new ImagickPixel('red'), 'jpg');
   echo "Leaking " . memory_get_usage() . "\n";
   #busy wait until killed, and consume execution time (so no sleep)
   $st_tm = time();
   $diff=0;
   while (1){
     if ((time() - $st_tm) > $diff) {
             $diff=(time() - $st_tm);
             echo "Waiting to Die " . date('h:i:s') . "\n";
             flush();
     }
   }
?>

But that is only leaking ~350k each time and never grows beyond that.
While doing so, I think I found that the processes stay at whatever size they reach.

In my former examples I leaked ~50-60M and that became the process size.
Here the size stayed at some hundred KB, which matches the leak.

OTOH, no matter how many requests I threw against it, it never behaved like a leak that slowly adds up.
So maybe it is not a "leak" which accumulates over time, but instead just excessive memory use?
I'd expect some other max barrier to kick in; let's verify that next.

If I try to get "too much" via the var alloc I hit:
"Allowed memory size of 134217728 bytes exhausted (tried to allocate 528384 bytes)"

That matches the default conf of:
memory_limit = 128M

If I do so via Imagick on the heap, I get an exception at some point into the loop:
   for ($i = 0; $i<100; $i++){
     $image[$i] = new \Imagick();
     $image[$i]->newImage(8192, 8192, new ImagickPixel('red'), 'jpg');
   }
Throws: "ImagickException: Unable to create new image"
That seems to be out of mem as well, as I can avoid it with lower counts or lower image sizes.

So it is not leaking, but it is also not exceeding its maximum amount?

Ok, I need some random images to get closer to your original case now ...

Along the way @Vasya - what is your php memory_limit set to for this - still the default 128M?

Vasya Pupkin (shadowlmd) wrote :

Yep, it's at 128M. But it's a per-script limit, so the php process can eat more as long as it gets new scripts to execute, I believe.

Also tried your example, no leaks for me here. Very interesting.

Vasya Pupkin (shadowlmd) wrote :

This example, however, works perfectly:

<?php
    header("Content-Type: text/plain");
    ini_set('max_execution_time', 10);
    $st_tm = time();
    $diff = 0;
    while (1){
        $image = new \Imagick(realpath('./IMG_2466.JPG'));
        $image->thumbnailImage(64, 48, true, false);
        $image->clear();
        $image = null;
        if ((time() - $st_tm) > $diff) {
            $diff = (time() - $st_tm);
            echo "Waiting to Die " . date('h:i:s') . "\n";
            flush();
        }
    }
?>

So you only need a single big enough image (I used an 11 MB jpeg). It doesn't work with newImage, probably because thumbnailImage then takes no time to complete, and we need the script killed while thumbnailImage is still working.

As I said I was trying to get closer to your case.

# get some pictures
$ apt-get install ubuntu-wallpapers-zesty ubuntu-wallpapers-karmic ubuntu-wallpapers-lucid ubuntu-wallpapers-maverick ubuntu-wallpapers-natty ubuntu-wallpapers-oneiric ubuntu-wallpapers-precise ubuntu-wallpapers-quantal ubuntu-wallpapers-raring ubuntu-wallpapers-saucy ubuntu-wallpapers-trusty ubuntu-wallpapers-utopic ubuntu-wallpapers-vivid ubuntu-wallpapers-wily
$ mkdir /var/www/html/thumbs
$ chgrp www-data /var/www/html/thumbs
$ chmod g+w /var/www/html/thumbs

Processing those took ~16 seconds in my case,
so I duplicated the pictures 3 times:

$ cd /usr/share/backgrounds
$ for i in *; do cp "$i" "dup_1_$i"; cp "$i" "dup_2_$i"; cp "$i" "dup_3_$i"; done

The code now is:
<?php
   header("Content-Type: text/plain");
   ini_set('max_execution_time', 30);

   $photos = [];
   if ($album_root = opendir("/usr/share/backgrounds")) {
     while (false !== ($entry = readdir($album_root))) {
       if (preg_match('/\.(jpe?g|gif|png)$/i', $entry)) {
         $photos[] = $entry;
       }
     }
     closedir($album_root);
     sort($photos);
     foreach($photos as $photo) {
       echo "Processing " . $photo . " at " . date('h:i:s') . "\n";
       $thumb = 'thumbs/'.$photo;
       if (!file_exists($thumb)) {
         $image = new \Imagick(realpath('/usr/share/backgrounds/'.$photo));
         $image->thumbnailImage(64, 48, true, false);
         $image->writeImage($thumb);
         $image->clear();
         $image = null;
       }
     }
   }

   echo "Leaking " . memory_get_usage() . "\n";
   #busy wait until killed, and consume execution time (so no sleep)
   $st_tm = time();
   $diff=0;
   while (1){
     if ((time() - $st_tm) > $diff) {
             $diff=(time() - $st_tm);
             echo "Waiting to Die " . date('h:i:s') . "\n";
             flush();
     }
   }
?>

With that it exceeds the 30 seconds just as it does in your case (it needs ~40).
While running I see its memory consumption varying between 80 and 120 MB,
but never exceeding that, as if some garbage collection works as expected.

Many concurrent requests got my system stuttering from CPU consumption, but nothing exceeded any memory limit.
So even when killed by max_execution_time while doing the same workload you have, it never exceeded the 128M I had set.

Vasya Pupkin (shadowlmd) wrote :

Can you please try my example above with a single big enough image?

Vasya Pupkin (shadowlmd) wrote :

Here's what I get. Memory usage before running tests:

11147 www-data /usr/bin/php-cgi 0 44 1606 6692
11148 www-data /usr/bin/php-cgi 0 44 1606 6692
11149 www-data /usr/bin/php-cgi 0 44 1606 6692
11151 www-data /usr/bin/php-cgi 0 44 2190 6688
11153 www-data /usr/bin/php-cgi 0 2572 4512 11060
11152 www-data /usr/bin/php-cgi 0 2592 4831 11732
11150 www-data /usr/bin/php-cgi 0 8308 15370 30124
11146 www-data /usr/bin/php-cgi 0 9324 15434 30404

Memory usage after ~15 iterations:

11147 www-data /usr/bin/php-cgi 0 44 1598 6624
11148 www-data /usr/bin/php-cgi 0 44 1598 6624
11149 www-data /usr/bin/php-cgi 0 44 1598 6624
11146 www-data /usr/bin/php-cgi 0 9316 13875 30336
11150 www-data /usr/bin/php-cgi 0 10608 14890 30056
11153 www-data /usr/bin/php-cgi 0 98576 101490 113700
11151 www-data /usr/bin/php-cgi 0 139952 143054 155600
11152 www-data /usr/bin/php-cgi 0 233152 236259 248880

Memory usage after ~30 iterations:

11147 www-data /usr/bin/php-cgi 4828 28 342 1300
11148 www-data /usr/bin/php-cgi 4812 44 358 1316
11149 www-data /usr/bin/php-cgi 4812 44 358 1316
11146 www-data /usr/bin/php-cgi 4840 8816 9130 10096
11150 www-data /usr/bin/php-cgi 16 10368 11616 14972
11151 www-data /usr/bin/php-cgi 5560 180348 181588 185108
11153 www-data /usr/bin/php-cgi 38944 232548 233525 236440
11152 www-data /usr/bin/php-cgi 39752 336436 337600 341208

Vasya Pupkin (shadowlmd) wrote :

You can use this image: http://home.bsrealm.net/IMG_2466.JPG

The only thing left that came to my mind would be a single super-huge image.
char="x"; sz=8192; convert -size ${sz}x${sz} xc:white ${char}.png; ll -h ${char}.png
char="y"; sz=16384; convert -size ${sz}x${sz} xc:white ${char}.png; ll -h ${char}.png

It might be interesting that a newer ImageMagick version (8:6.9.7.4+dfsg-2ubuntu3) refused to create an image of that size on the commandline.

And I think that did trigger what we are searching for, at least to some extent.
I got a /usr/bin/php-cgi process that consumed 4G of memory and 800M of swap while the 128M limit was active.

Some more tests showed that the 8192x8192 image works normally.
Only the 16384x16384 one bends the limits.

Of the usual process structure, it was one of the worker processes that stayed active that way:
/usr/sbin/lighttpd -D -f /etc/lighttpd/lighttpd.conf
  \_ /usr/bin/php-cgi
      \_ /usr/bin/php-cgi <=== X
      \_ /usr/bin/php-cgi
      \_ /usr/bin/php-cgi
      \_ /usr/bin/php-cgi

Being killed by max_execution_time only means the script stops processing; the php-cgi process is not killed and re-forked.
The real bug here, IMHO, is that the combination we have found allows php to exceed its memory limits.

That was done on Xenial.
On Zesty I ran into an issue loading the image:
Uncaught ImagickException: Invalid IHDR data `/usr/share/backgrounds/y.png' @ error/png.c/MagickPNGErrorHandler/1630 in /var/www/html/index.php:18

Seems to be similar to http://www.imagemagick.org/discourse-server/viewtopic.php?t=31552
and a check on too-large images applies here as well.

So for now let's assume that a newer ImageMagick will refuse to load it, as I see here, while an older one works.
The "problem", as I said, is that it can exceed the php-cgi allocation anyway.

I'll try to summarize how to trigger it in the next post, as I think we have to report this upstream.

Vasya Pupkin (shadowlmd) wrote :

I'm not sure you ran into what I am reporting. It looks like you are missing my comments or something. Here's a summary:

1. When the script finishes correctly, memory is freed completely.
2. There is no problem with my image.
3. Memory leaks only when the script is killed while the thumbnailImage method is doing something.
4. The example in comment #12, with the real image from comment #16, is the best way to test it.

Vasya Pupkin (shadowlmd) wrote :

And yes, Imagick easily allocates far more memory than memory_limit allows, and memory_get_usage() doesn't report this memory. That is probably why php can't free it when it kills the script: it is entirely up to the Imagick class to free the memory, and if php kills the script before that happens, the memory is leaked.

For the time being - until clarified otherwise - I think this could be used to DoS a server, so I'll mark it security.

information type: Public → Public Security

On the ImageMagick side there is
  /etc/ImageMagick-6/policy.xml
which has:
  <policy domain="resource" name="width" value="16KP"/>
That is not the case on Xenial, where the rule seems to have no effect (older version).
But all of that only masks the real problem, which is why php-cgi is allowed to grow out of bounds.

According to [1] I should be able to set that limit for cli scripts as well.
If possible, that would take a lot of things (lighttpd/cgi) out of the equation, so let's try this.

IMHO the minimal repro is the following, which makes the problem fairly easy to discuss.
This is more or less my prep for a report to PHP.

# Prep image
$ convert -size 16384x16384 xc:white test.png

# Ensure with a Test that the Limit is in place
$ php -d memory_limit=2M -r 'for ($i = 0; $i<100; $i++){ $a[$i] = array_fill(0, 16384, "1234567890-foobar"); }'
PHP Fatal error: Allowed memory size of 2097152 bytes exhausted (tried to allocate 528384 bytes) in Command line code on line 1

# Run the example exceeding the Limit
$ /usr/bin/time -v php -d memory_limit=2M -r '$image = new \Imagick("test.png");'
[...]
Maximum resident set size (kbytes): 5047096

Note: on more recent versions of Imagemagick you might have to lift/disable some default limits in /etc/ImageMagick-6/policy.xml
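The manual check above can be turned into a pass/fail script. A sketch, assuming GNU time at /usr/bin/time and the 16384x16384 test.png from the prep step (the 128M threshold mirrors the memory_limit used above):

```shell
#!/bin/sh
# Flag the bug class: peak RSS of the repro far exceeding php's memory_limit.
limit_kb=$((128 * 1024))   # memory_limit=128M, expressed in KiB

# GNU time prints "Maximum resident set size (kbytes): N" on stderr
peak_kb=$(/usr/bin/time -v \
    php -d memory_limit=128M -r '$image = new \Imagick("test.png");' 2>&1 \
    | awk -F': *' '/Maximum resident set size/ { print $2 }')

if [ "$peak_kb" -gt "$limit_kb" ]; then
    echo "BUG: peak RSS ${peak_kb} KiB exceeds memory_limit ${limit_kb} KiB"
else
    echo "OK: peak RSS ${peak_kb} KiB stays within memory_limit"
fi
```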

Confirmed on Xenial and Zesty with php v7.0.15

I'm not sure about the security/limits model for plugins like ImageMagick here, but I want at least to report it. Searching for it only gives me zillions of pages showing how to raise the limits :-/
I'm fine with being told that it is a bug in the plugin, if that really is the case.

[1]: http://php.net/manual/ini.core.php#ini.memory-limit

Changed in php7.0 (Ubuntu):
status: Incomplete → Confirmed
Vasya Pupkin (shadowlmd) wrote :

Why are you trying to create an image even though I provided a real image that can be used instead? Also, I don't see how it can be tested with php-cli, because php-cli dies along with the script. Can you please read comments #19 and #20?

I am concerned because I am not reporting that a script can use more memory than allowed by memory_limit; I am reporting a memory leak. It's probably related, but still: if they fix it so that Imagick cannot go beyond memory_limit, but it still leaks memory when killed while the thumbnailImage method is doing something, it will not be any better.

Vasya Pupkin (shadowlmd) wrote :

Also, your test is not correct, because it's not about how big the image is, but how long the thumbnailImage method takes to process it. The idea is to get php to kill a script while thumbnailImage is processing the image. On a relatively slow computer this may happen even with much smaller images.

Just when I wanted to report to PHP, I found [1], which also led me to [2].

With that I understood that ImageMagick, being a plugin, does not follow the php limit.
Instead it has its own structure of limits.

That made me check the default limit that is in place:
php7.0 -r 'echo Imagick::getResourceLimit( Imagick::RESOURCETYPE_MEMORY)/1024/1024 . "\n";'
7756.0078125

So ~8G is the limit; OK, I see we are below that with "just" 5G.

If I want to limit Imagick to something saner, say 1G, I can uncomment the examples in /etc/ImageMagick-6/policy.xml and set them like:
  <policy domain="resource" name="memory" value="1GiB"/>
  <policy domain="resource" name="map" value="1GiB"/>

BTW - even with the limit applied, the functionality still worked.

So I turned that new knowledge back to our webserver example.
I had to remember to restart it so that php-cgi picks up the new config, but now it stays inside the lower limits just fine.

Newer releases like Zesty come with much safer defaults:
  <policy domain="resource" name="memory" value="256MiB"/>
  <policy domain="resource" name="map" value="512MiB"/>
  <policy domain="resource" name="width" value="16KP"/>
  <policy domain="resource" name="height" value="16KP"/>
  <policy domain="resource" name="area" value="128MB"/>
  <policy domain="resource" name="disk" value="1GiB"/>
Xenial, in contrast, still has none of these set in the XML, which leaves them at the high defaults we have seen.
Unfortunately this is next to impossible to SRU [3] back, as some environments might rely on the bigger limits and would regress.

That said, I think we have it complete now:
1. understood the issue
2. addressed in the latest release (saner/lower defaults)
3. not SRUable to older releases

I hope that helps you make your setup work as you expect.
You were very active on your report and I want to thank you once more. Officially this is now "opinion", as it is not a "bug" in the common sense. If you think it is, please re-set the status to New and discuss.

I wonder if there is a good place to announce this as a potential DoS so people can check their configuration. I'll ping a few people and ask.

[1]: https://bugs.php.net/bug.php?id=59031
[2]: http://www.imagemagick.org/script/resources.php
[3]: https://wiki.ubuntu.com/StableReleaseUpdates

Changed in php7.0 (Ubuntu):
status: Confirmed → Opinion
tags: removed: server-next
Vasya Pupkin (shadowlmd) wrote :

Seriously. Can you test and see that, regardless of what the limits are, there is a memory leak issue? I have already stated many times how to trigger it and what exactly the cause is.

Joshua Powers (powersj) wrote :

@shadowlmd, after reviewing paelzer's comment #26 it is very clear that he made a significant effort to help resolve your report. He reproduced the root issue in a variety of ways and investigated whether this was a known upstream issue or whether a known upstream workaround existed. In this case the situation can be avoided and corrected via a configuration option.

For this specific issue I also agree that there is no action to take at this time. Later releases ship lower default limits, and changing the behavior in past releases is not an acceptable change for an SRU.

Vasya Pupkin (shadowlmd) wrote :

I appreciate your help, but you are missing the point. Christian was so concerned about the script going beyond memory_limit that he discarded everything else I researched about the memory leak issue I was reporting in the first place. The memory leak issue is still there. It is still an issue and it still has to be fixed.

Yes, a script can go beyond memory_limit because the plugin uses its own memory manager.
But when php kills a script, it should free all memory, and this is NOT HAPPENING under certain circumstances.

Vasya - sorry for unintentionally ignoring your posts in between.
I worked on this straight through this morning without refreshing the page, so I just didn't see them at all.
I was made aware now - I really beg your pardon!

At least out of my debugging I can now easily create a test pic which is perfect for causing this.

The TL;DR of what I was missing: it is not about consuming a lot (or not), but much more about why the memory is not freed when the execution time is exceeded.

I'll look into that tomorrow, and will refresh the page very often in between.

Seth Arnold (seth-arnold) wrote :

Realistically, the only way a persistent php 'controller' process can avoid this is to fork() a new process for every script execution, which it can kill via OS controls afterwards - or better yet, just use the OS-provided rlimits to control memory use, CPU use, etc.

As long as it uses a single process to run multiple scripts, it simply can't provide any realistic controls over what non-interpreted code does.

There may not be a happy solution here.

Thanks

Vasya Pupkin (shadowlmd) wrote :

Yep. But at least this particular bug can and should be fixed. I'm pretty sure that Imagick has some shutdown handler that frees allocated memory, and it works when the script is killed after newImage has created a huge image, for example. Memory is only leaked when the script is killed while thumbnailImage is processing an image, so thumbnailImage probably allocates memory using plain malloc, and Imagick's memory manager and/or shutdown handler doesn't know anything about it.

In other words, I am sure it is a bug in Imagick, not in php itself.

Changed in php7.0 (Ubuntu):
status: Opinion → Confirmed

After better understanding the issue, I was easily able to confirm it.
Sorry once more for the useless noise, Vasya.

When execution ends normally, I see a small memory footprint:
3392 www-data /usr/bin/php-cgi 4516 2884 6581 12460

When killed by "Maximum execution time", the big footprint stays:
3504 www-data /usr/bin/php-cgi 783068 4137636 4140345 4147292

I'm not sure if that was clear before: I also found that this happens only if the process is killed while IN the Imagick constructor. As soon as it leaves the constructor, the memory is freed and thereby not left around later.

I confirmed that up to the latest release, Zesty with php 7.0.15-1ubuntu4.

The next thing I tried was checking newer versions from [1].
There is currently the more recent combination:
- php-imagick 3.4.3~rc2-2
- php* 7.1.3+-3

Interestingly, with those the error does not trigger.

But while in the past the kill was like this:
  (mod_fastcgi.c.2543) FastCGI-stderr: PHP Fatal error: Maximum execution time of 60 seconds exceeded in /var/www/html/index.php on line 30

It now is like this:
  (mod_fastcgi.c.2424) unexpected end-of-file (perhaps the fastcgi process died): pid: 11582 socket: unix:/var/run/lighttpd/php.socket-0
  (mod_fastcgi.c.3175) response not received, request sent: 824 on socket: unix:/var/run/lighttpd/php.socket-0 for /index.php?, closing connection

=> With that, the php-cgi worker really dies: the pid is gone and a new php-cgi is respawned - thereby the huge allocation is naturally gone.

OTOH, if I let the process run longer and let it be killed by max_execution_time while back in my busy-loop php code, it is killed normally with a message like:
  (mod_fastcgi.c.2543) FastCGI-stderr: PHP Fatal error: Maximum execution time of 60 seconds exceeded in /var/www/html/index.php on line 31

That said, the newer versions already seem to have a change in their handling that avoids the issue: when execution is in a plugin and the timeout occurs, the whole process seems to get killed.
I have no idea whether that was meant for cleanup like we need here or for something else, but it at least makes future versions unaffected as soon as we move to them.

As discussed with my coworker, who does more with php & imagick, and since I'm soon out for a while, I'm passing this over to Nish.

[1]: https://launchpad.net/~ondrej/+archive/ubuntu/php

tags: added: server-next
Changed in php7.0 (Ubuntu):
assignee: nobody → Nish Aravamudan (nacc)

For Nish, the minimum repro really is only:

# create test pic
char="y"; sz=16384; convert -size ${sz}x${sz} xc:white ${char}.png; ll -h ${char}.png

# Load and kill while that
<?php
    header("Content-Type: text/plain");
    ini_set('max_execution_time', 10);
    $image = new \Imagick(realpath('/usr/share/backgrounds/y.png'));
?>

I tested 7.0.17 as requested, but 7.0.17-2+deb.sury.org~xenial+1 did not change it.

Nish Aravamudan (nacc) wrote :

Vasya,

Given Christian's great testing, this might be something we need to report to upstream PHP. Could you file a bug? It seems it has been fixed or worked around in 7.1.x but not in 7.0.x; it could be something they backport if we notify them it's an issue.

-Nish

Changed in php7.0 (Ubuntu):
importance: Undecided → High
Nish Aravamudan (nacc) wrote :

Upstream considers this (as of right now) to be a bug in imagick (php-imagick, I believe). I'm marking it as such and we'll see what resolution comes from upstream.

Changed in php7.0 (Ubuntu):
status: Confirmed → Invalid
assignee: Nish Aravamudan (nacc) → nobody
Changed in php-imagick (Ubuntu):
status: New → Confirmed
tags: removed: server-next