enblend fails to blend large pano

Reported by rew on 2010-12-04
This bug affects 10 people
Affects Status Importance Assigned to Milestone
Enblend
Medium
Unassigned

Bug Description

Enblend failed with:

enblend --compression=LZW -m 1200 -w -f135659x3947+0+1091 -o pano8110_l.tif ....
...
enblend: info: loading next image: pano8110_l0000.tif 1/1
enblend: out of memory
enblend: std::bad_alloc

This is a simple 0.5Gpixel panorama I shot. And agreed, Hugin did warn me that it might take a lot of memory.

The thing is: There is no other tool to stitch this with, so I'll have to make do with hugin and its toolset....
I thought there was an "imagecache" that would swap parts of images to disk...

rew (r-e-wolff) on 2010-12-04
tags: added: malloc
removed: alloc
rew (r-e-wolff) wrote :

It is now blending the 57th image on my 64-bit machine, with 1Gb of memory allocated. This should have been entirely possible on my 32-bit machine too. The 64-bit machine might be running enblend 3.2, in which case it would be a regression in 4.0 that it can't handle the larger panos anymore.

OK. Testing... Yes.... On my 32-bit machine the blend gets much further with enblend 3.2 than with the newer enblend 4.0 ....

rew (r-e-wolff) wrote :

Enblend version 3.2 ran out of memory after 65 images.

It seems that the "imagecache" option needs to be "on" during the build to make it start the big blend.

rew (r-e-wolff) wrote :

OK. I managed to stitch this project. The only way that works is: enblend 4.0 with no image cache on a 64-bit machine. Takes huge amounts of memory, but it works.

Changed in hugin:
status: New → Confirmed
rew (r-e-wolff) wrote :

We need to fix "imagecache" .

Is the imagecache also used in hugin? Then it might crash hugin as well!
For now: "affects enblend"

affects: hugin → enblend
rew (r-e-wolff) wrote :

Just found that this is the "master" bug, so I marked the other one, bug #685903, as a duplicate. My source images are downloadable:

http://prive.bitwizard.nl/dsc_3210-dsc_3254_exposure_layers_0018.tif
http://prive.bitwizard.nl/dsc_3210-dsc_3254_exposure_layers_0019.tif
http://prive.bitwizard.nl/dsc_3210-dsc_3254_exposure_layers_0020.tif

It happens with the current version of enblend/enfuse, just pulled from HG; the problem goes away if you disable imagecache.

Hugin thought up the commandline:
   enfuse -w -o dsc_3210-dsc_3254_stack_ldr_0005.tif dsc_3210-dsc_3254_exposure_layers_0018.tif dsc_3210-dsc_3254_exposure_layers_0019.tif dsc_3210-dsc_3254_exposure_layers_0020.tif
for me.

tags: added: imagecache
tags: added: enfuse
the_mechanical (mechanical) wrote :

Same problem as described above.

enblend: out of memory
enblend: std::bad_alloc
enblend: info: remove invalid output image "project.png"
make: *** [project.png] Fehler 1

enblend 4.1-5e7392eab8d3, compiled without Imagecache.
Ubuntu Natty, 64bit, 16GB Memory, about 4GB free on /, Project-folder about 45Gb free.

Memory usage got up to nearly 16GB (99,4%) and also about 100% SWAP usage (7GB) before enblend failed.

The suggested disabling of imagecache didn't help for me. I will try it WITH imagecache.

the_mechanical (mechanical) wrote :

Still no output (enblend built with imagecache)

make -j 4 -f pano.pto.mk NONA='nona -t 1' ENBLEND='enblend -m 6000'
...
enblend: info: loading next image: project0027.tif 1/1
make: *** [project.png] Getötet
make: *** Datei »project.png« wird gelöscht

Project consists of 190 pics, each 12MPx.

On Tue, Apr 05, 2011 at 08:38:43PM -0000, the_mechanical wrote:
> Still no output (enblend built with imagecache)
>
> make -j 4 -f pano.pto.mk NONA='nona -t 1' ENBLEND='enblend -m 6000'
> ...
> enblend: info: loading next image: project0027.tif 1/1
> make: *** [project.png] Getötet
> make: *** Datei »project.png« wird gelöscht
>
> Project consists of 190 pics, each 12MPx.

Can you compress the data enough that I can try a blend on my machine?
I'm thinking of scaling each image down by a factor of two, then
compressing each with jpeg at a low quality setting. I'll blow them
up again and attempt the stitch. If that works here, I'll instruct
you to try to stitch the blown-up version.

I can provide an FTP account if you need one.

You tried running without imagecache and it didn't die "without
reason": the OS refused a memory allocation because it couldn't find
any more memory for you to use. What happens if you:

 dd of=swapfile if=/dev/zero bs=1M count=16k
 mkswap swapfile
 swapon swapfile

Maybe the stitch simply needs more than 24Gb of memory?

 Roger.

--
** <email address hidden> ** http://www.BitWizard.nl/ ** +31-15-2600998 **
** Delftechpark 26 2628 XH Delft, The Netherlands. KVK: 27239233 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
Q: It doesn't work. A: Look buddy, doesn't work is an ambiguous statement.
Does it sit on the couch all day? Is it unemployed? Please be specific!
Define 'it' and what it isn't doing. --------- Adapted from lxrbot FAQ

the_mechanical (mechanical) wrote :

Thanks for your reply.
Finally I did it (enblend did it ;-) ).

BEFORE:
* Due to 16GB memory and an SSD, I used a RAM-disk for /tmp.
* Due to the SSD's limited space (the big ones are too expensive for me), only a little swap space on a "normal" hard drive.

AFTER (means solution which worked for me):
* No RAM-disk.
* Freed up space on the "normal" hard drive and used it as swap (now ~30GB swap).
* Bind-mounted a folder on the project partition to /tmp, so /tmp is as big as the free part of the project partition.
* Compiled enblend without imagecache (as at the beginning).

So I got the result. Not correct, but that's another story *gg* (fine-tuning of control points necessary).
The output PNG is about 900MB.
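The before/after recipe can be sketched as shell commands (an illustration only: the paths are hypothetical examples, fallocate-backed swap files need a filesystem such as ext4, and everything here needs root):

```shell
# Large swap file on the spinning disk instead of the small SSD swap.
sudo fallocate -l 30G /mnt/hdd/swapfile
sudo chmod 600 /mnt/hdd/swapfile
sudo mkswap /mnt/hdd/swapfile
sudo swapon /mnt/hdd/swapfile

# Give /tmp the free space of the project partition (no RAM-disk).
sudo mkdir -p /mnt/project/tmp
sudo mount --bind /mnt/project/tmp /tmp
```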

rew (r-e-wolff) wrote :

So: yes, it blends when you run it without imagecache and have enough
swap space available. According to your earlier message, enblend
crashed without any stated reason: make just output "enblend was
killed". Probably "segmentation fault" is somewhere in there, but lost....

I personally stitch at one third or one tenth of the final resolution
first to check if I have my control points right. In 99% of the cases
you can already see problems there. And the blends go 10 - 100x
faster.

kaefert (kaefert) wrote :

I'm affected by this bug too (at least that's what it seems like to me).
I'm using Ubuntu 13.04 and all packages are out of the default repositories.
I have 10GB RAM and 20GB swap space, but somehow the swap didn't get filled more than 10% when this error happened (which seems kind of strange to me).
Before, I used LinuxMint14 (~Ubuntu 12.10) with 10GB of RAM and 10GB of swap, and enblend got much further, only crashing with this error around picture nr. 280 (there are 330 in total).

This is my output in the hugin logfile:

...
enblend: info: loading next image: name0226.tif 1/1
enblend: info: loading next image: name0227.tif 1/1

enblend: out of memory
enblend: std::bad_alloc
enblend: info: remove invalid output image "name.tif"
make: *** [name.tif] Error 1

So as I understand it, the workaround for this bug is to compile enblend yourself and set some "imagecache" option to false or something? Could someone guide me through how to do this?

rew (r-e-wolff) wrote :

For you, I'd think it is easiest to do:

 apt-get source enblend
 apt-get build-dep enblend
 cd enblend-<version>
 dpkg-buildpackage

Now you've rebuilt your distro's version. You now have a source directory that you know compiles and that you can tune to your liking.

kaefert (kaefert) wrote :

Okey, so this is what I came up with:

http://sourceforge.net/projects/enblend/files/enblend-enfuse/enblend-enfuse-4.1/
http://sourceforge.net/projects/enblend/files/enblend-enfuse/enblend-enfuse-4.1/enblend-enfuse-4.1.1.tar.gz/download
<-- download, extract, and open terminal in extracted folder, then run this:

sudo apt-get install build-essential libtiff5-dev liblcms2-dev libvigraimpex-dev libboost-dev libboost-system-dev libgsl0-dev help2man
./configure --enable-image-cache=no
make
sudo make install

I will tell you if the resulting enblend works better than the one from the ubuntu repo in a few days ;)

kaefert (kaefert) wrote :

oh, damn, I didn't see your response. I thought I would get subscribed to the bug automatically when commenting on it and pressing the "affects me too" button, but apparently not (any more?)..
Where would that source directory land if I used the code you posted?

rew (r-e-wolff) wrote :

Actually, I usually do something like:

    mkdir enblend
    cd enblend
    apt-get source ....

because apt-get creates a bit of a mess. Anyway, no, it's not in a system directory somewhere; it ends up under your current directory.

kaefert (kaefert) wrote :

Okey, so I'm home again. My self-built enblend 4.1.1 failed really fast, again with the same error:

enblend: out of memory
enblend: std::bad_alloc

But my dstat output from that time shows that the memory was nearly empty...
http://pastebin.com/raw.php?i=B9sjQaK9

So I'm gonna try compiling it your way now, will report back when there's news.

kaefert (kaefert) wrote :

btw. --> is there a way to skip the nona part and directly call enblend only?
I found this name.pto.mk file and thought it might be a shell script, but running it like one didn't work out that well.

rew (r-e-wolff) wrote :

It's a makefile.
In Hugin you can select the button "Keep intermediate files". Then: enblend yourprojectname00*.tif -o yourprojectname.tif should be the enblend command line.

kaefert (kaefert) wrote :

Okey, thanks, I will use that for my continued tests; it will make things quicker, since the nona step also takes around an hour every time..

So when I compile enblend (both the version 4.1.1 downloaded from sourceforge and the version 4.0 from the ubuntu repos) with the option --enable-image-cache=no, it fails really fast (with the same memory error) - even before it prints the first
"enblend: info: loading next image:" line. Memory usage doesn't spike when this happens, so no idea how it manages to produce an out-of-memory exception in this case.

Now I compiled 4.1.1 with the default options. This failed after 7 1/2 hours (the last nona tiff got a timestamp of 0:44, the crash happened at 8:15).

And with this one the memory usage does spike, but the SWAP space still was only half filled (9,7GB used out of 19,94GiB available).
Here's the story with commented dstat output:
http://pastebin.com/raw.php?i=7AATfsXz

rew (r-e-wolff) wrote :

Is your "userspace" 32-bit or 64-bit?

A 32-bit userspace program will run out of addressable memory at 3G. So that's unlikely to exhaust your swap (in fact it's unlikely to even start filling your swap). But just to be sure....
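Two quick ways to check, as a sketch (assumes a glibc system; enblend may not be on your PATH, hence the guard):

```shell
# Word size of the default userspace, and of the enblend binary itself.
getconf LONG_BIT                           # prints 32 or 64
bin=$(command -v enblend) && file "$bin"   # "ELF 32-bit ..." vs "ELF 64-bit ..."
```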

To allow developers to work with your images, are you willing to upload them somewhere? You can reduce the source images to, say, 1/2 or 1/4th of their original resolution, and save them as low-quality JPGs. Then even 330 images will end up at a reasonable size if you pack them with the hugin project file in a zip.

To reduce the images in size:
  mkdir small
  for i in *.jpg ; do
     djpeg $i | pnmscale 0.25 | cjpeg -quality 70 > small/$i
  done
I reduced a set of pictures by a factor of 64 this way. Your set of 330 images should shrink from 1-2 Gb to 15-30Mb. Easily transferable via mytransfer or something like that (you know how to use pastebin). I can put it up on a webserver somewhere if you want/allow me.

kaefert (kaefert) wrote :

I think I should be running 64bit

kaefert@ultrablech:~$ uname -a
Linux ultrablech 3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

I just found I was telling you wrong stuff: my total picture count of this set is not 330 but 233 (so actually I was damn close to finishing the panorama the one time it failed after loading picture nr. 227).

All 233 jpgs together have a size of 464,6MB - if you want, I'll put them on my webserver like that, or if downloading for a few hours is too much I can use your reducing script first.

This is my detailed enblend version info of the precompiled package from my distro (ubuntu 13.04)
http://pastebin.com/SLMtrHHy
This one worked "best" so far, meaning it gave me the most
enblend: info: loading next image: name0number.tif 1/1
lines before failing.

rew (r-e-wolff) wrote :

Downloading for hours? Haven't seen that happen for quite a while. Would probably download in one minute here... :-)
(download speeds of 11 megabytes per second have been known to occur... :-) At those speeds, it mostly depends on the server.... ) ("Your upgrade may take a few hours. Please be patient while we download the archives. Downloading will take about 20 seconds with your connection.").

kaefert (kaefert) wrote :

well, I'm not at home, therefore got a painfully slow connection here, but you can download the 15MB reduced files here:
http://kaefert.is-a-geek.org/misc/hugin/360degree233pics_enblend-failure/small/

The first few of the original files are in the parent directory, but the upload from here to my NAS at home runs at 23kb/s - so it will be finished in 5 hours... (It would probably be faster to drive home, transfer it via my wifi, and drive back here, but that would cost a lot of gas and therefore money ;) )

rew (r-e-wolff) wrote :

You scaled by 0.25, correct? Then I can scale them back up to normal size, which, under most circumstances, is very similar to using the original files.

I also need the hugin project file....

kaefert (kaefert) wrote :

yep, correct (I used exactly your code, the only change I made was from *.jpg to *.JPG)
I've put the project file and a few logfiles in the subdirectory "stuff"
I've turned on directory listing for this directory, so you can just navigate there.

kaefert (kaefert) wrote :

What is the maximum size allowed for tiff images? Are there other possible output formats (that I can edit with gimp afterwards)?

When calling enblend like this:
enblend --verbose=99 -x --output=output.tif projectname*.tif

I get this output after a minute:

enblend: info: output image size: [(4711, 1044) to (278706, 9570) = (273995x8526)]
enblend: info: loading next image: projectname0000.tif 1/1
enblend: info: assembled images bounding box: [(0, 1707) to (3135, 5788) = (3135x4081)]
enblend: error: Maximum TIFF file size exceeded

enblend: an exception occured
enblend:
Postcondition violation!
exportImage(): Unable to write TIFF data.

enblend: info: remove invalid output image "output.tif"

Or does the -x parameter do something else than I expected it to do?

Jeff Hungerford (hungerf3) wrote :

Depending on the version of the TIFF format that is used, the limit is either 4GB or 18,000 PB.

If you are seeing that error, either the TIFF library you linked against doesn't support bigTIFF, or for some reason it decided to only use the old format.

rew (r-e-wolff) wrote :

Your image is 2.3Gpixel. At 3 bytes per pixel, that would come to 7Gbyte, but it's probably 4 bytes per pixel because it has an alpha channel.
If you enable compression, chances are you might get lucky with an image size slightly below the 4G limit.
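As a back-of-the-envelope sketch (dimensions taken from the log above; 4 bytes per pixel is the RGB-plus-alpha assumption):

```shell
# Estimate the uncompressed output size against the classic TIFF 4 GiB cap.
width=273995 height=8526
pixels=$((width * height))
bytes=$((pixels * 4))
limit=$((4 * 1024 * 1024 * 1024))
echo "pixels=$pixels, uncompressed bytes=$bytes, limit=$limit"
if [ "$bytes" -gt "$limit" ]; then
    echo "over the classic TIFF limit: needs BigTIFF or heavy compression"
fi
```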

kaefert (kaefert) wrote :

How can I make enblend use the right TIFF library that will allow TIFF images bigger than 4GB? Or how can I make enblend tell the tiff library to use this BigTIFF format?

I've added a compressed logfile of my enblend error with verbose output to my "stuff" folder -->
http://kaefert.is-a-geek.org/misc/hugin/360degree233pics_enblend-failure/stuff/long-run_verbose-output.txt.7z

The last debug lines before the memory error were:

enblend: info: loading next image: projectname0226.tif 1/1
enblend: info: assembled images bounding box: [(263273, 3602) to (266269, 7581) = (2996x3979)]
enblend: info: image cache statistics after loading white image
enblend: info: blackImage 0x14072f0: cache misses=0 blocks allocated=0 blocks required=4263
enblend: info: blackAlpha 0x1406ae0: cache misses=0 blocks allocated=0 blocks required=2132
enblend: info: whiteImage 0x1468c80: cache misses=15778 blocks allocated=0 blocks required=4263
enblend: info: whiteAlpha 0x1469fa0: cache misses=17569 blocks allocated=512 blocks required=2132
enblend: info: summary: cache misses=33347 blocks managed=512 allocated=512 free=0
enblend: info: image union bounding box: [(0, 0) to (266269, 8442) = (266269x8442)]
enblend: info: image intersection bounding box: [(263273, 3602) to (263952, 7581) = (679x3979)]
enblend: info: estimated space required for mask generation: 20231MB
enblend: info: creating blend mask: 1/3 2/3 3/3

Inferring from the output after loading picture nr. 255 and before that, the next lines should have been:

enblend: info: creating blend mask: 1/3 2/3 3/3
enblend: info: optimizing 1 distinct seam
enblend: info: strategy 1, s0:
enblend: info: t = 1.085e+05, eta = 6, k_max = 32, 157 of 296 points converged
enblend: info: t = 8.135e+04, eta = 6, k_max = 32, 157 of 296 points converged
enblend: info: t = 6.101e+04, eta = 6, k_max = 32, 157 of 296 points converged
...

Maybe some hugin / enblend developer can make some sense out of this information..

Christoph Spiel (cspiel) wrote :

        OK -- a few words from your chief
maintainer of Enblend/Enfuse.

1. ImageCache

        Any ImageCache support has been removed
from the development branches of Enblend and
Enfuse. It won't come back unless some brave,
brilliant, ... hacker steps up and takes
responsibility for the code. Thus, I consider
comments on whether to use or to avoid the
ImageCache as futile. Virtually, only the
non-ImageCache (inherently fast and easily
parallelizable) version exists.

2. Image-Size Limitations of Enblend and Enfuse

        In brief: there are several. Some
restrictions exist because we are a bit too
generous in our use of memory, others are
enforced upon us by third-party libraries, most
importantly Vigra.

Rule-of-Thumb: Neither Enblend nor Enfuse can
    handle any image larger than 2**32 - 1
    pixels. For geeks this is INT_MAX or
    std::numeric_limits<int>::max().

  * The type of image (8-bit, 16-bit, b&w,
    color, w/alpha, etc.) does not matter here,
    only the pixel count.

  * The limit applies to any of the input
    images, all internally-held, transient
    images, and of course the output image. As
    a programmer would put it: Vigra sometimes
    substitutes type `int' for type `ptrdiff_t'
    and that bleeds through.

  * Working with a 64-bit o/s does not
    automatically help, for many compilers
    encode `int' with 32 bits to improve
    performance; they only use 64 bits for type
    `long int'. Just compiling Enblend or
    Enfuse with a C++-compiler that uses 64 bits
    for an `int' on such systems, breaks the
    ABI, i.e., the resulting objects won't link
    against any of the support libraries.

kaefert (kaefert) wrote :

Thanks Christoph for the detailed response!

But I don't think that I've hit the MAX_INT Pixel limit with my Panoramas.
I'm trying to create 360 degree panoramas from pictures taken with my Canon Powershot SX280HS, which has a 20x zoom lens.

A panorama covering about 90 degrees of the sky, which I was able to successfully stitch together with hugin 2011.4.0.cf9be9344356 (included in the Ubuntu 13.04 repos), resulted in a tif file with the dimensions 91119x3771:
4294967295 = 2^32-1
0343609749 = 91119*3771
1374438996 = 91119*3771*4
Multiplied by four (90*4 = 360 degrees), this still leaves a lot of space before it could hit INT_MAX pixels.

If I could make a wish it would be a way to be able to have the zoom levels that I've got in the jpgs linked from within this panorama but directly within the SaladoPlayer.
http://kaefert.is-a-geek.org/SaladoPlayer-1.3.5/

Another thing that would be great for that purpose would be the ability to combine pictures of different zoom levels into a panorama with hugin.

Another problem: with hugin 2012 the assistant finds a lot of wrong control points between pictures that don't overlap. Is there some way to configure the assistant to only search for control points within neighboring pictures, as the 2011 hugin seems to do?

Christoph Spiel (cspiel) wrote :

        Oops! Sorry, I was off-by-one in the
exponent. 8-/ The largest _signed_ integer
(`int') typically is 2**31 - 1, not 2**32 - 1.
The latter is correct for _unsigned_ integers
(`unsigned int'). However, your calculation
still holds even with the half-as-large limit.
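That check can be redone in the shell against the corrected signed limit (the dimensions are the 91119x3771 example from this thread):

```shell
# Compare a planned panorama's pixel count against INT_MAX = 2**31 - 1.
width=91119 height=3771
int_max=$(( (1 << 31) - 1 ))
pixels=$((width * height))
if [ "$pixels" -le "$int_max" ]; then
    echo "ok: $pixels pixels <= $int_max"
else
    echo "too large: $pixels pixels > $int_max"
fi
```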

> Another thing that would be great for that
> purpose was if I could combine pictures of
> different zoom levels into a panorama with
> hugin.

        Enblend will always produce an image
(often called "panorama") at exactly one
resolution. Currently, no provisions are
available to generate data for multi-resolution
panorama viewers. A common way to circumvent
this limitation is to generate a panorama at the
highest possible resolution and feed it into a
3D, VR, or whatever multi-resolution format
generator, which -- again typically -- has
compatible viewer applications as side-kicks or
vice versa.

        With respect to the problem of combining
images at different resolutions into one
panorama. This is possible in the way that you
could upscale all of your "wide-angle" shots
until their resolution matches the one of your
most extreme telephoto image. Now you cut holes
with some overlap (for control-points, alignment
and for Enblend to blend the seam away) into the
wide-angle shots where the telephoto ones sit.
Otherwise Enblend complains that your telephoto
images do not add new content. Finally, you
blend all images together, where you pay
particular attention to the images' order.
"Wide-angle" shots -- the frames -- go first,
telephoto ones -- the pot-hole fillers -- go
last.

I have never created a panorama this way, but we
have a standard test case for Enblend comprising
a pair of images, where the first image has a
hole and the second one fills it with generous
overlap. This test works perfectly well.
Maybe, Bruno Postle wants to chime in here, as
he often experiments with the boundaries of
Enblend.

> With hugin 2012 the assistant finds a lot of
> wrong control points between pictures that
> don't overlap. ...

        This question ought to go to the Hugin
newsgroup. It is misplaced here.

rew (r-e-wolff) wrote :

You don't need to scale images for them to be accepted into hugin.
Just make sure that hugin knows that it was taken with a different "lens".

kaefert (kaefert) wrote :

Okey guys! First I want to thank you all for the useful information you gave me, especially @rew for the thing with the different "lenses".

And secondly, if anybody is still interested in debugging the issue of this bug entry: today enblend failed on me again. I've been trying to create a panorama with the dimensions 140000x3791 (or 140000x7858 before the crop configured in hugin, if that matters) and enblend failed with the same out-of-memory error again.
I've put the project and the compressed intermediate tif files here:
http://kaefert.is-a-geek.org/misc/hugin/360degree233pics_enblend-failure/2013-10-10_2258_140k_enblend-failed/
The original jpgs can still be found in the parent folder, directory listing is activated so you can navigate there.

kaefert (kaefert) wrote :

Okey, so I'm still not ready to let this one go ;)
I've followed this guide: http://wiki.panotools.org/Hugin_Compiling_Ubuntu
To compile and install those on my machine:
libpano13 2.9.19-789hg
enblend 4.2-7bcf8a1e6b3d
hugin 2013.0.0.6337

Now with this setup, trying to create a panorama with more than 2**31 pixels will fail within minutes, and since Christoph said that this is simply not possible, I'm gonna let that one go. Though why someone would address pixels with a signed integer type still puzzles me...

So I've settled on trying to create a panorama of size 240.000 x 8.292 = 1.990.080.000 pixels, which is smaller than 2^31 = 2.147.483.648.

My first try failed after a few hours when trying to enblend picture 66 of 229 - with an out-of-memory exception. Though unlike with previous "memory exceptions", this time the memory & swap space truly were completely full (10GB RAM + 20GB swap). So I've made my swap partition a little bigger (100GB), and now enblend has been running for a little over 24 hours and is currently enblending picture 140 of 229; the biggest memory usage I saw until now was 10GB RAM + 39GB swap = 49GB total.

Another problem I've got is the introduction of artefacts by enblend.
With the self-compiled enblend version specified above I've created two panoramas so far - one 100.000 x 3.455 pixels, and one 130.000 x 4.492 pixels - and both show artefacts like these:
http://kaefert.is-a-geek.org/misc/hugin/360degree233pics_enblend-failure/artifacts-enblend-4-2/
though the bigger version has more of them, and more severe ones.

With the default Ubuntu 13.04 enblend package 4.0+dfsg-4ubuntu3 I've tried a lot of different sizes; the biggest one without artefacts was 86.000 x 2.881 pixels, and the biggest one that did not fail to enblend was 135.000 x 7.191. The artefacts that this old enblend version introduced into the panorama were hundreds of 1-pixel-thick lines that were black, dark gray, or sometimes nearly invisible because they were almost the same color as the panorama at that segment. The bigger the panorama, the more of those lines were introduced.

kaefert (kaefert) wrote :

okey, so the enblending of my 240.000 x 8.292 pixel panorama succeeded. The resulting tiff file is a little under 2GB and sadly does contain lots of those http://kaefert.is-a-geek.org/misc/hugin/360degree233pics_enblend-failure/artifacts-enblend-4-2/ artefacts. I think this artefact problem might be related to this bug: https://bugs.launchpad.net/enblend/+bug/721136

Though I do not understand what @rew (r-e-wolff) said about disk swapping of images --> is this the same as this imagecache feature, and does it still exist in the latest versions of enblend? From what you guys posted I thought it was a discontinued feature, but from what I read over at https://bugs.launchpad.net/enblend/+bug/721136 it sounds like my blackish artefacts stem from a bug in this imagecache thingy.

kaefert (kaefert) wrote :

btw., some info that might be useful for other users or maybe also for the developers:
The sources of this panorama are 229 pictures of 3000x4000 pixels.
The highest amount of memory used by enblend was 10GB RAM + 42GB swap space.
The enblend process took around 50 hours to finish stitching this panorama on my Asus Zenbook UX32VD (Intel i7-3517U @ 1.90GHz, dual core; the swap disk was a 100GB partition on my Transcend SSD320 256GB).

rew (r-e-wolff) wrote :

Yes, I've found that these things are related to imagecache. On the other hand, yes, I too have heard that it has been removed. But are you running such a version? Can you check if you can make the program show its configuration? (-V? -v?) (I don't know the command-line options by heart.)

kaefert (kaefert) wrote :

I couldn't find a way to do that; here is the output of --version and --help:
http://pastebin.com/2n4YH8F4

kaefert (kaefert) wrote :

The enblend I'm running is compiled from the sources I've got on tuesday from
hg clone http://hg.code.sf.net/p/enblend/code enblend.hg
following those instructions:
http://wiki.panotools.org/Hugin_Compiling_Ubuntu#Building_Enblend

and @ this page:
http://hg.code.sf.net/p/enblend/code/shortlog/27edd6c31bd2
I found those changesets:
Sun, 09 Dec 2012 09:55:26 +0100 Chris Disable image cache.
Sun, 09 Dec 2012 09:54:35 +0100 Chris Make the "disable ImageCache" configuration the default.

so I think my enblend should not use this ImageCache feature. Also if it did, why would it use 52 gigabytes of memory?

kaefert (kaefert) wrote :

Okey, so I've worked around that artefact problem
http://kaefert.is-a-geek.org/misc/hugin/360degree233pics_enblend-failure/artifacts-enblend-4-2/

by using gimp to cut out transperant holes where those artefacts where and running enblend a second time with this holey panorama and those few pictures produced by nona to fill those holes as input.

Christoph Spiel (cspiel) wrote :

THX kaefert for trying out Enblend on large panos. You have
convinced me that we can close this issue, because "enblend
fails to blend large pano" is not a bug in the program. You
proved that it is plain user incompetence, using wanna-be O/Ss,
or, in summary, nothing we can fix.

The sizes of more than 236MPixels and more than 926MPixels
respectively that you reported for your successful, final panos
are well inside our design goals.

- The "black line" artifacts are gone w/non-ImageCache versions.
  Issue #989908 may be unaffected, though.
- The "black hole" artifacts are covered by issue #721136.

I wish we had a "not a bug" status in LP. I'd prefer "Invalid" after
all of your investigations.

kaefert (kaefert) wrote :

@Christoph you're welcome.

Yes, if you define panoramas with more than 2^31 pixels as out of scope of what you are trying to achieve with hugin, I think you can change the status of this bug to "fixed" in enblend 4.2

I don't think "Invalid" would be the proper state, since with enblend 4.0 (at least the one included in ubuntu) you truly can't enblend bigger panoramas even far below 2^31 pixels, so this bug truly existed in enblend 4.0.

My biggest issue at the moment that I have not found any workaround yet is
https://bugs.launchpad.net/enblend/+bug/1193872
It would be great if someone could take a look at that.

Christoph Spiel (cspiel) on 2013-11-13
Changed in enblend:
status: Confirmed → Fix Committed