cannot allocate memory

Bug #44676 reported by Mark on 2006-05-14
Affects: afbackup (Ubuntu)

Bug Description

Binary package hint: afbackup

No clients are able to connect to the afbackup server. When I try with telnet, the server responds with the error message "cannot allocate memory", followed by other messages such as "cannot read file xy".

I've searched for this error message in afbackup's mailing list archive and found a similar problem from November 2004. This is an answer to that post:

I ran into the same problem on a Debian testing/unstable system. Version 3.3.8-1 gave me this "cannot allocate memory" message. So I downgraded to 3.3.6, which wrote " must be installed for pthread_cancel to work" to the server log.
After upgrading libgcc1 to unstable, things work fine again.
You may want to check whether your gcc support lib is up to date.

This is the link:

I am using an up-to-date Dapper Drake.


Changed in afbackup:
assignee: nobody → motu
Daniel T Chen (crimsun) wrote:

Is this symptom reproducible in 8.10 alpha?

Changed in afbackup:
status: New → Incomplete

I've not tried building on 8.10 (I might try it, as I already have a server built with Intrepid).

Having said that, the problem is caused at run time by a failure in get_hostnamestr() in inetutils.c, which is meant to return a string.
There is a section in its definition, starting with #ifdef HAVE_IP6, which is missing from the compiled code because HAVE_IP6 is not defined properly on some platforms (see the next paragraph).
The net result is that an else clause gets executed and the function returns NULL.
This is used in server.c's main() around line 4060, where there is a function call:
  EM__(remotehost = get_connected_peername(commrfd))
which indirectly calls get_hostnamestr().
The EM__ macro misinterprets the remotehost == NULL result and misleadingly reports an out-of-memory error.
This should be improved: the macro shouldn't be used in this context.

Having said that, the issue is actually triggered by HAVE_IP6 not being defined.
On my 64-bit Hardy system, strace shows the getpeername() library call as follows:
getpeername({sa_family=AF_INET6, sin6_port=htons(55263), inet_pton(AF_INET6, "::ffff:", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
As you can see, it is using the AF_INET6 family. This is what the missing piece of code mentioned above is meant to deal with.

Why is HAVE_IP6 not defined?
Because at build time "configure" fails to detect a running IPv6 stack. It fails with:

configure:5450: cc -o conftest -g -O2 -g -Wall -O2 -Wl,-Bsymbolic-functions conftest.c -lpthread 1>&5
configure: In function 'sigh':
configure:5441: warning: implicit declaration of function 'exit'
configure:5441: warning: incompatible implicit declaration of built-in function 'exit'
configure: In function 'main':
configure:5446: warning: implicit declaration of function 'getmntent'
configure:5532: checking for IPv6 implementation
configure:5546: cc -o conftest -g -O2 -g -Wall -O2 -Wl,-Bsymbolic-functions conftest.c -lpthread 1>&5
In file included from /usr/include/netinet/in.h:24,
                 from configure:5539:
/usr/include/stdint.h:49: error: duplicate 'unsigned'
/usr/include/stdint.h:49: error: two or more data types in declaration specifiers
/usr/include/stdint.h:50: error: duplicate 'unsigned'
/usr/include/stdint.h:50: error: duplicate 'short'
/usr/include/stdint.h:52: error: duplicate 'unsigned'
configure: In function 'main':
configure:5542: warning: unused variable 'sock6'
configure:5542: warning: unused variable 'in6'
configure: failed program was:
#line 5537 "configure"
#include "confdefs.h"

#include <netinet/in.h>

int main() {
 struct in6_addr in6; struct sockaddr_in6 sock6;
; return 0; }

This has nothing to do with the IPv6 test being performed but is (my guess) due to a compiler issue (it is using gcc 4.2.3).
Strangely, compiling the same code outside configure on the command line does NOT generate the same error.

I'm not sure how we fix it properly. Maybe someone else knows the answer to that....

As a quick interim fix you could switch off IPv6 on your PC (a little severe). That would stop the system from going through the bad piece of code.

I've temporarily fixed it by tweaking configure to always set ac_cv_struct_in6_addr=yes
whether or not the test fai...


It's not a compiler issue: the "duplicate" errors come from confdefs.h.
Explicit tests are done earlier in configure, and these add
#define uint32_t unsigned
#define uint16_t unsigned short
#define uint8_t unsigned char
but I am not sure why these were added in preference to #including <stdint.h> in the appropriate place(s).
Resolving this aspect is the correct way to fix the problem.

Further investigation shows it is an out-of-date (well, inappropriate for current Ubuntu) configure script. Although the autoconf input is present with the source code, the build system isn't recreating configure but is using the existing one instead.

In passing I note that it is using deprecated macros, e.g.
AC_CHECK_TYPE(uint8_t, unsigned char)
rather than the modern replacement.
For Ubuntu you can get round these problems by rebuilding configure and then recreating the package.

There may be a way to force this automatically with apt, but the steps to rebuild configure manually are:
download the source and build dependencies using
apt-get build-dep afbackup
apt-get source afbackup
then unpack the source, cd into the directory, and delete configure (and config.cache and config.setup, if present), then
run autoconf

Autoconf spots the old macros and either replaces them or uses them as-is, but at least it generates a better code fragment.

You can then rebuild the package. From the parent directory run
apt-get -b source afbackup
(or, if you prefer, dpkg-buildpackage -rfakeroot -uc -b from within the directory). Then install it with
dpkg -i afbackup_3.5.1pl2-3_amd64.deb

I will pass these observations upstream to the authors of afbackup as they may wish to re-publish configure or change the macros. Equally they may wish to change the way the code handles the error condition that gave the out of memory error.


Thank you for posting this bug.

Is this an issue in Lucid?

I will have to check on a different machine. The server I am using it on at present is still running karmic.
