who command gets "who: memory exhausted" for certain inputs
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| GLibC | Fix Released | Medium | | |
| eglibc (Debian) | Fix Released | Unknown | | |
| eglibc (Ubuntu) | Fix Released | Undecided | Unassigned | |
| Lucid | Won't Fix | Medium | Adam Conrad | |
| Precise | Fix Released | Medium | Adam Conrad | |
| Quantal | Won't Fix | Medium | Adam Conrad | |
Bug Description
SRU Justification:
[Impact]
* When running who on certain input, invalid multibyte characters trigger a bug in eglibc's vfprintf, causing memory allocation to fail and who to abort with 'memory exhausted'.
[Test Case]
* Download wtmp.clean.
* locale-gen en_US.UTF-8 # if necessary
* LANG=en_US.UTF-8 who wtmp.clean
* If you see 'who: memory exhausted' the test failed.
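The test-case steps above can be wrapped in a small script (a sketch: it assumes the wtmp.clean file attached to this bug has already been downloaded into the current directory, and skips the check if it is absent; generating the locale may require root):

```shell
#!/bin/sh
# Reproducer for the 'who: memory exhausted' failure.
# Assumes wtmp.clean (attached to this bug) is in the current directory.

if [ ! -f wtmp.clean ]; then
    echo "wtmp.clean not found; skipping"
    exit 0
fi

# Generate the en_US.UTF-8 locale if it is missing (needs root).
locale -a | grep -qi 'en_US\.utf-\?8' || locale-gen en_US.UTF-8

# The bug manifests as 'who: memory exhausted' on the affected eglibc.
if LANG=en_US.UTF-8 who wtmp.clean 2>&1 | grep -q 'memory exhausted'; then
    echo "FAIL: who reported 'memory exhausted'"
    exit 1
fi
echo "PASS"
```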
[Regression Potential]
* This patch reverts a change that fixes the issue in BZ #6530.
* The patch also adds a test case to check for handling of incomplete multi-byte characters.
* upstream patch URL: http://
--
* Description:
When running who with the attached file we get an error of "who: memory exhausted".
$ who wtmp.clean
This works fine with newer versions of eglibc. I was able to determine that coreutils is not at fault
by running the precise build of coreutils against the raring eglibc; the problem went away.
In addition, I compiled the precise eglibc on raring, and the problem is not present there either.
* Versions affected:
This affects current Lucid, Oneiric, Precise and Quantal eglibc versions.
2.11.1-0ubuntu7.12
2.13-20ubuntu5.3
2.15-0ubuntu10.4
2.15-0ubuntu20.1
But it does not affect the Raring eglibc (2.17-0ubuntu1).
affects: eglibc → glibc
description: updated
Changed in eglibc (Ubuntu Lucid):
  importance: Undecided → Medium
  status: New → In Progress
Changed in glibc:
  importance: Unknown → Medium
  status: Unknown → Fix Released
Changed in eglibc (Debian):
  status: Unknown → Fix Released
Changed in eglibc (Ubuntu Lucid):
  status: In Progress → Won't Fix
With the following testcase, it happens while it shouldn't, according to
the manual:
-----8<-------
#include <stdio.h>
#include <locale.h>

#define STR "²éľÂíɱ ²¡¶¾£¬ÖܺèµtÄúµ Ä360²»× ¨Òµ£¡"

int main(void) {
    char buf[200];

    setlocale(LC_ALL, "");
    printf("%d\n", snprintf(buf, 150, "%.50s", STR));
    return 0;
}
----->8-------
The manual page has this to say:
About precision:
An optional precision, in the form of a period (‘.’) followed by an
optional decimal digit string.(...) This gives (...) the maximum
number of characters to be printed from a string for s and S
conversions.
About s:
If no l modifier is present: The const char * argument is expected to
be a pointer to an array of character type(...)
If an l modifier is present: The const wchar_t * argument is expected
to be a pointer to an array of wide characters. Wide characters from
the array are converted to multibyte characters (...)
There is no "l" modifier, but still, the string goes through the
multibyte conversion code, and fails because the string is invalid
multibyte.
Note, it only works with non UTF-8 locale set in LC_CTYPE or LC_ALL.
This is Debian bug http://bugs.debian.org/208308