gcvt and qgcvt do not always provide requested precision
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
glibc (Ubuntu) | Won't Fix | Undecided | Unassigned |
Bug Description
On Ubuntu 20.04.1 LTS, glibc-2.31:
gcvt() will output no more than 17 digits of precision.
qgcvt() will output no more than 21 digits of precision.
Here is the demo:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char ebuf[80];
    gcvt(0.1, 55, ebuf);
    printf("%s\n", ebuf);
    qgcvt(0.1L, 67, ebuf);
    printf("%s\n", ebuf);
    return 0;
}
I got:
0.10000000000000001
0.1000000000000
I expected:
0.1000000000000
0.1000000000000
The "expected" values are exact base 10 representations of the values contained in the double 0.1, and in the (80-bit extended precision) long double 0.1.
The same problem existed on Ubuntu 18.04, so I expect it is a longstanding issue.
Cheers,
Rob
The first result is a slightly less accurate, and the second a slightly more accurate, representation of 0.1 compared to the expected value, thanks to rounding.
IMO it makes sense to omit the digits that fall below the accuracy limit, and the man page does not suggest otherwise.
If you firmly believe that glibc needs to be fixed in this respect, please report the issue upstream, because this is not a deliberate change in the Ubuntu packaging.
Out of curiosity I've checked the results on Fedora and they are the same:
[root@fedora ~]# cat > test.c
#include <stdio.h>
#include <stdlib.h>
int main(void) {
char ebuf[80];
gcvt(0.1, 55, ebuf);
printf("%s\n", ebuf);
qgcvt(0.1L, 67, ebuf);
printf("%s\n", ebuf);
return 0;
}
[root@fedora ~]# gcc test.c
[root@fedora ~]# ./a.out
0.10000000000000001
0.1000000000000