Why does double in C print fewer decimal digits than C++?
I have this code in C where I've declared 0.1 as a double.

#include <stdio.h>

int main()
{
    double a = 0.1;
    printf("a is %0.56f\n", a);
    return 0;
}

This is what it prints: a is 0.10000000000000001000000000000000000000000000000000000000
Same code in C++:

#include <iostream>
using namespace std;

int main()
{
    double a = 0.1;
    printf("a is %0.56f\n", a);
    return 0;
}

This is what it prints: a is 0.1000000000000000055511151231257827021181583404541015625
What is the difference? I have read that both are allotted 8 bytes, so how does C++ print more digits after the decimal point?

Also, how can it go to 55 decimal places? IEEE 754 floating point has only 52 bits for the fractional part, which gives us about 15 decimal digits of precision. It is stored in binary. How come its decimal interpretation shows more?
Tags: c++, c, floating-point, numbers, precision
asked 1 hour ago by Raghavendra Gujar; edited 11 mins ago by RQM
Comments:

Your C++ example seems to be missing the include for printf. – user694733, 1 hour ago

I think the question is rather why gcc and g++ give different results? They shouldn't. – Lundin, 1 hour ago

To use printf you need to include <stdio.h>. – Cheers and hth. - Alf, 1 hour ago

@user694733 This is a MCVE. Compile with, for example, gcc -std=c11 -pedantic-errors and g++ -std=c++11 -pedantic-errors. I'm able to reproduce the behavior on MinGW. – Lundin, 1 hour ago

g++/gcc (GCC) 8.2.1 both give 0.10000000000000000555111512312578270211815834045410156250, so what is displayed beyond the 15-17 significant-digit capability of the floating-point type looks to be implementation defined. – David C. Rankin, 1 hour ago
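As a side note on the "15-17 significant digits" figure in the last comment, here is a minimal sketch (assuming an IEEE 754 binary64 double, as on the compilers discussed here): a double carries 52 explicit fraction bits plus one implicit leading bit, and 53 * log10(2) is about 15.95, which matches what the standard library reports.

#include <cmath>
#include <cstdio>
#include <limits>

int main()
{
    // 52 explicit fraction bits + 1 implicit leading bit = 53 significant bits.
    // 53 * log10(2) ~= 15.95, hence "about 15 decimal digits of precision".
    std::printf("53 * log10(2) = %.2f\n", 53 * std::log10(2.0));

    // digits10 = 15: any 15-digit decimal survives a round trip through double.
    // max_digits10 = 17: printing 17 significant digits pins down the double exactly.
    std::printf("digits10 = %d, max_digits10 = %d\n",
                std::numeric_limits<double>::digits10,
                std::numeric_limits<double>::max_digits10);
    return 0;
}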
2 Answers
Answer by Cheers and hth. - Alf (answered 1 hour ago, score 6):

With MinGW g++ (and gcc) 7.3.0 your results are reproduced exactly.

This is a pretty weird case of undefined behavior.

In the C++ code, change <iostream> to <stdio.h> to get valid C++ code, and you get the same result as with the C program.

Why does the C++ code even compile? Well, unlike C, in C++ a standard library header is allowed to drag in any other header. And evidently with g++ the <iostream> header drags in some declaration of printf. Just not an entirely correct one.

Regarding

"Also, how can it go until 55 decimal places? IEEE 754 floating point has only 52 bits for fractional number with which we can get 15 decimal digits of precision. It is stored in binary. How come its decimal interpretation stores more?"

... it's just the decimal presentation that's longer. It can be arbitrarily long. But the digits beyond the precision of the internal representation are essentially garbage.
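For reference, a minimal sketch of the corrected C++ program (assuming you want to keep printf rather than switch to std::cout): with <cstdio> included, both programs should print the same digits, because those digits are simply the exact decimal value of the double nearest to 0.1.

#include <cstdio>  // declares std::printf properly, instead of relying on
                   // whatever declaration <iostream> happens to drag in

int main()
{
    double a = 0.1;
    // The double nearest to 0.1 is exactly
    // 0.1000000000000000055511151231257827021181583404541015625,
    // so a correct conversion can produce all of those digits; anything
    // printed beyond them is just padding, not extra stored precision.
    std::printf("a is %0.56f\n", a);
    return 0;
}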
Comments:

I think you are on to the problem, iostream dragging in cstdio. But why do <cstdio> and <stdio.h> give different results, though? The stdio.h form is supposedly deprecated. – Lundin, 1 hour ago

C++ compilation seems to be version dependent. My version (4.9.2 (tdm-1)) refused to compile at all. – user694733, 1 hour ago

@Lundin: I haven't investigated the technical difference with the gcc headers. Regarding "deprecated", yes, since C++98. That doesn't prevent one from using these forms, since they can't disappear. – Cheers and hth. - Alf, 1 hour ago

Could it be a MinGW issue? g++ with cstdio and std::printf gives the output as in the OP's question; stdio.h gives the C output. Maybe the global-namespace printf means that the Microsoft C lib gets invoked, rather than the C++ one. – Lundin, 57 mins ago

@Lundin: Good idea. But. In the C++ code's assembly the invoked function is a generated one, which in turn invokes __mingw_vprintf, which as I understand it is the internal implementation of printf. However, the prefix __mingw makes me wonder. In the C code's assembly it's just printf directly. – Cheers and hth. - Alf, 51 mins ago
Answer by MSalters (answered 1 hour ago, score 0):

It looks to me like both cases print 56 decimal digits, so the question is technically based on a flawed premise.

I also see that both numbers are equal to 0.1 within 52 bits of precision, so both are correct.

That leads to your final question, "How come its decimal interpretation stores more?" It doesn't store more decimals. double doesn't store any decimals. It stores bits. The decimals are generated.
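To illustrate that last point, here is a minimal sketch (assuming a 64-bit IEEE 754 double): the object holds nothing but 64 bits, and the decimal digits are produced on demand by the formatting routine, so asking for more digits does not mean more information is stored.

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    double a = 0.1;

    // The double is just 64 bits; copy them into an integer to inspect them.
    std::uint64_t bits;
    std::memcpy(&bits, &a, sizeof bits);
    std::printf("raw bits of 0.1: 0x%016llx\n",
                static_cast<unsigned long long>(bits));

    // The same 64 bits, rendered with different numbers of generated digits.
    std::printf("%.17f\n", a);
    std::printf("%.56f\n", a);
    return 0;
}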