Why does double in C print fewer decimal digits than C++?

I have this code in C where I've declared 0.1 as a double.



#include <stdio.h>

int main()
{
    double a = 0.1;

    printf("a is %0.56f\n", a);
    return 0;
}


This is what it prints: a is 0.10000000000000001000000000000000000000000000000000000000



Same code in C++,



#include <iostream>
using namespace std;

int main()
{
    double a = 0.1;

    printf("a is %0.56f\n", a);
    return 0;
}


This is what it prints: a is 0.1000000000000000055511151231257827021181583404541015625



What is the difference? From what I've read, both are allotted 8 bytes, so how does C++ print more digits after the decimal point?



Also, how can it go up to 55 decimal places? IEEE 754 floating point has only 52 bits for the fraction, which gives about 15 decimal digits of precision. It is stored in binary. How come its decimal representation shows more?
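For reference, here is a quick sketch that prints the limits I'm referring to from <float.h> (assuming a standard IEEE 754 binary64 double):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Significand bits (52 stored + 1 implicit) and the corresponding
       decimal-digit guarantees for double. */
    printf("DBL_MANT_DIG    = %d\n", DBL_MANT_DIG);     /* typically 53 */
    printf("DBL_DIG         = %d\n", DBL_DIG);          /* typically 15 */
    printf("DBL_DECIMAL_DIG = %d\n", DBL_DECIMAL_DIG);  /* typically 17 (C11) */
    return 0;
}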





























• Your C++ example seems to be missing include for the printf. – user694733, 1 hour ago

• I think the question is rather why gcc and g++ give different results? They shouldn't. – Lundin, 1 hour ago

• To use printf you need to include <stdio.h>. – Cheers and hth. - Alf, 1 hour ago

• @user694733 This is a MCVE. Compile with for example gcc -std=c11 -pedantic-errors and g++ -std=c++11 -pedantic-errors. I'm able to reproduce the behavior on Mingw. – Lundin, 1 hour ago

• g++/gcc (GCC) 8.2.1 both give 0.10000000000000000555111512312578270211815834045410156250 so what is displayed beyond the 15-17 significant digit capability of the floating point type looks to be implementation defined. – David C. Rankin, 1 hour ago














c++ c floating-point numbers precision






asked 1 hour ago by Raghavendra Gujar, edited 11 mins ago by RQM







2 Answers

















Answered 1 hour ago by Cheers and hth. - Alf (score 6):













With MinGW g++ (and gcc) 7.3.0 your results are reproduced exactly.



This is a pretty weird case of Undefined Behavior.



In the C++ code, change <iostream> to <stdio.h> to get valid C++ code, and you get the same result as with the C program.
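A minimal sketch of the fixed program (the unused using-directive is dropped as well), which is then effectively the same source as the C version:

#include <stdio.h>

int main()
{
    double a = 0.1;

    /* printf now has its proper declaration from <stdio.h>. */
    printf("a is %0.56f\n", a);
    return 0;
}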




Why does the C++ code even compile?



Well, unlike C, in C++ a standard library header is allowed to drag in any other header. And evidently with g++ the <iostream> header drags in some declaration of printf. Just not an entirely correct one.




Regarding




“Also, how can it go up to 55 decimal places? IEEE 754 floating point has only 52 bits for the fraction, which gives about 15 decimal digits of precision. It is stored in binary. How come its decimal representation shows more?”




... it's just the decimal presentation that's longer. It can be arbitrarily long. But the digits beyond the precision of the internal representation are essentially garbage.
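As a small illustration (a sketch assuming the <stdio.h> version of the program): the same stored value can be printed to any requested length:

#include <stdio.h>

int main(void)
{
    double a = 0.1;

    /* One stored value, three presentations of different lengths. */
    printf("%.17f\n", a);
    printf("%.30f\n", a);
    printf("%.60f\n", a);
    return 0;
}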






















• I think you are on to the problem, iostream dragging in cstdio. But why do <cstdio> and <stdio.h> give different results, though? The stdio.h form is supposedly deprecated. – Lundin, 1 hour ago

• C++ compilation seems to be version dependent. My version (4.9.2 (tdm-1)) refused to compile at all. – user694733, 1 hour ago

• @Lundin: I haven't investigated the technical difference with the gcc headers. Regarding "deprecated", yes since C++98. That doesn't prevent one from using these forms, since they can't disappear. – Cheers and hth. - Alf, 1 hour ago

• Could it be a Mingw issue? g++ with cstdio and std::printf gives the output as in the OP's question. stdio.h gives the C output. Maybe the global namespace printf means that the Microsoft C lib gets invoked, rather than the C++ one. – Lundin, 57 mins ago

• @Lundin: Good idea. But. In the C++ code's assembly the invoked function is a generated one, that in turn invokes __mingw_vprintf. Which as I understand it is the internal implementation of printf. However the prefix __mingw makes me wonder. In the C code's assembly it's just printf directly. – Cheers and hth. - Alf, 51 mins ago

















Answered 1 hour ago by MSalters (score 0):













It looks to me like both cases print 56 decimal digits, so the question is technically based on a flawed premise.



I also see that both numbers are equal to 0.1 within 52 bits of precision, so both are correct.



That leads to your final question, "How come its decimal interpretation stores more?". It doesn't store more decimals. double doesn't store any decimals. It stores bits. The decimals are generated.
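A small sketch of that point, assuming the usual 64-bit IEEE 754 double: the only thing stored is a 64-bit pattern, and the decimal text is generated from it at print time:

#include <stdio.h>
#include <string.h>
#include <inttypes.h>

int main(void)
{
    double a = 0.1;
    uint64_t bits;

    /* These 64 bits are the entire stored representation of a. */
    memcpy(&bits, &a, sizeof bits);
    printf("bits    : 0x%016" PRIx64 "\n", bits);

    /* The decimal digits below are generated from those bits on request;
       they are not stored anywhere. */
    printf("decimal : %.56f\n", a);
    return 0;
}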





