Why not use fractions instead of floating point?

Considering all those early home computers, the ZX Spectrum, the Galaksija, the VIC-20, the Apple I/II/III, and so on: they all have some 8-bit CPU which does integer arithmetic and no floating point. So these machines all carry some kind of floating-point implementation in software.



My question is, why not use fractions? A fraction is just two numbers which, when one is divided by the other, yield the number you are interested in. So picture a struct which holds one integer of whatever size, and one byte each for the dividend and divisor. And probably we all remember how to add or subtract fractions from school.
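
Something like this minimal C-style sketch (hypothetical names, just to illustrate what I mean):

    /* Hypothetical layout, only to illustrate the idea. */
    struct frac {
        int whole;              /* integer part                       */
        unsigned char dividend; /* numerator of the fractional part   */
        unsigned char divisor;  /* denominator of the fractional part */
    };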



I haven't really calculated it, but it seems intuitive that the advantages over floating point would be:



  • faster addition/subtraction

  • easier to print

  • less code

  • easy to smoosh those values together into an array subscript (for sine tables etc)

  • easier for laypeople to grok

  • x - x == 0

And the disadvantages:



  • probably the range is not as large, but that's easy enough to overcome

  • not as precise with very small numbers

  • marketability?

It seems like such an obvious tradeoff (precision for speed) that I'm surprised I can't find any home computers or BASICs that did arithmetic in this way. What am I missing?










  • Not sure I understand exactly what you mean by this, but I see lookup tables, and lookup tables take memory. Memory is something you don't want to waste on a machine that has only 64 kB of contiguous memory.
    – UncleBod










  • @UncleBod is it actually unclear what I mean by the question? It's essentially "why didn't home computers use fractions to implement what Fortran calls REAL?"
    – Wilson










  • And of course lookup tables are still handy; Commodore BASIC uses one for its implementation of sin(x) and cos(x).
    – Wilson






  • Actually, pretty much all floating point representations in home (and all other) computers do make use of fractional representations of numbers, but, for obvious reasons, restrict the denominators to powers of two.
    – tofro










  • So, fixed point arithmetic?
    – Polygnome














5 Answers






























When adding or subtracting fractions, you need to find the least common multiple of the two denominators. That's an expensive operation, much more expensive than adding or subtracting floating-point numbers, which just requires shifts.



Multiplication is also more expensive, because now you need to multiply two numbers instead of just one. Similarly for division.
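
For illustration, a minimal C sketch (not part of the original answer) of what a single fraction addition costs, assuming 32-bit numerators and denominators: two multiplications for the cross products, one for the new denominator, plus a gcd and two divisions to keep the result reduced.

    #include <stdint.h>

    static uint32_t gcd(uint32_t x, uint32_t y) {        /* Euclid's algorithm */
        while (y != 0) { uint32_t t = x % y; x = y; y = t; }
        return x;
    }

    typedef struct { int32_t num; uint32_t den; } frac;

    static frac frac_add(frac a, frac b) {
        frac r;
        r.num = a.num * (int32_t)b.den + b.num * (int32_t)a.den;  /* two multiplications      */
        r.den = a.den * b.den;                                    /* a third multiplication   */
        uint32_t g = gcd((uint32_t)(r.num < 0 ? -r.num : r.num), r.den);
        if (g > 1) { r.num /= (int32_t)g; r.den /= g; }           /* reduction: two divisions */
        return r;
    }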



Also, the numerator and denominator of a fraction will eventually grow large, which means you won't be able to store them in the limited memory of an 8-bit system. Floating point deals with this by rounding.



So: It's more expensive, there's no way to limit the memory used for truly general applications, and scientific applications are geared towards using floating point, anyway.






– dirkt









































    My question is, why not use fractions?




    Quick answer:



    • too much code needed

    • dynamic storage needed

    • long representation even for simple numbers

    • complex and slow execution

    And most prominent:



    • Because floating point is already based on fractions: Binary ones, the kind a binary computer handles best.


    Long Form:



    The mantissa of a floating-point number is a sum of fractions with powers of two as denominators:



    1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + 1/128 ...


    Each bit holds a zero or one denoting whether that fraction is present. So displaying 3/8 gives the sequence 0110000...
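
    As a small illustration (mine, not part of the original answer), the usual bit-by-bit expansion of a value in [0, 1) into that sequence of base-two fractions; for 3/8 it prints 0110000:

        #include <stdio.h>

        int main(void) {
            double x = 3.0 / 8.0;             /* the value to expand                 */
            for (int i = 0; i < 7; i++) {     /* first seven mantissa bits           */
                x *= 2.0;
                int bit = (x >= 1.0);         /* is the fraction 1/2^(i+1) present?  */
                if (bit) x -= 1.0;
                putchar('0' + bit);
            }
            putchar('\n');                    /* prints 0110000 for 3/8              */
            return 0;
        }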




    A fraction is just two numbers which, when one is divided by the other, yield the number you are interested in. So picture a struct which holds one integer of whatever size, and one byte each for the dividend and divisor.




    That 'whatever size' is eventually the most important argument against it. An easy-to-implement system needs a representation that is as short as possible to save memory (which is at a premium, especially on early machines), and it should use fixed-size units, so memory management can be as simple as possible.



    One byte each may not really be enough to represent the needed fractions, resulting in a rather complicated puzzle of normalisation, which in turn needs a rather large number of divisions: a really costly operation, in hardware or software. In addition to the storage problem for the numbers themselves, this would require even more address space to hold the non-trivial handling routines.



    Binary (*1) floating point is based on your idea of fractions, but takes it to the logical end. With binary FP there is no need for many complex operations.



    • Turning decimal FP into binary is just a series of shift operations.

    • Returning it to decimal (*2) is again just shifting plus additions

    • Adding (or subtracting) two numbers only needs a binary integer addition after shifting the lesser one to the right.

    • Multiplying (or dividing) means multiplication (or division) of these two fixed-point integers and addition (or subtraction) of the exponents.

    All complex issues get reduced to fixed-length integer operations. Not only the simplest form, but also exactly what binary CPUs do best. And while that length can be tailored to the job (*3), already rather thrifty formats (size-wise) needing just 4 bytes of storage will cover most everyday needs. Extending that to 5, 6 or 8 bytes gives a precision that is rarely needed (*4).
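
    A rough sketch of the addition case (again mine, heavily simplified: no normalisation, rounding or sign handling), assuming a value is stored as a mantissa times a power-of-two exponent:

        #include <stdint.h>

        typedef struct { uint32_t mant; int8_t exp; } sfloat;      /* value = mant * 2^exp        */

        static sfloat sfloat_add(sfloat a, sfloat b) {
            if (a.exp < b.exp) { sfloat t = a; a = b; b = t; }     /* a gets the larger exponent  */
            uint8_t shift = (uint8_t)(a.exp - b.exp);
            sfloat r;
            r.exp  = a.exp;
            r.mant = a.mant + (shift < 32 ? (b.mant >> shift) : 0);  /* shift the lesser, then add */
            return r;
        }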




    And probably we all remember how to add or subtract fractions from school.




    No, we don't really, as that was something only mentioned for a short time around third grade. Keep in mind most of the world already went to (decimal) floating point more than a century ago.




    *1 - Or similar systems, like IBM's base-16 floating point used in the /360 series. Here the basic storage unit isn't a bit but a nibble, acknowledging that the memory is byte oriented and parts of the machine nibble oriented.



    *2 - The least often done operation.



    *3 - Even 16-bit floating point can be useful for everyday issues. I even remember an application with an 8-bit float format used to scale priorities.



    *4 - Yes, there are use cases where either more precision or a different system is needed for accurate results, but their number is small and special, or already covered by integers :)






    – Raffzahn




















    • By "whatever size", I meant "your choice": you can choose 16 bits, 32 bits, whatever. I didn't mean "variable length". Probably unclear wording on my part
      – Wilson










    • @Wilson that would even increase the problem. Reserving something like 24 bits (8+16) for each fraction would, for one, limit precision while already increasing size requirements compared to FP. Even worse at more usable precision. Going to variable-length encoding would be quite a sensitive thing to do. Then again, the size part is maybe the least hurdle here. It's just clumsy and hard to handle.
      – Raffzahn































    There is a mathematical problem with your idea. If you choose to store fractions with a fixed-length numerator and denominator, that works fine until you try to do arithmetic with them. At that point, the numerator and denominator of the result may become much bigger.



    Take a simple example: you could easily store 1/1000 and 1/1001 exactly, using 16-bit integers for the numerator and denominator. But 1/1000 - 1/1001 = 1/1001000, and suddenly the denominator is too big to store in 16 bits.
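
    A minimal sketch of that overflow, assuming 16-bit fields:

        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            uint16_t d1 = 1000, d2 = 1001;
            uint32_t new_den = (uint32_t)d1 * d2;   /* 1001000, the exact result's denominator */
            printf("need %u, but a 16-bit field tops out at %u\n", new_den, (unsigned)UINT16_MAX);
            return 0;
        }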



    If you decide to approximate the result somehow to keep the numerator and denominator within a fixed range, you haven't really gained anything over conventional floating point. Floating point only deals with fractions whose denominators are powers of 2, so the problem doesn't arise - but that means you can't store most fractional values exactly, not even "simple" ones like 1/3 or 1/5.



    Incidentally, some modern software applications do store fractions exactly, and operate on them exactly - but they store the numerators and denominators using a variable-length representation, which can store any integer exactly - whether it has 10 digits, or 10 million digits, doesn't matter (except it takes longer to do calculations on 10-million-digit numbers, of course).






    – alephzero




















    • I'm currently working with one of these 10-million-digit-fraction calculations in a piece of code at home. It's nice because you get good accuracy. It's not nice because every single operation thrashes the CPU cache and even the simplest calculation takes ages. :) But computers are built to calculate stuff so they don't mind.
      – pipe






























    dirkt and alephzero provided definitive general responses. Mine is focused on one element of your question, the structure used to store a fraction. You proposed something on the order of:



    struct mixedFrac {
        int whole;
        uchar dividend;
        uchar divisor;
    };



    Note that this pre-limits general accuracy; there is only so much precision an 8-bit divisor can be depended on to provide. On the one hand, it could be perfect; 78 2/3 could be represented with no loss, unlike with floating point. Or it could provide great (but not perfect) accuracy, such as pi at 3 16/113 (accurate to 6 decimal places). On the other hand, an infinite quantity of numbers would be far from accurate. Imagine trying to represent 1 1/1024. Within the structure, the closest you could get would be 1 0/255.



    The proposed structure could easily be simplified and improved by using a different layout:



    struct simpFrac {
        long dividend;
        ulong divisor;
    };



    Since the divisor is now represented with much more precision, a much wider span of general accuracy is allowed. And now that approximation to pi can be shown in traditional fashion, 355/113.



    But as the others have pointed out, as you do arithmetic with these perfect-to-very-accurate fractions, you lose precision quickly, and the overhead of maintaining the "best" divisor for the result is quite costly, especially if a primary goal of using fractions was to keep things more efficient than floating point.






    – RichF




















    • Woah there! Looks like you doubly posted your answer!
      – Wilson










    • @Wilson That was weird. I made an edit, and ended up with the original plus an edited copy. I just deleted the original.
      – RichF































    Point by point.



    • faster addition/subtraction

      No: 8/15 + 5/7 is evaluated as 131/105 [(8*7 + 15*5)/(7*15)], so 3 multiplications for one single addition/subtraction, plus possibly a reduction.

    • easier to print

      No: you have to print a human-readable string of digits, so you must transform 131/105 to 1.247... (a sketch of that conversion follows after this list). Or are you proposing to simply display the fraction? Not so useful for the user.
      PRINT 12/25 --> RESULT IS 12/25

    • less code

      No: floating point code is compact, it's just shifting all in all.

    • easy to smoosh those values together into an array subscript (for sine tables etc)

      I don't understand what you mean. Floating point 32 or 64 bit values can be packed together easily.

    • easier for laypeople to grok

      Irrelevant, laypeople do not program the BIOS of microcomputers. And statistics tell us that most laypeople do not understand fractions anyway.

    • x - x == 0

      The same in floating point.
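
    The conversion mentioned under "easier to print", sketched in C (not the original answer's code): turning 131/105 into a decimal string one digit at a time by long division.

        #include <stdio.h>

        int main(void) {
            unsigned num = 131, den = 105;
            printf("%u.", num / den);          /* integer part                  */
            unsigned rem = num % den;
            for (int i = 0; i < 6; i++) {      /* six decimal places            */
                rem *= 10;
                putchar('0' + rem / den);      /* next digit by long division   */
                rem %= den;
            }
            putchar('\n');                     /* prints 1.247619               */
            return 0;
        }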





    – edc65

























    • Regarding the last point, (at least some) floating point values are not guaranteed to be equal to themselves?
      – Wilson











    Your Answer







    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "648"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    convertImagesToLinks: false,
    noModals: false,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );













     

    draft saved


    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f7810%2fwhy-not-use-fractions-instead-of-floating-point%23new-answer', 'question_page');

    );

    Post as a guest






























    5 Answers
    5






    active

    oldest

    votes








    5 Answers
    5






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    10
    down vote













    When adding or subtracting fractions, you need to find the least common multiple of the two denominators. That's an expensive operation, much more expensive than adding or subtracting floating points, which just requires shifts.



    Multiplication is also more expensive, because now you need to multiply two numbers instead of just one. Similarly for division.



    Also, the numerator and denominator of a fraction will eventually grow large, which means you won't be able to store them in the limited memory of an 8-bit system. Floating point deals with this by rounding.



    So: It's more expensive, there's no way to limit the memory used for truly general applications, and scientific applications are geared towards using floating point, anyway.






    share|improve this answer
























      up vote
      10
      down vote













      When adding or subtracting fractions, you need to find the least common multiple of the two denominators. That's an expensive operation, much more expensive than adding or subtracting floating points, which just requires shifts.



      Multiplication is also more expensive, because now you need to multiply two numbers instead of just one. Similarly for division.



      Also, the numerator and denominator of a fraction will eventually grow large, which means you won't be able to store them in the limited memory of an 8-bit system. Floating point deals with this by rounding.



      So: It's more expensive, there's no way to limit the memory used for truly general applications, and scientific applications are geared towards using floating point, anyway.






      share|improve this answer






















        up vote
        10
        down vote










        up vote
        10
        down vote









        When adding or subtracting fractions, you need to find the least common multiple of the two denominators. That's an expensive operation, much more expensive than adding or subtracting floating points, which just requires shifts.



        Multiplication is also more expensive, because now you need to multiply two numbers instead of just one. Similarly for division.



        Also, the numerator and denominator of a fraction will eventually grow large, which means you won't be able to store them in the limited memory of an 8-bit system. Floating point deals with this by rounding.



        So: It's more expensive, there's no way to limit the memory used for truly general applications, and scientific applications are geared towards using floating point, anyway.






        share|improve this answer












        When adding or subtracting fractions, you need to find the least common multiple of the two denominators. That's an expensive operation, much more expensive than adding or subtracting floating points, which just requires shifts.



        Multiplication is also more expensive, because now you need to multiply two numbers instead of just one. Similarly for division.



        Also, the numerator and denominator of a fraction will eventually grow large, which means you won't be able to store them in the limited memory of an 8-bit system. Floating point deals with this by rounding.



        So: It's more expensive, there's no way to limit the memory used for truly general applications, and scientific applications are geared towards using floating point, anyway.







        share|improve this answer












        share|improve this answer



        share|improve this answer










        answered 3 hours ago









        dirkt

        7,38311840




        7,38311840




















            up vote
            4
            down vote














            My question is, why not use fractions?




            Quick answer:



            • Too much code needed

            • dynamic storage needed

            • long representation even for simple numbers

            • Complex and slow execution

            And most prominent:



            • Because floating point is already based on fractions: Binary ones, the kind a binary computer handles best.


            Long Form:



            The mantissa of a floating point number is a sequence of fractions to the base of two:



            1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + 1/128 ...


            Each holding a zero or one denoting if that fraction is present. So dislaying 3/8th gives the sequence 0110000...




            A fraction is just two numbers which, when divided by eachother, yields the number you are interested in. So picture a struct which holds one integer of whatever size, and one byte for each of divisor and dividend.




            That 'whatever size' is eventually the most important argument against. An easy to implement system does need a representation as short as possible to save on memory usage - that's a premium, especialy oearly machines - and it should use fixed size units, so memory management can be as simple as possible.



            One byte each may not realy be good to represent the needed fractions, resulting in a rather complicated puzzle of normalisation, which to be handled needs a rather lage amount of divisions. A realy costly operation (Hard or Softwarewise). In addition to the storage problem for the numbers, this would require even more address space to hold the non trivial handling routines.



            Binary (*1) floating point is based on your idea of fractions, but takes it to the logical end. With binary FP there is no need for many complex operations.



            • Turning decimal FP into binary is just a series of shift operations.

            • Returning it to decimal (*2) is again just shifting plus additions

            • Adding - or subracting - two numbers does only need a binary integer addition after shifting the lesser one to the right.

            • Multiplying - or dividing - means multiplication - or division - of a these two fixed point integers and addition of the exponent.

            All complex issues get reduced to fixed length integer operations. Not only the most simple form, but also exactly what binary CPUs can do best. And while that length can be taylored to the job (*3), already rather thrifty ones (size wise) with just 4 bytes storage need will cover most of everyday needs. And extending that to 5,6 or 8 gives a precicion rarely needed (*4).




            And probably we all remember how to add or subtract fractions from school.




            No, we don't realy, as that was something only mentioned for short time during third grade. Keep in mind most of the world already went (decimal) floating point more than a century ago.




            *1 - Or similar systems, like IBMs base 16 floating point used in the /360 series. Here the basic storage unit isn't a bit but a nibble, acknowledgeing that the memory is byte and parts of the machine nibble orientated.



            *2 - The least often done operation.



            *3 - Already 16 bit floating point can be useful for everyday issues. I even themember an application with a 8 bit float format used to scale priorities.



            *4 - Yes, there are be use cases where either more precission or a different system is needed for accurate/needed results, but their number is small and special - or already covered by integers:)






            share|improve this answer




















            • By "whatever size", I meant "your choice": you can choose 16 bits, 32 bits, whatever. I didn't mean "variable length". Probably unclear wording on my part
              – Wilson
              1 hour ago










            • @Wilson that would even increase the problem. Reserving like 24 bits (8+16) for each fraction used would for one limit precission while already increasing size requirements compared to FP. Even worse at more usable precision. Going to variable length encoding would be a quite sensitive thing to do. - >Then again, the size part is maybe even the least hurdle here. It's just clumsy and hard to handle.
              – Raffzahn
              1 hour ago















            up vote
            4
            down vote














            My question is, why not use fractions?




            Quick answer:



            • Too much code needed

            • dynamic storage needed

            • long representation even for simple numbers

            • Complex and slow execution

            And most prominent:



            • Because floating point is already based on fractions: Binary ones, the kind a binary computer handles best.


            Long Form:



            The mantissa of a floating point number is a sequence of fractions to the base of two:



            1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + 1/128 ...


            Each holding a zero or one denoting if that fraction is present. So dislaying 3/8th gives the sequence 0110000...




            A fraction is just two numbers which, when divided by eachother, yields the number you are interested in. So picture a struct which holds one integer of whatever size, and one byte for each of divisor and dividend.




            That 'whatever size' is eventually the most important argument against. An easy to implement system does need a representation as short as possible to save on memory usage - that's a premium, especialy oearly machines - and it should use fixed size units, so memory management can be as simple as possible.



            One byte each may not realy be good to represent the needed fractions, resulting in a rather complicated puzzle of normalisation, which to be handled needs a rather lage amount of divisions. A realy costly operation (Hard or Softwarewise). In addition to the storage problem for the numbers, this would require even more address space to hold the non trivial handling routines.



            Binary (*1) floating point is based on your idea of fractions, but takes it to the logical end. With binary FP there is no need for many complex operations.



            • Turning decimal FP into binary is just a series of shift operations.

            • Returning it to decimal (*2) is again just shifting plus additions

            • Adding - or subracting - two numbers does only need a binary integer addition after shifting the lesser one to the right.

            • Multiplying - or dividing - means multiplication - or division - of a these two fixed point integers and addition of the exponent.

            All complex issues get reduced to fixed length integer operations. Not only the most simple form, but also exactly what binary CPUs can do best. And while that length can be taylored to the job (*3), already rather thrifty ones (size wise) with just 4 bytes storage need will cover most of everyday needs. And extending that to 5,6 or 8 gives a precicion rarely needed (*4).




            And probably we all remember how to add or subtract fractions from school.




            No, we don't realy, as that was something only mentioned for short time during third grade. Keep in mind most of the world already went (decimal) floating point more than a century ago.




            *1 - Or similar systems, like IBMs base 16 floating point used in the /360 series. Here the basic storage unit isn't a bit but a nibble, acknowledgeing that the memory is byte and parts of the machine nibble orientated.



            *2 - The least often done operation.



            *3 - Already 16 bit floating point can be useful for everyday issues. I even themember an application with a 8 bit float format used to scale priorities.



            *4 - Yes, there are be use cases where either more precission or a different system is needed for accurate/needed results, but their number is small and special - or already covered by integers:)






            share|improve this answer




















            • By "whatever size", I meant "your choice": you can choose 16 bits, 32 bits, whatever. I didn't mean "variable length". Probably unclear wording on my part
              – Wilson
              1 hour ago










            • @Wilson that would even increase the problem. Reserving like 24 bits (8+16) for each fraction used would for one limit precission while already increasing size requirements compared to FP. Even worse at more usable precision. Going to variable length encoding would be a quite sensitive thing to do. - >Then again, the size part is maybe even the least hurdle here. It's just clumsy and hard to handle.
              – Raffzahn
              1 hour ago













            up vote
            4
            down vote










            up vote
            4
            down vote










            My question is, why not use fractions?




            Quick answer:



            • Too much code needed

            • dynamic storage needed

            • long representation even for simple numbers

            • Complex and slow execution

            And most prominent:



            • Because floating point is already based on fractions: Binary ones, the kind a binary computer handles best.


            Long Form:



            The mantissa of a floating point number is a sequence of fractions to the base of two:



            1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + 1/128 ...


            Each holding a zero or one denoting if that fraction is present. So dislaying 3/8th gives the sequence 0110000...




            A fraction is just two numbers which, when divided by eachother, yields the number you are interested in. So picture a struct which holds one integer of whatever size, and one byte for each of divisor and dividend.




            That 'whatever size' is eventually the most important argument against. An easy to implement system does need a representation as short as possible to save on memory usage - that's a premium, especialy oearly machines - and it should use fixed size units, so memory management can be as simple as possible.



            One byte each may not realy be good to represent the needed fractions, resulting in a rather complicated puzzle of normalisation, which to be handled needs a rather lage amount of divisions. A realy costly operation (Hard or Softwarewise). In addition to the storage problem for the numbers, this would require even more address space to hold the non trivial handling routines.



            Binary (*1) floating point is based on your idea of fractions, but takes it to the logical end. With binary FP there is no need for many complex operations.



            • Turning decimal FP into binary is just a series of shift operations.

            • Returning it to decimal (*2) is again just shifting plus additions

            • Adding - or subracting - two numbers does only need a binary integer addition after shifting the lesser one to the right.

            • Multiplying - or dividing - means multiplication - or division - of a these two fixed point integers and addition of the exponent.

            All complex issues get reduced to fixed length integer operations. Not only the most simple form, but also exactly what binary CPUs can do best. And while that length can be taylored to the job (*3), already rather thrifty ones (size wise) with just 4 bytes storage need will cover most of everyday needs. And extending that to 5,6 or 8 gives a precicion rarely needed (*4).




            And probably we all remember how to add or subtract fractions from school.




            No, we don't realy, as that was something only mentioned for short time during third grade. Keep in mind most of the world already went (decimal) floating point more than a century ago.




            *1 - Or similar systems, like IBMs base 16 floating point used in the /360 series. Here the basic storage unit isn't a bit but a nibble, acknowledgeing that the memory is byte and parts of the machine nibble orientated.



            *2 - The least often done operation.



            *3 - Already 16 bit floating point can be useful for everyday issues. I even themember an application with a 8 bit float format used to scale priorities.



            *4 - Yes, there are be use cases where either more precission or a different system is needed for accurate/needed results, but their number is small and special - or already covered by integers:)






            share|improve this answer













            My question is, why not use fractions?




            Quick answer:



            • Too much code needed

            • dynamic storage needed

            • long representation even for simple numbers

            • Complex and slow execution

            And most prominent:



            • Because floating point is already based on fractions: Binary ones, the kind a binary computer handles best.


            Long Form:



            The mantissa of a floating point number is a sequence of fractions to the base of two:



            1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + 1/128 ...


            Each holding a zero or one denoting if that fraction is present. So dislaying 3/8th gives the sequence 0110000...




            A fraction is just two numbers which, when divided by eachother, yields the number you are interested in. So picture a struct which holds one integer of whatever size, and one byte for each of divisor and dividend.




            That 'whatever size' is eventually the most important argument against. An easy to implement system does need a representation as short as possible to save on memory usage - that's a premium, especialy oearly machines - and it should use fixed size units, so memory management can be as simple as possible.



            One byte each may not realy be good to represent the needed fractions, resulting in a rather complicated puzzle of normalisation, which to be handled needs a rather lage amount of divisions. A realy costly operation (Hard or Softwarewise). In addition to the storage problem for the numbers, this would require even more address space to hold the non trivial handling routines.



            Binary (*1) floating point is based on your idea of fractions, but takes it to the logical end. With binary FP there is no need for many complex operations.



            • Turning decimal FP into binary is just a series of shift operations.

            • Returning it to decimal (*2) is again just shifting plus additions

            • Adding - or subracting - two numbers does only need a binary integer addition after shifting the lesser one to the right.

            • Multiplying - or dividing - means multiplication - or division - of a these two fixed point integers and addition of the exponent.

            All complex issues get reduced to fixed length integer operations. Not only the most simple form, but also exactly what binary CPUs can do best. And while that length can be taylored to the job (*3), already rather thrifty ones (size wise) with just 4 bytes storage need will cover most of everyday needs. And extending that to 5,6 or 8 gives a precicion rarely needed (*4).




            And probably we all remember how to add or subtract fractions from school.




            No, we don't realy, as that was something only mentioned for short time during third grade. Keep in mind most of the world already went (decimal) floating point more than a century ago.




            *1 - Or similar systems, like IBMs base 16 floating point used in the /360 series. Here the basic storage unit isn't a bit but a nibble, acknowledgeing that the memory is byte and parts of the machine nibble orientated.



            *2 - The least often done operation.



            *3 - Already 16 bit floating point can be useful for everyday issues. I even themember an application with a 8 bit float format used to scale priorities.



            *4 - Yes, there are be use cases where either more precission or a different system is needed for accurate/needed results, but their number is small and special - or already covered by integers:)







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered 1 hour ago









            Raffzahn

            35.8k478142




            35.8k478142











            • By "whatever size", I meant "your choice": you can choose 16 bits, 32 bits, whatever. I didn't mean "variable length". Probably unclear wording on my part
              – Wilson
              1 hour ago










            • @Wilson that would even increase the problem. Reserving like 24 bits (8+16) for each fraction used would for one limit precission while already increasing size requirements compared to FP. Even worse at more usable precision. Going to variable length encoding would be a quite sensitive thing to do. - >Then again, the size part is maybe even the least hurdle here. It's just clumsy and hard to handle.
              – Raffzahn
              1 hour ago

















            • By "whatever size", I meant "your choice": you can choose 16 bits, 32 bits, whatever. I didn't mean "variable length". Probably unclear wording on my part
              – Wilson
              1 hour ago










            • @Wilson that would even increase the problem. Reserving like 24 bits (8+16) for each fraction used would for one limit precission while already increasing size requirements compared to FP. Even worse at more usable precision. Going to variable length encoding would be a quite sensitive thing to do. - >Then again, the size part is maybe even the least hurdle here. It's just clumsy and hard to handle.
              – Raffzahn
              1 hour ago
















            By "whatever size", I meant "your choice": you can choose 16 bits, 32 bits, whatever. I didn't mean "variable length". Probably unclear wording on my part
            – Wilson
            1 hour ago




            By "whatever size", I meant "your choice": you can choose 16 bits, 32 bits, whatever. I didn't mean "variable length". Probably unclear wording on my part
            – Wilson
            1 hour ago












            @Wilson that would even increase the problem. Reserving like 24 bits (8+16) for each fraction used would for one limit precission while already increasing size requirements compared to FP. Even worse at more usable precision. Going to variable length encoding would be a quite sensitive thing to do. - >Then again, the size part is maybe even the least hurdle here. It's just clumsy and hard to handle.
            – Raffzahn
            1 hour ago





            @Wilson that would even increase the problem. Reserving like 24 bits (8+16) for each fraction used would for one limit precission while already increasing size requirements compared to FP. Even worse at more usable precision. Going to variable length encoding would be a quite sensitive thing to do. - >Then again, the size part is maybe even the least hurdle here. It's just clumsy and hard to handle.
            – Raffzahn
            1 hour ago











            up vote
            1
            down vote













            There is a mathematical problem with your idea. If you choose to store fractions with a fixed-length numerator and denominator, that's works fine until you try to do arithmetic with them. At that point, the numerator and denominator of the result may become much bigger.



            Take a simple example: you could easily store 1/1000 and 1/1001 exactly, using 16-bit integers for the numerator and denominator. But 1/1000 - 1/1001 = 1/1001000, and suddenly the denominator is too big to store in 16 bits.



            If you decide to approximate the result somehow to keep the numerator and denominator within a fixed range, you haven't really gained anything over conventional floating point. Floating point only deals with fractions whose denominators are powers of 2, so the problem doesn't arise - but that means you can't store most fractional values exactly, not even "simple" ones like 1/3 or 1/5.



            Incidentally, some modern software applications do store fractions exactly, and operator on them exactly - but they store the numerators and denominators using a variable length representation, which can store any integer exactly - whether it has 10 digits, or 10 million digits, doesn't matter (except it takes longer to do calculations on 10-million-digit numbers, of course).






            share|improve this answer




















            • I'm currently working with one of these 10-million-digit-fraction calculations in a piece of code at home. It's nice because you get good accuracy. It's not nice because every single operation thrashes the CPU cache and even the simplest calculation takes ages. :) But computers are built to calculate stuff so they don't mind.
              – pipe
              1 min ago














            up vote
            1
            down vote













            There is a mathematical problem with your idea. If you choose to store fractions with a fixed-length numerator and denominator, that's works fine until you try to do arithmetic with them. At that point, the numerator and denominator of the result may become much bigger.



            Take a simple example: you could easily store 1/1000 and 1/1001 exactly, using 16-bit integers for the numerator and denominator. But 1/1000 - 1/1001 = 1/1001000, and suddenly the denominator is too big to store in 16 bits.



            If you decide to approximate the result somehow to keep the numerator and denominator within a fixed range, you haven't really gained anything over conventional floating point. Floating point only deals with fractions whose denominators are powers of 2, so the problem doesn't arise - but that means you can't store most fractional values exactly, not even "simple" ones like 1/3 or 1/5.



            Incidentally, some modern software applications do store fractions exactly, and operator on them exactly - but they store the numerators and denominators using a variable length representation, which can store any integer exactly - whether it has 10 digits, or 10 million digits, doesn't matter (except it takes longer to do calculations on 10-million-digit numbers, of course).






            share|improve this answer




















            • I'm currently working with one of these 10-million-digit-fraction calculations in a piece of code at home. It's nice because you get good accuracy. It's not nice because every single operation thrashes the CPU cache and even the simplest calculation takes ages. :) But computers are built to calculate stuff so they don't mind.
              – pipe
              1 min ago












            up vote
            1
            down vote










            up vote
            1
            down vote









            There is a mathematical problem with your idea. If you choose to store fractions with a fixed-length numerator and denominator, that's works fine until you try to do arithmetic with them. At that point, the numerator and denominator of the result may become much bigger.



            Take a simple example: you could easily store 1/1000 and 1/1001 exactly, using 16-bit integers for the numerator and denominator. But 1/1000 - 1/1001 = 1/1001000, and suddenly the denominator is too big to store in 16 bits.



            If you decide to approximate the result somehow to keep the numerator and denominator within a fixed range, you haven't really gained anything over conventional floating point. Floating point only deals with fractions whose denominators are powers of 2, so the problem doesn't arise - but that means you can't store most fractional values exactly, not even "simple" ones like 1/3 or 1/5.



            Incidentally, some modern software applications do store fractions exactly, and operator on them exactly - but they store the numerators and denominators using a variable length representation, which can store any integer exactly - whether it has 10 digits, or 10 million digits, doesn't matter (except it takes longer to do calculations on 10-million-digit numbers, of course).






            share|improve this answer












            There is a mathematical problem with your idea. If you choose to store fractions with a fixed-length numerator and denominator, that's works fine until you try to do arithmetic with them. At that point, the numerator and denominator of the result may become much bigger.



            Take a simple example: you could easily store 1/1000 and 1/1001 exactly, using 16-bit integers for the numerator and denominator. But 1/1000 - 1/1001 = 1/1001000, and suddenly the denominator is too big to store in 16 bits.



            If you decide to approximate the result somehow to keep the numerator and denominator within a fixed range, you haven't really gained anything over conventional floating point. Floating point only deals with fractions whose denominators are powers of 2, so the problem doesn't arise - but that means you can't store most fractional values exactly, not even "simple" ones like 1/3 or 1/5.



            Incidentally, some modern software applications do store fractions exactly, and operator on them exactly - but they store the numerators and denominators using a variable length representation, which can store any integer exactly - whether it has 10 digits, or 10 million digits, doesn't matter (except it takes longer to do calculations on 10-million-digit numbers, of course).







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered 2 hours ago









            alephzero

            58826




            58826











            • I'm currently working with one of these 10-million-digit-fraction calculations in a piece of code at home. It's nice because you get good accuracy. It's not nice because every single operation thrashes the CPU cache and even the simplest calculation takes ages. :) But computers are built to calculate stuff so they don't mind.
              – pipe
              1 min ago
















            • I'm currently working with one of these 10-million-digit-fraction calculations in a piece of code at home. It's nice because you get good accuracy. It's not nice because every single operation thrashes the CPU cache and even the simplest calculation takes ages. :) But computers are built to calculate stuff so they don't mind.
              – pipe
              1 min ago















            I'm currently working with one of these 10-million-digit-fraction calculations in a piece of code at home. It's nice because you get good accuracy. It's not nice because every single operation thrashes the CPU cache and even the simplest calculation takes ages. :) But computers are built to calculate stuff so they don't mind.
            – pipe
            1 min ago




            I'm currently working with one of these 10-million-digit-fraction calculations in a piece of code at home. It's nice because you get good accuracy. It's not nice because every single operation thrashes the CPU cache and even the simplest calculation takes ages. :) But computers are built to calculate stuff so they don't mind.
            – pipe
            1 min ago










            up vote
            0
            down vote













            dirkt and alephzero provided definitive general responses. Mine is focused on one element of your question, the structure used to store a fraction. You proposed something on the order of:



            struct mixedFrac 
            int whole;
            uchar dividend;
            uchar divisor;



            Note that this pre-limits general accuracy; there is only so much precision an 8-bit divisor can depended on to provide. On the one hand, it could be perfect; 78 2/3 could be represented with no loss, unlike with floating point. Or it could provide great (but not perfect) accuracy, such as pi at 3 16/113 (accurate to 6 decimal places). On the other hand, an infinite quantity of numbers would be far from accurate. Imagine trying to represent 1 1/1024. Within the structure, the closest you could get would be 1 0/255.



            The proposed structure could be easily simplified and improved with the use of a different structure:



            struct simpFrac 
            long dividend;
            ulong divisor;



            Since the divisor is now represented with much more precision, a much wider span of general accuracy is allowed. And now that approximation to pi can be shown in traditional fashion, 355/113.



            But as the others have pointed out, as you use these perfect to very accurate fractions, you lose precision quickly, plus the overhead of maintaining the "best" divisor for the result is quite costly, especially if a primary goal of using fractions was to keep things more efficient than floating point.






            share|improve this answer




















            • Woah there! Looks like you doubly posted your answer!
              – Wilson
              2 hours ago










            • @Wilson That was weird. I made an edit, and ended up with the original plus an edited copy. I just deleted the original.
              – RichF
              1 hour ago















            up vote
            0
            down vote













            Point by point.



            • faster addition/subtraction

              No: 8/15 + 5/7 is evaluated as 131/105 [(8*7 + 15*5)/(7*15)], so three multiplications for a single addition/subtraction, plus possibly a reduction to lowest terms (see the sketch after this list).

            • easier to print

              No: you have to print a human-readable string of digits, so you must transform 131/105 to 1.247... Or are you proposing to simply display the fraction? Not so useful for the user.
              PRINT 12/25 --> RESULT IS 12/25


            • less code

              No: floating-point code is compact; it is mostly just shifting, all in all.


            • easy to smoosh those values together into an array subscript (for sine tables etc)

              I don't understand what you mean. Floating-point 32- or 64-bit values can be packed together easily.


            • easier for laypeople to grok

              Irrelevant: laypeople do not program the BIOS of microcomputers. And statistics tell us that most laypeople do not understand fractions anyway.


            • x - x == 0

              The same holds in floating point.
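            To spell out that operation count, here is a minimal C sketch (mine, with an invented struct frac, not code from the question or answer) of one fraction addition; the three multiplications and the trailing reduction are exactly what the first bullet is counting:

            typedef unsigned int uint;

            struct frac {
                int num;
                uint den;   /* kept positive; the sign lives in num */
            };

            static uint gcd_u(uint a, uint b)
            {
                while (b != 0) {
                    uint t = a % b;
                    a = b;
                    b = t;
                }
                return a;
            }

            /* a/b + c/d = (a*d + c*b) / (b*d): three multiplications, then a GCD
               (a loop of divisions) and two more divisions to reduce the result.
               A binary floating-point add is an exponent compare, a shift and an add. */
            static struct frac frac_add(struct frac x, struct frac y)
            {
                struct frac r;
                r.num = x.num * (int)y.den + y.num * (int)x.den;
                r.den = x.den * y.den;
                uint g = gcd_u((uint)(r.num < 0 ? -r.num : r.num), r.den);
                if (g > 1) {
                    r.num /= (int)g;
                    r.den /= g;
                }
                return r;
            }

            With these definitions, frac_add applied to 8/15 and 5/7 yields 131/105, already in lowest terms, at the cost of three multiplications and a GCD loop of divisions.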





            share

            answered 6 mins ago
            edc65








            New contributor




            edc65 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
            Check out our Code of Conduct.

















            • Regarding the last point, (at least some) floating point values are not guaranteed to be equal to themselves?
              – Wilson
              2 mins ago














