Why not fractions?

Consider all those early home computers: the ZX Spectrum, the Galaksija, the VIC-20, the Apple I/II/III, and so on. They all have some 8-bit CPU which does integer arithmetic and no floating point, so these machines all carry some kind of floating-point implementation in software.



My question is, why not use fractions? A fraction is just two numbers which, when one is divided by the other, give the number you are interested in. So picture a struct which holds one integer of whatever size, plus one byte each for the numerator and the denominator (see the sketch below). And we probably all remember from school how to add and subtract fractions.



I haven't really calculated it, but it seems intuitive that the advantages over floating point would be:



  • faster addition/subtraction

  • easier to print

  • less code

  • easy to smoosh those values together into an array subscript (for sine tables etc)

  • easier for laypeople to grok

  • x - x == 0

And the disadvantages:



  • probably the range is not as large, but that's easy enough to overcome

  • not as precise with very small numbers

  • marketability?

It seems like such an obvious tradeoff (precision for speed) that I'm surprised I can't find any home computers or BASICs that did arithmetic this way. What am I missing?
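
To make the proposal concrete, here is a rough sketch (Python standing in for whatever an 8-bit implementation would really look like; the helper names and field sizes are made up purely for illustration):

    # Rough sketch of the idea: a value is a (numerator, denominator) pair,
    # added and subtracted the schoolbook way: common denominator, combine
    # the numerators, reduce.  On a real 8-bit machine these would be
    # fixed-size byte/word fields rather than Python integers.
    from math import gcd

    def make_frac(num, den):
        """Reduce to lowest terms so both parts stay as small as possible."""
        g = gcd(num, den)
        return (num // g, den // g)

    def frac_add(a, b):
        (an, ad), (bn, bd) = a, b
        return make_frac(an * bd + bn * ad, ad * bd)

    def frac_sub(a, b):
        (an, ad), (bn, bd) = a, b
        return make_frac(an * bd - bn * ad, ad * bd)

    print(frac_add(make_frac(1, 2), make_frac(1, 3)))   # (5, 6): exactly 5/6
    print(frac_sub(make_frac(1, 3), make_frac(1, 3)))   # (0, 1): x - x == 0 exactly
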
  • Not sure I understand exactly what you mean by this, but I see lookup tables, and lookup tables take memory. Memory is something you don't want to waste on a machine that has only 64 kB of contiguous memory.
    – UncleBod
    1 hour ago

  • @UncleBod is it actually unclear what I mean by the question? It's essentially "why didn't home computers use fractions to implement what Fortran calls real".
    – Wilson
    1 hour ago

  • And of course lookup tables are still handy; Commodore BASIC uses one for its implementation of sin(x) and cos(x).
    – Wilson
    1 hour ago

  • Actually, pretty much all floating point representations in home (and all other) computers do make use of fractional representations of numbers, but, for obvious reasons, restrict the denominators to powers of two.
    – tofro
    1 hour ago

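To illustrate tofro's comment: a binary floating-point value really is a fraction whose denominator is a power of two, which is exact for something like 5/64 but can never be exact for 1/3. A quick check using Python's standard fractions module (purely illustrative, nothing to do with the original 8-bit ROMs):

    from fractions import Fraction

    print(Fraction(0.078125))               # 5/64: power-of-two denominator, stored exactly
    print(Fraction(1/3))                    # a huge fraction over 2**54, not exactly 1/3
    print(Fraction(1/3) == Fraction(1, 3))  # False
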
software floating-point

asked 2 hours ago, edited 1 hour ago – Wilson

2 Answers






When adding or subtracting fractions, you need to find the least common multiple of the two denominators. That's an expensive operation, much more expensive than adding or subtracting floating-point numbers, which just requires shifts.

Multiplication is also more expensive, because now you need two multiplications (numerators and denominators) instead of just one. Similarly for division.

Also, the numerator and denominator of a fraction will eventually grow large, which means you won't be able to store them in the limited memory of an 8-bit system. Floating point deals with this by rounding.

So: it's more expensive, there's no way to limit the memory used for truly general applications, and scientific applications are geared towards using floating point anyway.







answered 1 hour ago – dirkt
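
To put rough numbers on that argument, here is a toy Python comparison. It counts abstract steps rather than 6502/Z80 cycles, and frac_add/float_add are made-up illustrations, not code from any real BASIC; note that finding the LCM up front, as described above, needs the same kind of GCD division loop as the reduction done here afterwards:

    def gcd_with_steps(a, b):
        """Euclid's algorithm, returning (gcd, number of division steps used)."""
        steps = 0
        while b:
            a, b = b, a % b
            steps += 1
        return a, steps

    def frac_add(a, b):
        (an, ad), (bn, bd) = a, b
        num = an * bd + bn * ad              # three multiplications and an add...
        den = ad * bd
        g, steps = gcd_with_steps(num, den)  # ...plus a whole loop of divisions
        return (num // g, den // g), steps

    def float_add(m1, e1, m2, e2):
        """Toy (mantissa, exponent) addition: align exponents, then one add."""
        if e1 < e2:
            (m1, e1), (m2, e2) = (m2, e2), (m1, e1)
        return m1 + (m2 >> (e1 - e2)), e1    # shifting is the only alignment cost

    print(frac_add((355, 113), (22, 7)))     # the reduction alone takes several division steps
    print(float_add(0b1101000, 3, 0b1011010, 1))
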
There is a mathematical problem with your idea. If you choose to store fractions with a fixed-length numerator and denominator, that works fine until you try to do arithmetic with them. At that point, the numerator and denominator of the result may become much bigger.

Take a simple example: you could easily store 1/1000 and 1/1001 exactly, using 16-bit integers for the numerator and denominator. But 1/1000 - 1/1001 = 1/1001000, and suddenly the denominator is too big to store in 16 bits.

If you decide to approximate the result somehow to keep the numerator and denominator within a fixed range, you haven't really gained anything over conventional floating point. Floating point only deals with fractions whose denominators are powers of 2, so the problem doesn't arise - but that means you can't store most fractional values exactly, not even "simple" ones like 1/3 or 1/5.

Incidentally, some modern software applications do store fractions exactly, and operate on them exactly - but they store the numerators and denominators using a variable-length representation, which can store any integer exactly; whether it has 10 digits or 10 million digits doesn't matter (except that calculations on 10-million-digit numbers take longer, of course).







answered 28 mins ago – alephzero
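
The 16-bit example above is easy to check with Python's fractions module, whose Fraction type keeps the numerator and denominator as arbitrary-precision integers; that is exactly the kind of "variable length representation" the last paragraph refers to:

    from fractions import Fraction

    exact = Fraction(1, 1000) - Fraction(1, 1001)
    print(exact)                         # 1/1001000
    print(exact.denominator > 0xFFFF)    # True: no longer fits in a 16-bit field
    print(exact.denominator & 0xFFFF)    # 17960: what a 16-bit field would silently wrap to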