Why not fractions?
Considering all those early home computers (the ZX Spectrum, the Galaksija, the VIC-20, the Apple I/II/III, and so on), they all have some 8-bit CPU which does integer arithmetic and no floating point. So these machines all carry some kind of floating-point implementation in software.
My question is, why not use fractions? A fraction is just two numbers which, when one is divided by the other, yield the number you are interested in. So picture a struct which holds one integer of whatever size, and one byte each for the dividend and the divisor. And probably we all remember from school how to add and subtract fractions.
I haven't really calculated it, but it seems intuitive that the advantages over floating point would be:
- faster addition/subtraction
- easier to print
- less code
- easy to smoosh those values together into an array subscript (for sine tables etc)
- easier for laypeople to grok
- x - x == 0 holds exactly
And the disadvantages:
- probably the range is not as large, but that's easy enough to overcome
- not as precise with very small numbers
- marketability?
It seems like such an obvious tradeoff (precision for speed) that I'm surprised I can't find any home computers or BASICs that did arithmetic this way. What am I missing?
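To illustrate the "x - x == 0" bullet: rational arithmetic on decimal values is exact, whereas binary floating point cannot represent a value like 0.1 exactly. A quick sketch using Python's arbitrary-precision Fraction type (used here purely for illustration; an 8-bit implementation would of course use small fixed-size integers):

```python
from fractions import Fraction

# Binary floating point cannot represent 0.1, 0.2 or 0.3 exactly,
# so an algebraically-zero expression comes out slightly nonzero:
print(0.1 + 0.2 - 0.3)   # a tiny nonzero rounding error

# The same computation with rational arithmetic is exact:
print(Fraction(1, 10) + Fraction(2, 10) - Fraction(3, 10))   # 0
```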
software floating-point
asked 2 hours ago, edited 1 hour ago – Wilson
Not sure I understand exactly what you mean by this, but I see lookup tables, and lookup tables take memory. Memory is something you don't want to waste on a machine that has only 64 kB of contiguous memory.
– UncleBod
1 hour ago
@UncleBod is it actually unclear what I mean by the question? It's essentially "why didn't home computers use fractions to implement what Fortran calls real".
– Wilson
1 hour ago
And of course lookup tables are still handy; Commodore BASIC uses one for its implementation of sin(x) and cos(x).
– Wilson
1 hour ago
Actually, pretty much all floating-point representations in home (and all other) computers do make use of fractional representations of numbers, but, for obvious reasons, they restrict the denominators to powers of two.
– tofro
1 hour ago
2 Answers
When adding or subtracting fractions, you first need a common denominator, which means finding the least common multiple of the two denominators. That's an expensive operation, much more expensive than adding or subtracting floating-point numbers, which only requires shifts to align the exponents.
Multiplication is also more expensive, because now you need to multiply two pairs of numbers (numerators and denominators) instead of just one pair of mantissas. Similarly for division.
Also, the numerator and denominator of a fraction grow with each operation, so eventually you won't be able to store them in the limited memory of an 8-bit system. Floating point deals with this by rounding.
So: it's more expensive, there's no way to bound the memory used for truly general applications, and scientific applications were geared towards floating point anyway.
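For concreteness, here is a minimal sketch (an editorial illustration, not part of the original answer) of schoolbook fraction addition. The reduce step needs a GCD, i.e. Euclid's loop of divisions, where floating-point addition only needs shifts:

```python
from math import gcd

def frac_add(a, b, c, d):
    """Return a/b + c/d as a reduced (numerator, denominator) pair."""
    num = a * d + c * b          # bring both to the common denominator b*d
    den = b * d
    g = gcd(num, den)            # Euclid's algorithm: repeated division,
    return num // g, den // g    # the expensive step on an 8-bit CPU

print(frac_add(1, 3, 1, 5))      # (8, 15), i.e. 1/3 + 1/5 = 8/15
```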
answered 1 hour ago – dirkt
There is a mathematical problem with your idea. If you choose to store fractions with a fixed-length numerator and denominator, that works fine until you try to do arithmetic with them. At that point, the numerator and denominator of the result may become much bigger.
Take a simple example: you could easily store 1/1000 and 1/1001 exactly, using 16-bit integers for the numerator and denominator. But 1/1000 - 1/1001 = 1/1001000, and suddenly the denominator is too big to store in 16 bits.
If you decide to approximate the result somehow to keep the numerator and denominator within a fixed range, you haven't really gained anything over conventional floating point. Floating point only deals with fractions whose denominators are powers of 2, so the problem doesn't arise - but that means you can't store most fractional values exactly, not even "simple" ones like 1/3 or 1/5.
Incidentally, some modern software applications do store fractions exactly, and operate on them exactly - but they store the numerators and denominators using a variable-length representation, which can store any integer exactly. Whether it has 10 digits or 10 million digits doesn't matter (except that it takes longer to do calculations on 10-million-digit numbers, of course).
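The overflow in the example above, and the variable-length alternative the last paragraph mentions, can both be checked with Python's Fraction type (an arbitrary-precision rational, used here only as an illustration):

```python
from fractions import Fraction

x = Fraction(1, 1000) - Fraction(1, 1001)
print(x)                        # 1/1001000
print(x.denominator > 0xFFFF)   # True: too big for a 16-bit denominator
```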
answered 28 mins ago – alephzero