Logarithmic run time for calculating prime numbers?
Here's the function I'm currently analyzing. I know it's not the most optimal, but I'm not understanding the $\Theta(\cdot)$ of this algorithm. I've been told that it's not actually $\Theta(n)$ but instead a logarithmic run time, and that it has something to do with the bits of the value passed in. I don't understand how the bits cause the logarithm, so if someone can explain that, it'd be great.
function isPrime(n):
1 if n = 2 return true
2 for i from 2 to n
3 if n % i = 0 return false
4 return true
runtime-analysis primes
edited 7 hours ago
David Richerby
asked 8 hours ago
Justin Li
2 Answers
Let's use a different parameter for your input, $m$. The number of instructions that your algorithm executes in the worst case is $O(m)$.
We often measure the running time of algorithms in terms of the length of their input (measured either in bits or in words), often denoted by $n$. The length of your input is $n \approx \log_2 m$, since this is how long it takes to encode the integer $m$ in binary. Therefore your algorithm actually performs an exponential number of instructions, $O(2^n)$.
A different issue is the cost of arithmetic operations, such as the modulo operation on line 3. In most models of computation (and in real life), the cost of such an operation depends on the length of the numbers being operated on. Therefore your algorithm runs in time which is worse than $2^n$ by a polynomial factor $n^{O(1)}$ (the exact polynomial factor depends on the algorithm you use to perform the division). Such a running time is usually denoted $\tilde{O}(2^n)$.
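To make the relationship concrete, here is a small Python sketch (the function name and the test values are my own choices, not part of the answer): it counts the loop iterations of the naive test and compares them with $2^n$, where $n$ is the bit length of the input $m$.

```python
def trial_division_iterations(m):
    """Count the loop iterations of the naive primality test on input m."""
    count = 0
    for i in range(2, m + 1):  # "for i from 2 to m"
        count += 1
        if m % i == 0:         # smallest divisor found; the loop stops here
            break
    return count

# For a prime m the loop runs all the way up to i = m, so the iteration
# count is about m, i.e. about 2^n where n = m.bit_length().
for m in [127, 8191, 131071]:  # primes, so this is the worst case
    n = m.bit_length()
    print(f"m={m}  bits n={n}  iterations={trial_division_iterations(m)}  2^n={2**n}")
```

For each prime shown, the iteration count sits just below $2^n$: linear in the value $m$, exponential in the input length $n$.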
So does $O(m)$ have anything to do with the $O(2^n)$ at all, or are they just two ways of measuring the algorithm? And how does the encoding work exactly? I didn't think it was necessary to count from 0 all the way to n in binary.
– Justin Li
3 hours ago
These are two ways to measure the running time. As for the encoding, it's really an implementation detail.
– Yuval Filmus
18 mins ago
answered 7 hours ago
Yuval Filmus
Well, first, the algorithm is incorrect: it claims that every integer less than or equal to $2$ is prime, and every integer greater than $2$ is composite.
- Any $n \leq 1$ fails the test at line $1$, does zero iterations of the loop at lines $2$–$3$, and returns true at line $4$.
- $n = 2$ triggers the return true on line $1$.
- Any $n > 2$ fails the test at line $1$. If it is composite, it returns false at line $3$ for its smallest prime factor; if it is prime, then the loop will get as far as $i = n$, at which point n % i = 0 is true, so we return false at line $3$.
So, suppose we correct line $2$ to be for i from 2 to n-1, and assume the input is greater than $1$. Then, for any prime input $n > 2$, the loop at lines $2$–$3$ will run for each value $2, \dots, n-1$, which is $n - 2 = \Theta(n)$ iterations. So your suspicion that the running time is $\Theta(n)$ is correct: this algorithm is not logarithmic.
In general, as Yuval explains, we measure the running time of algorithms in terms of the length of the input as written in binary. Since the number $n$ takes $\Theta(\log n)$ bits to write down, and the running time of the algorithm is $\Theta(n) = \Theta(2^{\text{input length}})$, this algorithm would normally be described as having exponential running time.
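To illustrate the fix, here is a Python sketch of the corrected routine (the names is_prime and is_prime_sqrt are my own): the loop bound becomes n-1 as described above, inputs below $2$ are rejected, and a second variant shows the standard trick of stopping at $\sqrt{n}$, which cuts the worst case to $O(\sqrt{n})$ iterations but is still exponential in the bit length of the input.

```python
def is_prime(n):
    """Corrected trial division: the loop runs for i = 2 .. n-1."""
    if n < 2:
        return False            # 0, 1 and negatives are not prime
    for i in range(2, n):       # "for i from 2 to n-1"
        if n % i == 0:
            return False
    return True                 # no divisor found, so n is prime

def is_prime_sqrt(n):
    """Same idea, but a composite n must have a divisor at or below sqrt(n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:           # only test divisors up to sqrt(n)
        if n % i == 0:
            return False
        i += 1
    return True
```

Both variants agree on every input; only the iteration count differs ($n - 2$ versus about $\sqrt{n}$ in the worst case).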
answered 7 hours ago
David Richerby