Can any constant-time operation be applied N times in O(log(N)) time?
Suppose f is a constant-time function. Is it always possible to apply it n times in O(log(n)) time?

Example

If f is not, then apply_not_n_times(n, bool) = if n % 2 == 0 then bool else !bool, which is O(log(n)), since it is linear in the number of bits. That means the answer is yes for that particular choice of f. I wonder if it can be proven that, for any choice of f, there is always an O(log(n)) implementation of apply_f_n_times(n, x).
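(For concreteness, here is a minimal runnable sketch of the example above in Python; the function name follows the pseudocode.)

```python
# A minimal sketch of the example: applying `not` n times depends only on
# the parity of n, so one parity check replaces n applications.

def apply_not_n_times(n: int, b: bool) -> bool:
    # Equivalent to applying `not` n times; the n % 2 test is linear in
    # the number of bits of n (and a single bit read in binary).
    return b if n % 2 == 0 else not b

assert apply_not_n_times(4, True) is True   # even: unchanged
assert apply_not_n_times(7, True) is False  # odd: negated
```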
algorithms complexity-theory
asked Aug 16 at 6:30 – MaiaVictor
What does apply_not_n_times do? I'd assume it applies not n times. But it only applies it once. – pdexter Aug 16 at 6:36

@pdexter No, it is a function that does the same as if not was applied n times, but does it in less time. – kutschkem Aug 16 at 12:00

I understand now that you want to apply the function repeatedly n times, feeding its output to the input of the next invocation. I think this could be clearer in the question. – pdexter Aug 16 at 12:08

(Note: if n is represented in binary, then it's possible to check its parity with only one read operation.) – user92772 Aug 16 at 13:51

Of course not. Example: hash table lookup. If you can do it in O(log n) time... No, you just can't. – xuq01 Aug 16 at 16:37
2 Answers
Not necessarily. Let us use the most common (according to some people) model of computation: the single tape Turing machine where the output of an algorithm is simply what is left on the tape. Here is a counterexample by Bergi. The operation $m$ that moves the head one step forward and writes a fixed symbol will absolutely need $n$ steps if we want to repeat $m$ $n$ times.
On the other hand, in some models of computation, any constant-time operation can be applied $N$ times in $\mathcal{O}(\log(N))$ time and in $\mathcal{O}(1)$ amortized time.
This section is devoted to proving the above partial positive answer.
For the sake of discussion/proof/disproof, let us make the OP's question precise.
Let $f$ be a function that maps a set $S$ to itself. Define the function $r(f,n)$ as applying $f$ repeatedly $n$ times. That is, for any $n\in\Bbb{N}$, $r(f,n): S\to S$ is defined by the following.
$$ r(f,n) := \begin{cases}
f & \text{if } n = 1,\\
f\circ r(f,n-1) & \text{otherwise.}
\end{cases}$$
Claim: Suppose we can compute $f$ in constant time and $f(S)$ contains finitely many values. Then we can compute $r(f,n)$ in $\mathcal{O}(\log n)$ time and in $\mathcal{O}(1)$ amortized time.
Proof of the claim: (The simple observation is that we are dealing with a set of states whose cardinality is bounded by a constant.)
Without affecting the correctness of this proof, we will assume that $|S|$ is smaller than some constant; otherwise, we can replace $S$ by $f(S)$ and $f$ by its restriction to $f(S)$.
Let $S^S$ be the set of functions from $S$ to $S$. Since $|S|$ is smaller than some constant, $|S^S|$ is also smaller than some constant. So the sequence of functions $r(f,1), r(f,2), \cdots$ will be eventually periodic, since each term in the sequence is determined by the previous term in the same way, namely by composing the previous term with $f$. So there are constants $c_1, c_2$ such that $r(f,n+c_1) = r(f,n)$ for any $n>c_2$. Now let us compute and store, in constant time, the two-dimensional constant array $a[i,s]$ where $a[i,s]=r(f,i)(s)$ for $1\leq i\le c_2+c_1$ and $s\in S$. Given $N>c_2+c_1$, we can find the remainder $n_0$ of $N-c_2$ divided by $c_1$ in $\mathcal{O}(\log(N))$ time. Now we have
$$r(f,N)(s) = a[n_0+c_2,s].$$
Note that the only (possibly) non-constant operation needed for any $N$ is the one-time operation to get the remainder of $N$ divided by a constant. So once we have done some $\Theta(\log N)$ operations, the amortized time becomes $\mathcal{O}(1)$.
We have proved the claim.
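(To make the construction concrete, here is a minimal sketch in Python, assuming $f$ is given as a dictionary on a small finite set $S$; the names build_table and apply_f_n_times are illustrative, not part of the original claim.)

```python
# Sketch of the claim: precompute the eventually periodic sequence
# r(f,1), r(f,2), ... of iterated functions, then answer any n with one
# remainder operation.

def build_table(f):
    """Return (table, c2, c1) with table[i] = r(f, i+1), so that
    r(f, n + c1) = r(f, n) for all n > c2."""
    table = []             # table[i] is r(f, i+1), stored as a dict
    seen = {}              # function (as a frozenset of pairs) -> index
    g = dict(f)            # r(f, 1) = f
    while True:
        key = frozenset(g.items())
        if key in seen:
            c2 = seen[key]            # length of the tail before the cycle
            c1 = len(table) - c2      # length of the cycle
            return table, c2, c1
        seen[key] = len(table)
        table.append(g)
        g = {s: f[t] for s, t in g.items()}   # r(f, i+1) = f o r(f, i)

def apply_f_n_times(table, c2, c1, n, x):
    """Compute r(f, n)(x); the only n-dependent work is one remainder,
    costing O(log n) in the bit length of n."""
    i = n - 1 if n <= c2 + c1 else c2 + (n - 1 - c2) % c1
    return table[i][x]

# Usage: a 3-cycle 1 -> 2 -> 3 -> 1 with a tail 0 -> 1.
f = {0: 1, 1: 2, 2: 3, 3: 1}
table, c2, c1 = build_table(f)
assert apply_f_n_times(table, c2, c1, 10**18, 0) == 1
```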
In which models of computation?
Now we want to show that there exist models of computation in which a constant-time $f$ can only have finitely many values; more precisely, if we can compute $f$ in fewer than $c$ steps for some constant $c$ independent of the input, then $f(S)$ can only contain finitely many values.
One such model of computation is, as mentioned by David Richerby, the multi-tape TM where the output must be written on a designated tape which must not contain any part of the input. (I would venture to say this might be the most common model that fits most people's mental image of computation, where we have input at one side of a box and output at the other side. However, whether this model is even common or not is not the focus of this post.) In this model of computation, since any constant-time operation $f$ can only move the head on that designated tape at most $c$ steps forward and at most $c$ steps backward for some constant $c$, it can produce at most $|\Sigma|^{2c+1}$ distinct outputs, where $\Sigma$ is the set of symbols used on that designated tape. That is, $f$ can only have finitely many values.
Because of the multitude of models of computation, we will not and cannot list all complying models. Anyway, thanks to the claim we just proved, in those models of computation we have the partial positive answer stated in the second paragraph of this post.
Although we have shown that in some models of computation we can apply a given constant-time operation $N$ times in $\mathcal{O}(\log(N))$ time, the constant term hidden in the $\mathcal{O}$ notation might be impractically large. For example, for some endomorphism $f$ of a set of 1000 elements, the smallest $c_1=1516385558956728808659224171023365600$ (which is $g(1000)$ for Landau's function $g$). For a cryptographic hash operation, which can have about $2^{256}$ states, the smallest $c_1$ might have more than trillions of digits. Although that constant term can be reduced significantly, it remains impractically large. In that sense, that partial positive answer may not be what the OP might have been hoping for at all. It does not lessen our confidence in those cryptographic hash operations in any way.
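(For a sense of scale, here is a hedged sketch that computes Landau's function $g(n)$, the maximum order of a permutation of $n$ elements and hence the worst-case period $c_1$ above, by a dynamic program over prime-power cycle lengths. The helper names are illustrative.)

```python
# Landau's function g(n): the largest lcm of any partition of n.
# Optimal partitions use prime-power parts, at most one power per prime.

def landau(n: int) -> int:
    def primes_up_to(m):
        sieve = [True] * (m + 1)
        sieve[0:2] = [False, False]
        for i in range(2, int(m ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
        return [i for i, is_p in enumerate(sieve) if is_p]

    best = [1] * (n + 1)   # best[k]: max lcm with parts summing to <= k
    for p in primes_up_to(n):
        prev = best[:]     # snapshot, so at most one power of p is chosen
        q = p
        while q <= n:
            for k in range(q, n + 1):
                if prev[k - q] * q > best[k]:
                    best[k] = prev[k - q] * q
            q *= p
    return best[n]

print(landau(10))   # 30, from the cycle type (2, 3, 5)
print(landau(100))  # 232792560
```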
answered Aug 16 at 11:42, edited Aug 31 at 7:30 – Apass.Jack
Let us continue this discussion in chat. – David Richerby Aug 16 at 14:24

I notice you've edited this. You seem to have addressed my objections to the initial version of the answer, so I've removed my comments and undone my downvote. Ironically, I think the counterexample you've added at the start is wrong. It computes the function "change the $m$th character to $x$" for some fixed $m$ and $x$. This function is idempotent, so applying it $n$ times is the same as applying it once, and that takes constant time. – David Richerby Aug 31 at 16:53

@DavidRicherby, $m$ is the operation. $m$ is the function. $m$ is not a number. What $m$ does is "move forward one step and write one fixed symbol". It looks like you did not look carefully. Or should I just rephrase the counterexample as "The operation that moves the head one step forward and writes a fixed symbol will absolutely need $n$ steps if we want to repeat that operation $n$ times"? – Apass.Jack Aug 31 at 19:16

The question title says "operation" but the question itself is about functions. I agree that applying "move the head forwards" $N$ times must take time $\Theta(N)$, but I don't think that's what the question is asking. As a function, "move the head one step forward and write $x$" just replaces the second character of the input with $x$: that function is, as I said, idempotent. – David Richerby Aug 31 at 20:42

@DavidRicherby I see. Had I included the position of the head as a part of the input as well as a part of the output, then the counterexample would have been valid. Right? – Apass.Jack Aug 31 at 21:14
Absolutely not. For example, a cryptographic hash is a constant-time operation, which is usually applied, say, 100,000 times, intentionally, to make it more time-consuming for a hacker to check a single key. log 100,000 is not particularly large, so the cryptographic community would be up in arms if you found an O(log n) method.
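(For reference, a minimal sketch of the iterated-hashing pattern this answer describes; real systems use PBKDF2, scrypt, or Argon2, so SHA-256 here is purely illustrative.)

```python
# Key stretching: iterate a hash many times so that each password guess
# costs that many hash evaluations. No O(log n) shortcut is known for
# collapsing this particular iteration in practice.
import hashlib

def stretch(password: bytes, rounds: int = 100_000) -> bytes:
    digest = password
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest

print(stretch(b"correct horse battery staple").hex())
```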
answered Aug 16 at 8:03 – gnasher729
I am afraid you forget what $O(\log n)$ means. In fact, in your case, for any particular cryptographic algorithm in practice (that I know), the answer is apparently YES. It is only that the constant multiplier in the $O(\log n)$ is impractically large (which is enjoyed by the cryptographic community). It is $O(\log n)$ from the view of computer science. Please check my answer. – Apass.Jack Aug 16 at 11:43

This answer would be more consistent if you replaced "Absolutely not" with "Most likely, no". Everyone assuming something is impossible can't be used as a proof. – Dukeling Aug 16 at 13:31

Note that all cryptographic hash functions (or at least SHA256) have a finite domain/codomain, so applying them a constant number of times is actually $O(1)$. – user92772 Aug 16 at 13:56