Some interesting observations on a sum of reciprocals
This recent question is the motivation for this post.
Consider the following equation $$\frac{1}{x-1}+\frac{1}{x-2}+\cdots+\frac{1}{x-k}=\frac{1}{x-k-1}$$ where $k>1$.
My claims:
There are $k$ solutions, all of which are real.
Let $x_{\min}$ be the minimum of these $k$ solutions. Then as $k\to\infty$, $x_{\min}$ converges. (If it does, to what value does it converge?)
As $k\to\infty$, all of the solutions get closer and closer to integers, which are bounded below. Furthermore, these integers will be $1, 2, 3, \ldots, k-1, k+1$.
To see these patterns, I provide the solutions for $x$ below. I used W|A for $k\ge4$. The values in $\color{blue}{\text{blue}}$ are those of $x_{\min}$.
$$\begin{array}{c|ccccc}
k & 2 & 3 & 4 & 5 & 6 \\
\hline
x & 4.414 & 4.879 & 5.691 & 6.592 & 7.530 \\
 & \color{blue}{1.585} & 2.652 & 3.686 & 4.701 & 5.722 \\
 & & \color{blue}{1.468} & 2.545 & 3.588 & 4.615 \\
 & & & \color{blue}{1.411} & 2.487 & 3.531 \\
 & & & & \color{blue}{1.376} & 2.449 \\
 & & & & & \color{blue}{1.352}
\end{array}$$
Also, when $k=2$, the polynomial in question is $x^2-6x+7$, and when $k=3$, it is $x^3-9x^2+24x-19$.
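These values can be reproduced numerically. A minimal sketch in plain Python (the bracketing intervals $(r, r+1)$ follow the pattern visible in the table; the `1e-9` offsets keeping the endpoints away from the poles are my choice):

```python
def f(x, k):
    # LHS minus RHS of the equation: 1/(x-1) + ... + 1/(x-k) - 1/(x-(k+1))
    return sum(1.0 / (x - i) for i in range(1, k + 1)) - 1.0 / (x - (k + 1))

def root_in(a, b, k, tol=1e-12):
    # On (r, r+1) the difference goes from +inf (just right of a) to -inf
    # (just left of b), so plain bisection on the sign of f finds the root.
    a, b = a + 1e-9, b - 1e-9
    while b - a > tol:
        m = (a + b) / 2
        if f(m, k) > 0:
            a = m
        else:
            b = m
    return (a + b) / 2

# Smallest root for k = 2: agrees with the root 3 - sqrt(2) of x^2 - 6x + 7.
print(round(root_in(1, 2, 2), 3))  # 1.586
```

(The table's $1.585$ is the same root truncated rather than rounded: $3-\sqrt2\approx1.58579$.)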
The reason I think $x_{\min}$ converges is that the difference between each value and the previous one gets smaller and smaller as $k$ increases.
Are my claims true?
Tags: polynomials, convergence, roots
It is easy to prove that it has at least $k-1$ distinct real solutions
– Exodd
Aug 7 at 14:22
and it is also true that $1<x_{\min}<2$ for every $k$, and that it converges to $1$
– Exodd
Aug 7 at 14:24
@Exodd: if you do that, you know it has $k$ distinct solutions because of the limits of each side as $x \to \pm\infty$
– Ross Millikan
Aug 7 at 14:27
The method I used (removing the asymptotes) has been the basis of significant improvements in chemical engineering calculations (in particular for the so-called Rachford-Rice and Underwood equations). I have published quite a lot of papers on these. I had a lot of fun with your problem (may I confess that I cannot resist an equation?). Cheers.
– Claude Leibovici
Aug 8 at 8:05
I updated my answer for some improvements. Cheers and thanks for the problem.
– Claude Leibovici
Aug 10 at 8:05
asked Aug 7 at 14:15 by TheSimpliFire (edited Aug 7 at 15:04)
3 Answers
Accepted answer (by Exodd, answered Aug 7 at 14:48; edited Aug 7 at 16:46 by TheSimpliFire):
Both your claims are true.
If you call
$$
f(x) = \frac{1}{x-1}+\frac{1}{x-2}+\cdots+\frac{1}{x-k}-\frac{1}{x-k-1}
$$
then $f(1^+) = +\infty$, $f(2^-) = -\infty$, and $f$ is continuous on $(1,2)$, so it has a root in $(1,2)$.
The same can be said about $(2,3)$, $(3,4), \ldots, (k-1,k)$, so there are at least $k-1$ distinct real roots. Moreover, clearing denominators shows that $f$ has the same roots as a degree-$k$ polynomial, and a degree-$k$ polynomial with at least $k-1$ real roots must in fact have $k$ real roots, since complex roots come in conjugate pairs.
The last root lies in $(k+1,+\infty)$, since $f(k+1^+) = -\infty$ while $f(x)\to 0^+$ as $x\to+\infty$.
The least root $x_{\min}$ must lie in $(1,2)$, since $f(x)<0$ for every $x<1$. Moreover,
$$
f(x) = 0\implies x = 1 + \frac{1}{\frac{1}{x-k-1}-\frac{1}{x-2}-\cdots-\frac{1}{x-k}}
$$
and knowing $1<x<2$, we infer $\frac{1}{x-k-1}>\frac{1}{x-2}$ and
$$
1<x = 1 + \frac{1}{\frac{1}{x-k-1}-\frac{1}{x-2}-\cdots-\frac{1}{x-k}}
< 1 - \frac{1}{\frac{1}{x-3}+\cdots+\frac{1}{x-k}}\to 1
$$
so $x_{\min}$ converges to $1$.
Regarding the third claim, notice that you may repeat the same argument for any root except the biggest. Say $x_r$ is the $r$-th root, with $r<k$; we know that $r<x_r<r+1$.
$$
f(x_r) = 0\implies x_r = r + \frac{1}{\frac{1}{x_r-k-1}-\frac{1}{x_r-1}-\cdots-\frac{1}{x_r-k}}
$$
(where the term $\frac{1}{x_r-r}$ is omitted from the sum in the denominator),
but $\frac{1}{x_r-k-1}>\frac{1}{x_r-1}$ holds, so
$$
r<x_r = r + \frac{1}{\frac{1}{x_r-k-1}-\frac{1}{x_r-1}-\cdots-\frac{1}{x_r-k}}
< r - \frac{1}{\frac{1}{x_r-2}+\cdots+\frac{1}{x_r-k}}\to r
$$
so $x_r$ converges to $r$.
For the biggest root, we know $k+1<x_k$ and
$$
f(x_k) = 0\implies k+1 < x_k = k+1 + \frac{1}{\frac{1}{x_k-1}+\cdots+\frac{1}{x_k-k}} \to k+1
$$
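These limits are easy to check numerically. A sketch (the generic bisection and the upper bracket $10k$ for the largest root are my choices; the bracket works because $f(x)\to 0^+$ for large $x$):

```python
def f(x, k):
    # f(x) = 1/(x-1) + ... + 1/(x-k) - 1/(x-(k+1))
    return sum(1.0 / (x - i) for i in range(1, k + 1)) - 1.0 / (x - (k + 1))

def bisect(a, b, k, increasing):
    # Bisection on (a, b); `increasing` says whether f crosses from - to +.
    for _ in range(200):
        m = (a + b) / 2
        if (f(m, k) < 0) == increasing:
            a = m
        else:
            b = m
    return (a + b) / 2

for k in (10, 100, 1000):
    x_min = bisect(1 + 1e-9, 2 - 1e-9, k, increasing=False)     # should drift to 1
    x_max = bisect(k + 1 + 1e-9, 10.0 * k, k, increasing=True)  # should approach k + 1
    print(k, round(x_min, 4), round(x_max - (k + 1), 4))
```

Both the decrease of $x_{\min}$ toward $1$ and the shrinking gap $x_k-(k+1)$ are visible, although the latter closes only at the slow rate $1/\log k$, matching the harmonic sum in the last display.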
I would've argued that the LHS is continuous, monotone, and ranges from $-\infty$ to $+\infty$ between natural numbers, while the RHS is continuous and monotone, and since the LHS is asymptotically larger than the RHS as $x\to\infty$, there must be a solution $x>k+1$, where the LHS is less than the RHS. In any case, sound argument.
– Simply Beautiful Art
Aug 7 at 14:52
I think they're quite equivalent. Now I'm wondering if $x_{\max}$ converges to $k+1$
– Exodd
Aug 7 at 14:57
@Exodd Nice proofs. The third claim of converging to integers extends the claim about $x_{\min}$ to all solutions of $x$. I strongly believe that it's true after plotting in Desmos, but I can't construct a rigorous proof for it.
– TheSimpliFire
Aug 7 at 14:59
Hard to exactly say what that's supposed to mean, $k+1$ is a moving target :P
– Simply Beautiful Art
Aug 7 at 15:00
Let's say $x_{\max}(k)-k-1 = o_k(1)$
– Exodd
Aug 7 at 15:00
Answer by Hagen von Eitzen (answered Aug 7 at 14:52):
For the base case,
$$\tag{1}f_2(x)=\frac{1}{x-1}+\frac{1}{x-2}-\frac{1}{x-3},$$
one readily verifies that there is a root in $(1,2)$ and a root $x^*$ in $(3,+infty)$.
If we multiply out the denominators of
$$f_k(x)=\frac{1}{x-1}+\frac{1}{x-2}+\ldots+\frac{1}{x-k}-\frac{1}{x-k-1},$$
we obtain the equation
$$\tag{2}(x-1)(x-2)\cdots(x-k-1)\,f_k(x)=0,$$
which is a polynomial equation of degree (at most) $k$, so we expect $k$ solutions, but some of these may be complex, or repeated, or happen to be among $1,2,\ldots,k+1$ and thus not allowed for the original equation.
But $f_k(x)$ has simple poles with jumps from $-\infty$ to $+\infty$ at $1,2,3,\ldots,k$, and a simple pole with a jump from $+\infty$ to $-\infty$ at $k+1$, and is continuous otherwise. It follows that there is (at least) one real root in $(1,2)$, at least one in $(2,3)$, etc., up to $(k-1,k)$, so there are at least $k-1$ distinct real roots.
Additionally, for $x>k+1$ and $k\ge2$, we have
$$f_k(x)\ge f_2(x-(k-2)),$$
because $f_k(x)$ consists of the shifted $f_2$ plus the additional terms $\frac{1}{x-i}$ for $i\le k-2$, all positive there.
It follows that there is another real root between $k+1$ and $x^*+k-2$.
So indeed, we have $k$ distinct real roots.
From the above, the smallest root is always in $(1,2)$.
It follows from $f_{k+1}(x)<f_k(x)$ for $x\in(1,2)$, and the fact that all $f_k$ are strictly decreasing there, that $x_{\min}$ decreases with increasing $k$. As a decreasing bounded sequence, it does have a limit.
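The polynomial in $(2)$ can be built explicitly by summing the cofactor products, which makes the degree-$k$ claim easy to check by machine. A sketch with numpy (`np.poly` returns the coefficients of the monic polynomial with the given roots):

```python
import numpy as np

def cleared_poly(k):
    """Coefficients (highest degree first) of (x-1)(x-2)...(x-(k+1)) * f_k(x).

    Each term 1/(x-i) of f_k contributes prod_{j != i} (x-j), and the
    subtracted term 1/(x-(k+1)) contributes prod_{j != k+1} (x-j).
    """
    pts = list(range(1, k + 2))
    acc = np.zeros(k + 1)
    for i in range(1, k + 1):
        acc = acc + np.poly([j for j in pts if j != i])
    return acc - np.poly(pts[:-1])

print(cleared_poly(2).tolist())  # [1.0, -6.0, 7.0], i.e. x^2 - 6x + 7
print(cleared_poly(3).tolist())  # twice the question's x^3 - 9x^2 + 24x - 19
```

`np.roots(cleared_poly(k))` then confirms numerically that all $k$ roots are real for small $k$.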
Answer by Claude Leibovici:
Consider that you look for the first zero of the function
$$f(x)=\sum_{i=1}^k \frac{1}{x-i}-\frac{1}{x-k-1}$$ which can be written, using harmonic numbers, as
$$f(x)=H_{-x}-H_{k-x}-\frac{1}{x-k-1}.$$ Remove the asymptotes at $x=1$ and $x=2$ using
$$g(x)=(x-1)(x-2)f(x)=2x-3+(x-1)(x-2)\left(H_{2-x}-H_{k-x}-\frac{1}{x-k-1}\right).$$ You can approximate the solution using a Taylor expansion around $x=1$ and get
$$g(x)=-1+(x-1)\left(-\frac{1}{k}+\psi^{(0)}(k)+\gamma+1\right)+O\left((x-1)^2\right)$$ Ignoring the higher-order terms, this gives as an approximation
$$x_{\text{est}}=1+\frac{k}{k\left(\gamma+1+\psi^{(0)}(k)\right)-1}$$ which seems to be "decent" (and, for sure, confirms your claims).
$$\left(
\begin{array}{ccc}
k & x_{\text{est}} & x_{\text{sol}} \\
2 & 1.66667 & 1.58579 \\
3 & 1.46154 & 1.46791 \\
4 & 1.38710 & 1.41082 \\
5 & 1.34682 & 1.37605 \\
6 & 1.32086 & 1.35209 \\
7 & 1.30238 & 1.33430 \\
8 & 1.28836 & 1.32040 \\
9 & 1.27726 & 1.30914 \\
10 & 1.26817 & 1.29976 \\
11 & 1.26055 & 1.29179 \\
12 & 1.25403 & 1.28489 \\
13 & 1.24837 & 1.27884 \\
14 & 1.24339 & 1.27347 \\
15 & 1.23895 & 1.26867 \\
16 & 1.23498 & 1.26433 \\
17 & 1.23138 & 1.26039 \\
18 & 1.22810 & 1.25678 \\
19 & 1.22510 & 1.25346 \\
20 & 1.22233 & 1.25039
\end{array}
\right)$$
For infinitely large values of $k$, the asymptotics of the estimate would be
$$x_{\text{est}}=1+\frac{1}{\log\left(k\right)+\gamma+1}$$
For $k=1000$, the exact solution is $1.12955$ while the first approximation gives $1.11788$ and the second $1.11786$.
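Since $\psi^{(0)}(k)=H_{k-1}-\gamma$ for integer $k$, the $\gamma$'s cancel and the first estimate reduces to $x_{\text{est}}=1+k/\left(k(H_{k-1}+1)-1\right)$, so no special functions are needed to reproduce the table. A sketch (the hard-coded value of $\gamma$ is only needed for the asymptotic form):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def x_est(k):
    # gamma + 1 + psi(k) = H_{k-1} + 1 for integer k, so gamma drops out
    h = sum(1.0 / n for n in range(1, k))  # harmonic number H_{k-1}
    return 1.0 + k / (k * (h + 1.0) - 1.0)

def x_est_asymptotic(k):
    # large-k form: 1 + 1/(log k + gamma + 1)
    return 1.0 + 1.0 / (math.log(k) + EULER_GAMMA + 1.0)

print(round(x_est(2), 5), round(x_est(20), 5))  # 1.66667 1.22233, as in the table
print(round(x_est_asymptotic(1000), 5))         # 1.11786
```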
Using such estimates makes Newton's method converge quite fast (shown below for $k=1000$).
$$\left(
\begin{array}{cc}
n & x_n \\
0 & 1.117855442 \\
1 & 1.129429575 \\
2 & 1.129545489 \\
3 & 1.129545500
\end{array}
\right)$$
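The iteration is easy to reproduce. A sketch that seeds Newton's method with the asymptotic estimate; note that it iterates on $f$ directly rather than on the asymptote-free $g$ used above, so intermediate iterates may differ slightly from the table:

```python
import math

def f(x, k):
    # f(x) = 1/(x-1) + ... + 1/(x-k) - 1/(x-(k+1))
    return sum(1.0 / (x - i) for i in range(1, k + 1)) - 1.0 / (x - (k + 1))

def f_prime(x, k):
    # derivative of f, term by term
    return -sum(1.0 / (x - i) ** 2 for i in range(1, k + 1)) + 1.0 / (x - (k + 1)) ** 2

def newton(k, tol=1e-12, max_iter=50):
    # seed with the asymptotic estimate 1 + 1/(log k + gamma + 1)
    x = 1.0 + 1.0 / (math.log(k) + 0.5772156649015329 + 1.0)
    for _ in range(max_iter):
        step = f(x, k) / f_prime(x, k)
        x -= step
        if abs(step) < tol:
            break
    return x

print(round(newton(1000), 7))  # ~1.1295455 for k = 1000
```

Convergence is guaranteed here because $f$ is decreasing and convex between its first two poles, so Newton steps from the left of the root approach it monotonically.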
Edit
We can obtain much better approximations if, instead of using a Taylor expansion of $g(x)$ to $O\left((x-1)^2\right)$, we build the simplest $[1,1]$ Padé approximant (which is equivalent to an $O\left((x-1)^3\right)$ Taylor expansion). This would lead to
$$x=1+\frac{6\left(k+k\left(\psi^{(0)}(k)+\gamma\right)-1\right)}{\pi^2 k+6\left(k+\gamma(\gamma k+k-2)-1\right)-6k\,\psi^{(1)}(k)+6\,\psi^{(0)}(k)\left(2\gamma k+k+k\,\psi^{(0)}(k)-2\right)}$$ Repeating the same calculations as above, the results are
$$\left(
\begin{array}{ccc}
k & x_{\text{est}} & x_{\text{sol}} \\
2 & 1.60000 & 1.58579 \\
3 & 1.46429 & 1.46791 \\
4 & 1.40435 & 1.41082 \\
5 & 1.36900 & 1.37605 \\
6 & 1.34504 & 1.35209 \\
7 & 1.32741 & 1.33430 \\
8 & 1.31371 & 1.32040 \\
9 & 1.30266 & 1.30914 \\
10 & 1.29348 & 1.29976 \\
11 & 1.28569 & 1.29179 \\
12 & 1.27897 & 1.28489 \\
13 & 1.27308 & 1.27884 \\
14 & 1.26787 & 1.27347 \\
15 & 1.26320 & 1.26867 \\
16 & 1.25899 & 1.26433 \\
17 & 1.25516 & 1.26039 \\
18 & 1.25166 & 1.25678 \\
19 & 1.24844 & 1.25346 \\
20 & 1.24547 & 1.25039
\end{array}
\right)$$
For $k=1000$, this would give as an estimate $1.12829$ for an exact value of $1.12955$.
For infinitely large values of $k$, the asymptotics of the estimate would be
$$x_{\text{est}}=1+\frac{6\left(\log(k)+\gamma+1\right)}{6\log(k)\left(\log(k)+2\gamma+1\right)+\pi^2+6\gamma(1+\gamma)+6}$$
Interesting! What does the notation $\psi^{(0)}(k)$ mean, by the way?
– TheSimpliFire
Aug 8 at 8:10
@TheSimpliFire. This is "just" the digamma function.
– Claude Leibovici
Aug 8 at 8:18
Ah, I can see the resemblance of $psi$ to $-f$ now: here and here
– TheSimpliFire
Aug 8 at 8:20
@TheSimpliFire. We usually write $\psi^{(n)}(k)$ for the polygamma function and, in the usual notation, $\psi^{(0)}(k)=\psi(k)$
– Claude Leibovici
Aug 8 at 8:22
up vote
5
down vote
For the base case,
$$tag1f_2(x)=frac1x-1+frac1x-2-frac1x-3, $$
one readily verifies that there is a root in $(1,2)$ and a root $x^*$ in $(3,+infty)$.
If we multiply out the denominators of
$$f_k(x)=frac1x-1+frac1x-2+ldots+frac1x-k-frac1x-k-1,$$
we obtain the equation
$$tag2(x-1)(x-2)cdots(x-k-1)f_k(x)=0,$$
which is a polynomial of degree (at most) $k$, so we expect $k$ solutions, but some of these may be complex or repeated or happen to be among $1,2,ldots, k+1$ and thus not allowed for the original equation.
But $f_k(x)$
has simple poles with jumps from $-infty$ to $+infty$ at $1,2,3,ldots, k$, and a simple pole with jump from $+infty$ to $-infty$ at $k+1$, and is continuous otherwise. It follows that there is (at least) one real root in $(1,2)$, at least one in in $(2,3)$, etc. up to $(k-1,k)$, so there are at least $k-1$ distinct real roots.
Additionally, for $x>k+1$ and $kge2$, we have
$$f_k(x)ge f_2(x+k-2).$$
It follows that there is another real root between $k+1$ and $x^*+k-2$.
So indeed, we have $k$ distinct real roots.
From the aboive, the smallest root is always in $(1,2)$.
If follows from $f_k+1(x)>f_k(x)$ for $xin(1,2)$ and the fact that all $f_k$ are strictly decreasing there, that $x_min $ decreases with increasing $k$. As a decreasing bounded sequence, it does have a limit.
add a comment |Â
up vote
5
down vote
For the base case,
$$tag1f_2(x)=frac1x-1+frac1x-2-frac1x-3, $$
one readily verifies that there is a root in $(1,2)$ and a root $x^*$ in $(3,+infty)$.
If we multiply out the denominators of
$$f_k(x)=frac1x-1+frac1x-2+ldots+frac1x-k-frac1x-k-1,$$
we obtain the equation
$$tag2(x-1)(x-2)cdots(x-k-1)f_k(x)=0,$$
which is a polynomial of degree (at most) $k$, so we expect $k$ solutions, but some of these may be complex or repeated or happen to be among $1,2,ldots, k+1$ and thus not allowed for the original equation.
But $f_k(x)$
has simple poles with jumps from $-infty$ to $+infty$ at $1,2,3,ldots, k$, and a simple pole with jump from $+infty$ to $-infty$ at $k+1$, and is continuous otherwise. It follows that there is (at least) one real root in $(1,2)$, at least one in in $(2,3)$, etc. up to $(k-1,k)$, so there are at least $k-1$ distinct real roots.
Additionally, for $x>k+1$ and $kge2$, we have
$$f_k(x)ge f_2(x+k-2).$$
It follows that there is another real root between $k+1$ and $x^*+k-2$.
So indeed, we have $k$ distinct real roots.
From the aboive, the smallest root is always in $(1,2)$.
If follows from $f_k+1(x)>f_k(x)$ for $xin(1,2)$ and the fact that all $f_k$ are strictly decreasing there, that $x_min $ decreases with increasing $k$. As a decreasing bounded sequence, it does have a limit.
add a comment |Â
up vote
5
down vote
up vote
5
down vote
For the base case,
$$tag1f_2(x)=frac1x-1+frac1x-2-frac1x-3, $$
one readily verifies that there is a root in $(1,2)$ and a root $x^*$ in $(3,+infty)$.
If we multiply out the denominators of
$$f_k(x)=frac1x-1+frac1x-2+ldots+frac1x-k-frac1x-k-1,$$
we obtain the equation
$$tag2(x-1)(x-2)cdots(x-k-1)f_k(x)=0,$$
which is a polynomial of degree (at most) $k$, so we expect $k$ solutions, but some of these may be complex or repeated or happen to be among $1,2,ldots, k+1$ and thus not allowed for the original equation.
But $f_k(x)$
has simple poles with jumps from $-infty$ to $+infty$ at $1,2,3,ldots, k$, and a simple pole with jump from $+infty$ to $-infty$ at $k+1$, and is continuous otherwise. It follows that there is (at least) one real root in $(1,2)$, at least one in in $(2,3)$, etc. up to $(k-1,k)$, so there are at least $k-1$ distinct real roots.
Additionally, for $x>k+1$ and $kge2$, we have
$$f_k(x)ge f_2(x+k-2).$$
It follows that there is another real root between $k+1$ and $x^*+k-2$.
So indeed, we have $k$ distinct real roots.
From the aboive, the smallest root is always in $(1,2)$.
If follows from $f_k+1(x)>f_k(x)$ for $xin(1,2)$ and the fact that all $f_k$ are strictly decreasing there, that $x_min $ decreases with increasing $k$. As a decreasing bounded sequence, it does have a limit.
answered Aug 7 at 14:52
Hagen von Eitzen
Considering that you look for the first zero of the function
$$f(x)=\sum_{i=1}^k \frac1{x-i}-\frac1{x-k-1},$$
which can be written, using harmonic numbers, as
$$f(x)=H_{-x}-H_{k-x}-\frac1{x-k-1},$$
remove the asymptotes using
$$g(x)=(x-1)(x-2)\,f(x)=2x-3+(x-1)(x-2)\left(H_{2-x}-H_{k-x}-\frac1{x-k-1}\right).$$
You can approximate the solution using a Taylor expansion around $x=1$ and get
$$g(x)=-1+(x-1)\left(-\frac1k+\psi^{(0)}(k)+\gamma+1\right)+O\left((x-1)^2\right).$$
Ignoring the higher-order terms, this gives as an approximation
$$x_{\text{est}}=1+\frac{k}{k\left(\gamma+1+\psi^{(0)}(k)\right)-1},$$
which seems to be "decent" (and, for sure, confirms your claims).
$$\left(
\begin{array}{ccc}
k & x_{\text{est}} & x_{\text{sol}} \\
2 & 1.66667 & 1.58579 \\
3 & 1.46154 & 1.46791 \\
4 & 1.38710 & 1.41082 \\
5 & 1.34682 & 1.37605 \\
6 & 1.32086 & 1.35209 \\
7 & 1.30238 & 1.33430 \\
8 & 1.28836 & 1.32040 \\
9 & 1.27726 & 1.30914 \\
10 & 1.26817 & 1.29976 \\
11 & 1.26055 & 1.29179 \\
12 & 1.25403 & 1.28489 \\
13 & 1.24837 & 1.27884 \\
14 & 1.24339 & 1.27347 \\
15 & 1.23895 & 1.26867 \\
16 & 1.23498 & 1.26433 \\
17 & 1.23138 & 1.26039 \\
18 & 1.22810 & 1.25678 \\
19 & 1.22510 & 1.25346 \\
20 & 1.22233 & 1.25039
\end{array}
\right)$$
For infinitely large values of $k$, the asymptotics of the estimate would be
$$x_{\text{est}}=1+\frac1{\log(k)+\gamma+1}.$$
For $k=1000$, the exact solution is $1.12955$, while the first approximation gives $1.11788$ and the second $1.11786$.
Using such estimates would make Newton's method converge quite fast (shown below for $k=1000$).
$$\left(
\begin{array}{cc}
n & x_n \\
0 & 1.117855442 \\
1 & 1.129429575 \\
2 & 1.129545489 \\
3 & 1.129545500
\end{array}
\right)$$
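The scheme just described can be sketched in a few lines of Python (my own code, not the answerer's; the function name is hypothetical). The digamma value is obtained from $\psi^{(0)}(k)=H_{k-1}-\gamma$, so $\gamma$ cancels out of the starting estimate.

```python
# Newton's method for the first root of f_k, started from the first-order
# estimate x_est = 1 + k / (k(gamma + 1 + psi(k)) - 1) = 1 + k / (k(1 + H_{k-1}) - 1).

def newton_first_root(k, iters=6):
    f  = lambda x: sum(1.0 / (x - i) for i in range(1, k + 1)) - 1.0 / (x - k - 1)
    df = lambda x: 1.0 / (x - k - 1) ** 2 - sum(1.0 / (x - i) ** 2 for i in range(1, k + 1))
    H = sum(1.0 / n for n in range(1, k))        # harmonic number H_{k-1}
    x = 1.0 + k / (k * (1.0 + H) - 1.0)          # starting estimate (gamma cancels)
    for _ in range(iters):
        x -= f(x) / df(x)                        # Newton step
    return x

x = newton_first_root(1000)
print(round(x, 9))  # should agree with the 1.129545500 of the table above
```

On $(1,2)$ the function is decreasing and dominated by the convex term $1/(x-1)$, so Newton steps approach the root monotonically from the left, which is why so few iterations suffice.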
Edit
We can obtain much better approximations if, instead of using a Taylor expansion of $g(x)$ to $O\left((x-1)^2\right)$, we build the simplest $[1,1]$ Padé approximant (which is equivalent to an $O\left((x-1)^3\right)$ Taylor expansion). This would lead to
$$x=1+\frac{6\left(k+k\left(\psi^{(0)}(k)+\gamma\right)-1\right)}{\pi^2 k+6\left(k+\gamma(\gamma k+k-2)-1\right)-6k\,\psi^{(1)}(k)+6\,\psi^{(0)}(k)\left(2\gamma k+k+k\,\psi^{(0)}(k)-2\right)}.$$
Repeating the same calculations as above, the results are
$$\left(
\begin{array}{ccc}
k & x_{\text{est}} & x_{\text{sol}} \\
2 & 1.60000 & 1.58579 \\
3 & 1.46429 & 1.46791 \\
4 & 1.40435 & 1.41082 \\
5 & 1.36900 & 1.37605 \\
6 & 1.34504 & 1.35209 \\
7 & 1.32741 & 1.33430 \\
8 & 1.31371 & 1.32040 \\
9 & 1.30266 & 1.30914 \\
10 & 1.29348 & 1.29976 \\
11 & 1.28569 & 1.29179 \\
12 & 1.27897 & 1.28489 \\
13 & 1.27308 & 1.27884 \\
14 & 1.26787 & 1.27347 \\
15 & 1.26320 & 1.26867 \\
16 & 1.25899 & 1.26433 \\
17 & 1.25516 & 1.26039 \\
18 & 1.25166 & 1.25678 \\
19 & 1.24844 & 1.25346 \\
20 & 1.24547 & 1.25039
\end{array}
\right)$$
For $k=1000$, this would give as an estimate $1.12829$ for an exact value of $1.12955$.
For infinitely large values of $k$, the asymptotics of the estimate would be
$$x_{\text{est}}=1+\frac{6(\log(k)+\gamma+1)}{6\log(k)(\log(k)+2\gamma+1)+\pi^2+6\gamma(1+\gamma)+6}$$
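For comparison, here is a quick evaluation of the two closed-form large-$k$ estimates derived above (a sketch; the constant and function names are my own):

```python
import math

g = 0.5772156649015329  # Euler-Mascheroni constant gamma

def est1(k):
    # first-order asymptotic: 1 + 1/(log k + gamma + 1)
    return 1 + 1 / (math.log(k) + g + 1)

def est2(k):
    # Pade-based asymptotic
    L = math.log(k)
    return 1 + 6 * (L + g + 1) / (6 * L * (L + 2 * g + 1) + math.pi ** 2 + 6 * g * (1 + g) + 6)

for k in (10, 100, 1000):
    print(k, round(est1(k), 5), round(est2(k), 5))
```

Both estimates decrease toward $1$ as $k\to\infty$, consistent with the claim that $x_{\min}$ converges; at $k=1000$ they bracket the true root $1.12955$ from below.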
Interesting! What does that notation $\psi^{(0)}(k)$ mean, by the way?
– TheSimpliFire
Aug 8 at 8:10
@TheSimpliFire. This is "just" the digamma function.
– Claude Leibovici
Aug 8 at 8:18
Ah, I can see the resemblance of $\psi$ to $-f$ now: here and here
– TheSimpliFire
Aug 8 at 8:20
@TheSimpliFire. We usually write $\psi^{(n)}(k)$ for the polygamma function and, in the usual notation, $\psi^{(0)}(k)=\psi(k)$.
– Claude Leibovici
Aug 8 at 8:22
edited Aug 10 at 8:04
answered Aug 8 at 7:16
Claude Leibovici
It is easy to prove that it has at least $k-1$ distinct real solutions
– Exodd
Aug 7 at 14:22
and it is also true that $1<x_{\min}<2$ for every $k$, and that it converges to $1$
– Exodd
Aug 7 at 14:24
@Exodd: if you do that, you know it has $k$ distinct solutions because of the limits of each side as $x\to\pm\infty$
– Ross Millikan
Aug 7 at 14:27
The method I used (removing the asymptotes) has been the basis of significant improvements in chemical engineering calculations (in particular for the so-called Rachford-Rice and Underwood equations). I have published quite a lot of papers on these. I had a lot of fun with yours (may I confess that I cannot resist an equation?). Cheers.
– Claude Leibovici
Aug 8 at 8:05
I updated my answer for some improvements. Cheers and thanks for the problem.
– Claude Leibovici
Aug 10 at 8:05