Prove that a product's limit converges to a sum
I accidentally discovered this equality, which I can verify numerically using Python.
$$\lim_{k\to\infty} \sqrt[n]{\prod_{i=1}^{n}(x_i+k)} - k = \frac{\sum_{i=1}^{n}x_i}{n}$$
But I need to prove this equality algebraically, and I could not get anywhere. The right-hand side is nothing more than an arithmetic mean, while the left-hand side is a modification of the geometric mean.
I hope someone can help me at this point.
real-analysis discrete-mathematics means
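For reference, a minimal Python sketch of the kind of numerical check described in the question (the sample values of $x_i$ and the sequence of $k$ values are arbitrary choices, not from the original post):

```python
# Minimal numerical check of the claimed limit, with arbitrary sample data.
import math
import random

x = [random.uniform(-5, 5) for _ in range(7)]   # arbitrary sample x_i
n = len(x)
am = sum(x) / n                                  # arithmetic mean (right-hand side)

for k in (10.0, 1e3, 1e6, 1e8):
    # n-th root of the product, computed via logs; requires k > max |x_i|
    gm_shifted = math.exp(sum(math.log(xi + k) for xi in x) / n)
    print(k, gm_shifted - k, am)                 # the difference approaches the mean as k grows
```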
asked 3 hours ago by Gustavo Ale (new contributor), edited 2 hours ago
3 Answers
Answer (score 2) by Jack D'Aurizio, answered 1 hour ago:
Assume that $k$ is greater than twice $|x_1|+\ldots+|x_n|$ and also greater than twice $M=x_1^2+\ldots+x_n^2$. Over the interval $\left(-\frac{1}{2},\frac{1}{2}\right)$ we have
$$ e^x = 1+x+C(x)x^2, \qquad \log(1+x)=x+D(x)x^2 $$
with $|C(x)|,|D(x)|\leq 1$. It follows that
$$\begin{eqnarray*}\text{GM}(x_1+k,\ldots,x_n+k)&=&k\cdot \text{GM}\left(1+\tfrac{x_1}{k},\ldots,1+\tfrac{x_n}{k}\right)\\&=&k\exp\left[\frac{1}{n}\sum_{j=1}^{n}\log\left(1+\frac{x_j}{k}\right)\right]\\&=&k\exp\left[\frac{1}{k}\sum_{j=1}^{n}\frac{x_j}{n}+\Theta\left(\frac{M}{nk^2}\right)\right]\\&=&k\left[1+\frac{1}{k}\sum_{j=1}^{n}\frac{x_j}{n}+\Theta\left(\frac{M}{nk^2}\right)\right]\end{eqnarray*}$$
and the claim is proved. It looks very reasonable also without a formal proof: the magnitude of the difference between the arithmetic mean and the geometric mean is controlled by $\frac{\text{Var}(x_1,\ldots,x_n)}{\text{AM}(x_1,\ldots,x_n)}$. A translation towards the right leaves the variance unchanged and increases the mean.
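A quick sanity check of the error term, not part of the original answer: with arbitrary sample values, $\text{GM}(x_1+k,\ldots,x_n+k)-k-\text{AM}(x_1,\ldots,x_n)$ should shrink like $1/k$, so multiplying the error by $k$ should settle near a constant.

```python
# Sketch: the error GM(x_1+k, ..., x_n+k) - k - AM(x) decays like 1/k,
# consistent with the Theta(M/(n k^2)) term inside the exponential above.
# Sample x_i are arbitrary.
import math

x = [1.0, 4.0, 9.0, 16.0, 25.0]
n = len(x)
am = sum(x) / n

for k in (1e2, 1e3, 1e4, 1e5):
    gm = math.exp(sum(math.log(xi + k) for xi in x) / n)   # geometric mean of the x_i + k
    err = gm - k - am
    print(k, err, k * err)   # k * err approaches a constant, so err = O(1/k)
```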
Answer (score 1) by Paramanand Singh, answered 3 hours ago:
Your expression under the limit can be written as $k(a-b)$, and further $a-b=(a^n-b^n)/c$, where $a,b$ tend to $1$ and $c$ tends to $n$ as $k\to\infty$. And $$a^n-b^n=\prod_{i=1}^{n}\left(1+\frac{x_i}{k}\right)-1.$$ I hope you can continue from here.
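A numerical illustration of the hint (my reading of it, with arbitrary sample data): here $a$ is the geometric mean of the $1+x_i/k$, $b=1$, and $c=a^{n-1}+a^{n-2}b+\cdots+b^{n-1}$, so the expression under the limit equals $k(a^n-b^n)/c$.

```python
# Sketch of the hint's quantities, assuming sample x_i:
# a -> 1, c -> n, and k*(a**n - b**n)/c approaches the arithmetic mean.
import math

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]   # arbitrary sample data
n = len(x)
b = 1.0

for k in (1e2, 1e4, 1e6):
    a = math.prod(1 + xi / k for xi in x) ** (1 / n)      # geometric mean of the 1 + x_i/k
    c = sum(a**j * b**(n - 1 - j) for j in range(n))      # a^(n-1) + a^(n-2) b + ... + b^(n-1)
    print(k, k * (a**n - b**n) / c, sum(x) / n)           # both values agree as k grows
```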
Answer (score 1) by Claude Leibovici, answered 35 mins ago, edited 24 mins ago:
You can even get an interesting asymptotic expansion by considering $$y_n=\sqrt[n]{\prod_{i=1}^{n}(x_i+k)}$$ $$ \log(y_n)=\frac{1}{n}\sum_{i=1}^{n}\log\left(x_i+k\right)=\frac{1}{n}\sum_{i=1}^{n}\left(\log(k)+\log\left(1+\frac{x_i}{k}\right)\right)$$ Assuming that $x_1<x_2<\cdots<x_n \ll k$, use the Taylor expansion of $\log\left(1+\frac{x_i}{k}\right)$. Then, continuing with Taylor series,
$$y_n=e^{\log(y_n)}\implies
y_n=k+\frac{\sum_{i=1}^{n} x_i}{n}+\frac{\left(\sum_{i=1}^{n} x_i\right)^2-n\sum_{i=1}^{n} x_i^2}{2n^2 k}+O\left(\frac{1}{k^2}\right)$$ making
$$ \sqrt[n]{\prod_{i=1}^{n}(x_i+k)} - k =\frac{\sum_{i=1}^{n} x_i}{n}+\frac{\left(\sum_{i=1}^{n} x_i\right)^2-n\sum_{i=1}^{n} x_i^2}{2n^2 k}+O\left(\frac{1}{k^2}\right)$$
For example, using $n=10$, $k=1000$ and $x_i=p_i$ (the $i$-th prime), the approximation gives $\frac{2572671}{200000}\approx 12.8634$, while the exact calculation leads to $\approx 12.8639$.
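A short Python sketch reproducing this check, assuming the $x_i$ are the first ten primes (which is what the quoted numbers appear to correspond to):

```python
# Compare the exact value with the second-order approximation for
# n = 10, k = 1000, x_i = first ten primes (assumed interpretation of p_i).
import math

x = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
n, k = len(x), 1000.0

# Exact value: geometric mean of the x_i + k, minus k.
exact = math.exp(sum(math.log(xi + k) for xi in x) / n) - k

# Second-order approximation from the answer above.
s1 = sum(x)
s2 = sum(xi * xi for xi in x)
approx = s1 / n + (s1**2 - n * s2) / (2 * n**2 * k)

print(round(exact, 4), round(approx, 4))   # about 12.8639 vs 12.8634
```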
Comments:

@GustavoAle. For sure! I just wanted to see what the asymptotics would be. Cheers. – Claude Leibovici, 8 mins ago
At this point the equality is proven by applying the limit itself: $$\lim_{k\to\infty}\sqrt[n]{\prod_{i=1}^{n}(x_i+k)} - k = \lim_{k\to\infty}\left[\frac{\sum_{i=1}^{n} x_i}{n}+\frac{\left(\sum_{i=1}^{n} x_i\right)^2-n\sum_{i=1}^{n} x_i^2}{2n^2 k}+O\left(\frac{1}{k^2}\right)\right],$$ where $$\frac{\left(\sum_{i=1}^{n} x_i\right)^2-n\sum_{i=1}^{n} x_i^2}{2n^2 k} + O\left(\frac{1}{k^2}\right) \to 0,$$ leaving only $$\frac{\sum_{i=1}^{n} x_i}{n}.$$ – Gustavo Ale, 6 mins ago