Are real and complex analysis related in any way?
I was watching a video and the lecturer goes over this
$$\frac{1}{1+x^2} = \sum_{n=0}^\infty (-1)^n x^{2n}$$
$$|x| < 1$$
and he explains that the radius of convergence for this Taylor series centered at $x=0$ is 1 because it is affected by $i$ and $-i$. Then he goes on to talk about how Real Analysis is a glimpse into Complex Analysis.
In the same video, the lecturer also provided the example below where a complex function is defined by using Real Taylor Series.
$$e^{ix} = \cos(x) + i\sin(x)$$
$$\sum_{n=0}^\infty \frac{(ix)^n}{n!} = \sum_{n=0}^\infty (-1)^n \frac{x^{2n}}{(2n)!} + i\sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{(2n+1)!}$$
Can someone help elaborate on what the lecturer probably meant? What is the connection between Real Analysis and Complex Analysis?
I understand that there are two different types of analyticity: real analytic and complex analytic. Are they connected?
real-analysis complex-analysis
asked 3 hours ago by user29418
3 Answers
Both real analyticity and complex analyticity can be defined as "locally given by a convergent power series". In the complex case, there are several equivalent definitions that, at first sight, look quite different.
Differentiability is one; the Cauchy integral formula is another. (In the world of real numbers, differentiability (even infinite differentiability) is much weaker than analyticity, and there's no analog of the Cauchy integral formula.)
Any real analytic function, say with domain of analyticity $D\subseteq\mathbb{R}$, is the restriction to $D$ of a complex analytic function whose domain of analyticity in $\mathbb{C}$ includes $D$. The proof is just to use the same power series but now with complex values of the input variable.
The material about $e^{ix}$ quoted in the question is the special case where you start with the exponential function. It's analytic on the whole real line, and so its power series expansions about various real numbers give a complex analytic extension on a domain in $\mathbb{C}$ that includes $\mathbb{R}$. The exponential function, however, is unusually nice, in that its power series expansion around a single point, for example the expansion $e^x=\sum_{n=0}^\infty x^n/n!$ around $0$, has infinite radius of convergence, so by plugging in complex values for $x$, we get a complex analytic function defined on all of $\mathbb{C}$. In particular, we can plug in a pure imaginary value $ix$ (with real $x$) in place of $x$ and get the power series $e^{ix}=\sum_{n=0}^\infty (ix)^n/n!$. The rest of the quoted material is just separating the real and imaginary parts of this sum. The terms with even $n$ have an even power of $i$, so they're real, while the terms with odd $n$ have an odd power of $i$, so they're purely imaginary. After separating the real and imaginary parts in this way, and factoring out $i$ from the imaginary part, one finds that the real part is just (the power series of) $\cos x$ and the imaginary part is $i\sin x$.
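(A quick numeric sanity check, not part of the original answer: the Python sketch below sums the series $\sum_{n}(ix)^n/n!$ directly and compares it with $\cos x + i\sin x$; the helper name `exp_series` and the cutoff of 60 terms are my own choices.)

```python
import cmath
import math

def exp_series(z, terms=60):
    """Partial sum of the power series sum_{n=0}^{terms-1} z^n / n!."""
    total, term = 0.0 + 0.0j, 1.0 + 0.0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # turn z^n/n! into z^(n+1)/(n+1)!
    return total

x = 1.234
lhs = exp_series(1j * x)              # plug ix into the real series for e^x
rhs = math.cos(x) + 1j * math.sin(x)  # Euler's formula
print(abs(lhs - rhs))                 # ~1e-16: the two sides agree
```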
answered 1 hour ago by Andreas Blass
Well, given a real function of $x$, replacing $x$ with a complex variable, usually $z$, we get a complex function. This is particularly true with Taylor/Laurent series. However, this relationship touches only the very surface of real analysis and complex analysis, and is extremely superficial.
Because a complex variable has two independently varying quantities, the real part and the imaginary part, differentiability is extremely restricted: it is specified by the Cauchy-Riemann conditions, which in turn form a system of partial differential equations (two equations in two functions) that has a unique solution under certain boundary conditions. This makes analytic complex functions very rigid, and brings us the theorem of unique analytic continuation. Another thing about complex analysis is integration around a pole, which resolves a lot of calculations, to say the least.
On the other hand, because it doesn't enjoy the near-perfect properties that the Cauchy-Riemann conditions bring, real analysis goes to the other extreme: research on the most delicate fine structure of real functions; things like the Cantor set, the Dirichlet function, and the Lebesgue integral, to mention a few.
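To make the Cauchy-Riemann point concrete, here is a small sympy sketch of mine (assuming sympy is available; the helper name is hypothetical): it checks whether $u_x = v_y$ and $u_y = -v_x$ hold for $f(z)=z^2$, which is analytic, and for $f(z)=\bar{z}$, which is not.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

def cauchy_riemann_holds(f):
    """Check u_x == v_y and u_y == -v_x for f written as u + i*v."""
    u, v = sp.re(f), sp.im(f)
    return (sp.simplify(sp.diff(u, x) - sp.diff(v, y)) == 0
            and sp.simplify(sp.diff(u, y) + sp.diff(v, x)) == 0)

print(cauchy_riemann_holds(sp.expand(z**2)))  # True: z^2 is analytic
print(cauchy_riemann_holds(x - sp.I * y))     # False: conjugate(z) is not
```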
answered 1 hour ago by Miles Zhou (new contributor)
This is an interesting question, but one that might be hard to address completely. Let me see what I can do to help.
Real Power Series:
The easiest way to address the connection between the two subjects is through the study of power series, as has already been alluded to above. A power series is a particular kind of infinite sum (there are many different kinds) of the form
$$
f(x) = \sum_{k=0}^\infty a_k x^k.
$$
They get their name from the fact that we are adding together powers of $x$ with different coefficients. In real analysis the argument of such a function (the "$x$" in $f(x)$) is taken to be a real number, and depending on the coefficients multiplying the powers of $x$, we get different intervals of convergence (intervals on which the sequence of partial sums converges). For example, if $a_k$ is the $k$th Fibonacci number, then the radius of convergence ends up being $1/\phi$, where $\phi = (1+\sqrt{5})/2$ is the "golden ratio".
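As a numeric illustration of the Fibonacci example (my own sketch, assuming the ratio test applies to these coefficients): the ratio $a_k/a_{k+1}$ of successive Fibonacci numbers converges to the radius $1/\phi$.

```python
# Ratio test: the radius of convergence is lim a_k / a_{k+1} = 1/phi.
a, b = 1, 1
for _ in range(50):
    a, b = b, a + b           # step to the next pair of Fibonacci numbers
print(a / b)                  # ~0.6180339887
print(2 / (1 + 5 ** 0.5))     # 1/phi, for comparison
```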
The most common kinds of power series that come up in calculus (and real analysis) are Taylor series and Maclaurin series. These series are created to represent (a portion of) some differentiable function. Let me try to make this a little more concrete. Pick some function $f(x)$ that is infinitely differentiable at a fixed value, say at $x=a$. The Taylor series corresponding to $f(x)$, centered at $a$, is given by
$$
\sum_{k=0}^\infty \frac{f^{(k)}(a)}{k!}(x-a)^k.
$$
A Maclaurin series is just a Taylor series where $a=0$. After playing around with the series a bit, you may notice a few things about it.
- When $x=a$, the only non-zero term in the series is the first one: the constant $f(a)$. This means that no matter what the radius of convergence is for the series, it will at least agree with the original function at this one point.
- Taking derivatives of the power series and evaluating them at $x=a$, we see that the power series and the original function $f(x)$ have the same derivatives at $a$. This is by construction, and not a coincidence.
- If the radius of convergence is positive and the series converges to the original function $f(x)$ on that interval (which happens for most familiar functions, though it is not automatic), we can say that $f(x)$ "equals" its Taylor series, understanding that this equality may only hold on some interval centered at $a$, and perhaps not on the entire domain on which $f(x)$ is defined. You've already seen such an example with $(1+x^2)^{-1}$. In some extreme cases, such as with $\sin x$ and $\cos x$, this equality holds for ALL real values of $x$, so the convergence is on all of $\mathbb{R}$ and there is no harm in completely equating the function with its Taylor series.
- Even if the radius of convergence of a particular Taylor series (centered at $a$) is finite, it does not mean you cannot take other Taylor series (centered at values other than $a$) that also have a positive radius of convergence. For example, even though the Maclaurin series for $(1+x^2)^{-1}$ does not converge outside the interval of radius 1, you can compute the Taylor series centered at $a=1$,
$$
\sum_{k=0}^\infty \left( \frac{1}{k!} \frac{d^k}{dx^k}\left(\frac{1}{1+x^2}\right)\Bigg|_{x=1}(x-1)^k \right).
$$
You will then find that this new power series converges with a larger radius than 1 (namely $\sqrt{2}$, the distance from $1$ to $\pm i$, as explained below), and that the two power series (one centered at 0, the other centered at 1) overlap and agree for certain values of $x$.
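If you want to see the recentered series without grinding out derivatives by hand, sympy can produce both expansions (a sketch of mine, not from the original answer):

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (1 + x**2)

# Maclaurin series (radius of convergence 1)
print(sp.series(f, x, 0, 8))   # 1 - x**2 + x**4 - x**6 + O(x**8)

# The same function expanded in powers of (x - 1); this series
# converges on a radius of sqrt(2), the distance from 1 to +/- i.
print(sp.series(f, x, 1, 4))
```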
The big takeaway about power series and Taylor series is that they are one and the same. Define a function by a power series, and then take its Taylor series centered at the same point; you will get the same series back. Conversely, any function that is represented by its Taylor series with a positive radius of convergence is uniquely determined near the center by that power series.
This is where complex analysis begins to come into play...
Complex Power Series:
Complex numbers have many properties similar to real numbers. They form an algebra (you can do arithmetic with them); the only number you still cannot divide by is zero; and the absolute value of a complex number still tells you its distance from 0. In particular there is nothing stopping you from defining power series of complex numbers, with $z = x + iy$:
$$
f(z) = \sum_{k=0}^\infty c_k z^k.
$$
The only difference now is that the coefficients $c_k$ can be complex numbers, and the radius of convergence now refers to the radius of a disk (as opposed to the radius of an interval). Things may seem exactly like the real-valued situation, but there is more lurking beneath the surface.
For starters, let me define some new vocabulary terms.
- We say that a complex function is complex differentiable at some fixed value $z=w$ if the following limit exists:
$$
\lim_{z\to w}\frac{f(z)-f(w)}{z-w}.
$$
If this limit exists, we denote the value of the limit by $f'(w)$. This should look familiar, as it is the same limit definition as for real-valued derivatives.
- In the same way that we could create Taylor series for real-valued functions, we can also create Taylor series (centered at $z=w$) for complex-valued functions, provided the function has an infinite number of derivatives at $w$ (sometimes referred to as being holomorphic at $w$):
$$
\sum_{k=0}^\infty \frac{f^{(k)}(w)}{k!}(z-w)^k.
$$
If the Taylor series defined above has a positive radius of convergence then we say that $f$ is analytic at $w$. This vocabulary is also used in the case of real-valued Taylor series.
Perhaps the biggest surprise in complex analysis is that the following conditions are all equivalent:
$f(z)$ is complex differentiable at $z=w$.
$f(z)$ is holomorphic at $w$. That is, $f$ has an infinite number of derivatives at $z=w$. In real analysis this condition is sometimes referred to as being "smooth".
$f(z)$ is analytic at $z=w$. That is, its Taylor series converges to $f(z)$ with some positive radius of convergence.
This means that being differentiable in the complex sense is a much harder thing to accomplish than in the real sense. Consider the contrast with the "real-valued" equivalents of the points made above:
- In real analysis there are a number of functions with only finitely many derivatives. For example, $f(x) = x^2 \sin(1/x)$ (with $f(0)=0$) has only one derivative at $x=0$, and $f'(x)$ is not even continuous there, let alone differentiable.
- There are also real-valued functions that are smooth (infinitely differentiable) yet are not represented by their Taylor series. An example is the function $e^{-1/x^2}$ (extended by $0$ at $x=0$), which is smooth for every $x$ but has every derivative at $x=0$ equal to zero; this means every coefficient in the Maclaurin series is zero, so the series converges everywhere to the zero function and agrees with the original function only at $x=0$.
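(A small sympy check of this flatness, my own addition: every derivative of $e^{-1/x^2}$ tends to $0$ at the origin, so every Maclaurin coefficient vanishes.)

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(-1 / x**2)

# Each derivative is exp(-1/x^2) times a polynomial in 1/x, and the
# exponential flattens everything: the limit at 0 is always 0.
for n in range(5):
    print(n, sp.limit(sp.diff(f, x, n), x, 0))   # expect 0 every time
```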
These kinds of pathologies do not occur in the complex world: one derivative is as good as an infinite number of derivatives; differentiability at one point translates to differentiability on a neighborhood of that point.
Laurent Series:
A natural question one might ask is: what dictates the radius of convergence of a power series? In the real-valued case things seemed fairly unpredictable. In the complex-valued case, however, things are much more elegant.
Let $f(z)$ be differentiable at some point $z=w$. Then the radius of convergence for the Taylor series of $f(z)$ centered at $w$ will be the distance to the nearest complex number at which $f(z)$ fails to be differentiable. Think of it like dropping a pebble into a pool of water. The ripples will extend radially outward from the initial point of differentiability, all the way until the circular edge of the ripple hits the first "singularity" -- a point where $f(z)$ fails to be differentiable.
Take the complex version of our previous example,
$$
f(z) = frac11+z^2.
$$
This is a rational function, and will be smooth for all values of $z$ where the denominator is non-zero. Since the only roots of $z^2 + 1$ are $z = i$ and $z = -i$, $f(z)$ is differentiable/smooth/analytic at all values $w \neq \pm i$. This is precisely why the radius of convergence for the real-valued Maclaurin series is 1, as you've already noted: the shortest distance from $z=0$ to $z=\pm i$ is 1. The real-valued Maclaurin series is just a "snapshot" or "sliver" of the complex-valued Taylor series centered at $z=0$. This is also why the radius of convergence for the real-valued Taylor series increases when you move the center away from zero; the distance to $\pm i$ becomes greater, and so the complex-valued Taylor series converges on a larger disk.
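This "radius equals distance to the nearest singularity" rule can be seen numerically. The sketch below is my own: it uses the partial-fraction form $\frac{1}{1+x^2} = \frac{1}{2i}\left(\frac{1}{x-i} - \frac{1}{x+i}\right)$ to get exact Taylor coefficients about $x=a$, then applies a crude root-test estimate and compares it with $\sqrt{1+a^2}$, the distance from $a$ to $\pm i$.

```python
import math

def taylor_coeff(a, k):
    """k-th Taylor coefficient of 1/(1+x^2) about x = a, from the
    partial fractions 1/(1+x^2) = (1/2i)(1/(x-i) - 1/(x+i))."""
    s = (-1) ** k / 2j
    return (s * (1 / (a - 1j) ** (k + 1) - 1 / (a + 1j) ** (k + 1))).real

k = 200
for a in [0.0, 1.0, 2.0]:
    c = taylor_coeff(a, k)
    estimate = abs(c) ** (-1.0 / k)           # root test at a single k (rough)
    print(a, estimate, math.sqrt(1 + a * a))  # estimate ~ distance to +/- i
```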
So now should come the question: when exactly does a complex function fail to be differentiable?
Without going into too many details from complex analysis, suffice it to say that complex functions fail to be differentiable when one of three things occurs:
- The function has a "singularity" (think division by zero).
- The function is defined in terms of the complex conjugate, $\bar{z}$. For example, $f(z) = |z|^2 = z\,\bar{z}$.
- The function involves logarithms or non-integer exponents (these two ideas are actually related).
Number 2 is perhaps the most egregious of the three issues, and it means that functions like $f(z) = |z|$ are differentiable nowhere. This is in stark contrast to the real-valued version $f(x) = |x|$, which is differentiable everywhere except at $x=0$, where there is a "corner".
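You can watch this failure happen numerically. The sketch below (my own) computes the difference quotient of $f(z) = \bar{z}$ at $0$ along two directions; a complex-differentiable function would give the same limit both ways.

```python
def quotient(f, h):
    """Difference quotient (f(h) - f(0)) / h at the origin."""
    return (f(h) - f(0)) / h

conj = lambda z: complex(z).conjugate()   # f(z) = conjugate(z)

for h in [1e-3, 1e-6]:
    # 1.0 along the real axis, -1.0 along the imaginary axis:
    print(quotient(conj, h), quotient(conj, 1j * h))
```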
Number 3 is actually not too bad; it turns out that these kinds of functions are usually differentiable everywhere except along certain rays or line segments. To get into it further, however, would take us too far off course.
Number 1 is the best case scenario, and is the focus of this section of our discussion. Essentially, singularities are places where division by zero has occurred, and the extent to which something has "gone wrong" can be quantified. Let me try to elaborate.
Consider the previous example of $f(z) = (1+z^2)^{-1}$. Since the denominator factors into the product $(z+i)(z-i)$, we can "erase" the singularity at $z=i$ by multiplying the function by a copy of $z-i$. In other words, if
$$
g(z) = (z-i)f(z) = \frac{z-i}{1+z^2},
$$
then $g(z) = \frac{1}{z+i}$ for all $z\neq i$, and $\lim_{z\to i}g(z) = \frac{1}{2i}$ exists, so $z=i$ is no longer a singularity.
Similarly, if $f(z) = 1/(1+z^2)^3$, then we again have singularities at $z=\pm i$. This time, however, multiplying $f(z)$ by only one copy of $z-i$ will not remove the singularity at $z=i$. Instead, we would need to multiply by three copies to get
$$
g(z) = (z-i)^3 f(z) = \frac{(z-i)^3}{(1+z^2)^3},
$$
which means that $g(z) = \frac{1}{(z+i)^3}$ for all $z\neq i$, and that $\lim_{z\to i}g(z) = \frac{1}{(2i)^3} = \frac{i}{8}$ exists.
Singularities like this, ones that can be "removed" by multiplying by a finite number of linear factors, are called poles. The order of the pole is the minimum number of linear factors needed to remove the singularity.
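One way to find the order in practice (a sympy sketch of mine, not part of the original answer): multiply by increasing powers of $(z-i)$, cancel common factors, and evaluate at $z=i$; the order is the first power that yields a finite value.

```python
import sympy as sp

z = sp.symbols('z')
f = 1 / (1 + z**2)**3

# Multiply by (z - i)^n, cancel, and evaluate at z = i; the order of the
# pole is the smallest n for which the result is finite.
for n in range(1, 5):
    value = sp.cancel((z - sp.I)**n * f).subs(z, sp.I)
    print(n, value)          # complex infinity until n = 3, then I/8
    if value.is_finite:
        print("pole of order", n)
        break
```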
The real-valued Maclaurin series for $\sin x$ is given by
$$
\sin x = \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}x^{2k+1},
$$
and has an infinite radius of convergence. This means that the complex version
$$
\sin z = \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}z^{2k+1} = z - \frac{1}{3!}z^3 + \frac{1}{5!}z^5 - \cdots
$$
also has an infinite radius of convergence (such functions are called entire) and hence no singularities. From here it's easy to see that the function $(\sin z)/z$ is analytic as well, with Taylor series
$$
\frac{\sin z}{z} = \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}z^{2k} = 1 - \frac{1}{3!}z^2 + \frac{1}{5!}z^4 - \cdots
$$
However, a function like $(\sin z)/z^3$ is not analytic at $z=0$, since dividing $\sin z$ by $z^3$ gives the following expression:
$$
\frac{\sin z}{z^3} = \frac{1}{z^2} - \frac{1}{3!} + \frac{1}{5!}z^2 - \cdots
$$
But notice that if we were to subtract the term $1/z^2$ from both sides we would be left again with a proper Taylor series
$$
\frac{\sin z}{z^3} - \frac{1}{z^2} = \frac{\sin z - z}{z^3} = -\frac{1}{3!} + \frac{1}{5!}z^2 - \frac{1}{7!}z^4 + \cdots
$$
This idea of extending Taylor series to include terms with negative powers of $z$ is what is referred to as a Laurent series. A Laurent series is a power series in which the powers of $z$ are allowed to be negative as well as positive:
$$
f(z) := \sum_{k=-\infty}^\infty c_k z^k.
$$
In this way we can expand complex functions around singular points in a fashion similar to expanding around analytic points.
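sympy's `series` will happily produce these finitely many negative powers at a pole (a small illustration of mine):

```python
import sympy as sp

z = sp.symbols('z')

# Laurent expansion of sin(z)/z^3 about z = 0: a single negative power,
# so this quotient has a pole of order 2 there.
print(sp.series(sp.sin(z) / z**3, z, 0, 6))
# -> z**(-2) - 1/6 + z**2/120 - z**4/5040 + O(z**6)
```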
A pole, it turns out, is a singular point for which there are only finitely many terms with negative powers, as with $(\sin z)/z^3$. If, however, an infinite number of negative powers is needed to fully express a Laurent series, then this type of singular point is called an essential singularity. An excellent example of such a function can be made by taking an analytic function (one with a Taylor series) and replacing $z$ by $1/z$:
\begin{align}
\sin(1/z) &= \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}(1/z)^{2k+1} \\
&= \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}z^{-(2k+1)},
\end{align}
which contains infinitely many negative (odd) powers of $z$.
These kinds of singularities are quite severe, and the behavior of complex functions around such a point is rather erratic. This also explains why the real-valued function $e^{-1/x^2}$ was so pathological. The Taylor series for $e^z$ is given by
$$
e^z = \sum_{k=0}^\infty \frac{1}{k!}z^k
$$
and so
\begin{align}
e^{-1/z^2} &= \sum_{k=0}^\infty \frac{1}{k!}(-1/z^2)^k \\
&= \sum_{k=0}^\infty \frac{(-1)^k}{k!} z^{-2k} \\
&= \sum_{k=-\infty}^0 \frac{(-1)^{-k}}{(-k)!} z^{2k}.
\end{align}
Hence there is an essential singularity at $z=0$, and so even though the real-valued version is smooth at $x=0$, there is no hope of differentiability on a disk around $z=0$.
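To see how erratic an essential singularity is, compare the behavior of $e^{-1/z^2}$ approaching $0$ along the real and imaginary axes (a quick numeric sketch of mine): the first tends to $0$ while the second blows up.

```python
import cmath

for t in [0.5, 0.2, 0.1]:
    along_real = cmath.exp(-1 / complex(t) ** 2)   # -> 0 as t -> 0
    along_imag = cmath.exp(-1 / (1j * t) ** 2)     # = exp(+1/t^2) -> infinity
    print(t, abs(along_real), abs(along_imag))
```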
add a comment |Â
3 Answers
3
active
oldest
votes
3 Answers
3
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
1
down vote
Both real analyticity and complex analyticity can be defined as "locally given by a convergent power series". In the complex case, there are several equivalent definitions that, at first sight, look quite different.
Differentiability is one; the Cauchy integral formula is another. (In the world of real numbers, differentiability (even infinite differentiability) is much weaker than analyticity, and there's no analog of the Cauchy integral formula.)
Any real analytic function, say with domain of analyticity $Dsubseteqmathbb R$, is the restriction to $D$ of a complex analytic function whose domain of analyticity in $mathbb C$ includes $D$. The proof is just to use the same power series but now with complex values of the input variable.
The material about $e^ix$ quoted in the question is the special case where you start with the exponential function. It's analytic on the whole real line, and so its power series expansions about various real numbers give a complex analytic extension on a domain in $mathbb C$ that includes $mathbb R$. The exponential function, however, is unusually nice, in that its power series expansion around a single point, for example the expansion $e^x=sum_n=0^inftyx^n/n!$ around $0$, has infinite radius of convergence, so by plugging in complex values for $x$, we get a complex analytic function defined on all of $mathbb C$. In particular, we can plug in a pure imaginary value $ix$ (with real $x$) in place of $x$ and get the power series $e^ix=sum_n=0^inftyix^n/n!$. The rest of the quoted material is just separating the real and imaginary parts of this sum. The terms with even $n$ have an even power of $i$ so they're real, while the terms with odd $n$ have an odd power of $i$ so they're purely imaginary. After separating the real and imaginary parts in this way, and factoring out $i$ from the imaginary part, one finds that the real part is just (the power series of) $cos x$ and the imaginary part is $isin x$.
add a comment |Â
up vote
1
down vote
Both real analyticity and complex analyticity can be defined as "locally given by a convergent power series". In the complex case, there are several equivalent definitions that, at first sight, look quite different.
Differentiability is one; the Cauchy integral formula is another. (In the world of real numbers, differentiability (even infinite differentiability) is much weaker than analyticity, and there's no analog of the Cauchy integral formula.)
Any real analytic function, say with domain of analyticity $Dsubseteqmathbb R$, is the restriction to $D$ of a complex analytic function whose domain of analyticity in $mathbb C$ includes $D$. The proof is just to use the same power series but now with complex values of the input variable.
The material about $e^ix$ quoted in the question is the special case where you start with the exponential function. It's analytic on the whole real line, and so its power series expansions about various real numbers give a complex analytic extension on a domain in $mathbb C$ that includes $mathbb R$. The exponential function, however, is unusually nice, in that its power series expansion around a single point, for example the expansion $e^x=sum_n=0^inftyx^n/n!$ around $0$, has infinite radius of convergence, so by plugging in complex values for $x$, we get a complex analytic function defined on all of $mathbb C$. In particular, we can plug in a pure imaginary value $ix$ (with real $x$) in place of $x$ and get the power series $e^ix=sum_n=0^inftyix^n/n!$. The rest of the quoted material is just separating the real and imaginary parts of this sum. The terms with even $n$ have an even power of $i$ so they're real, while the terms with odd $n$ have an odd power of $i$ so they're purely imaginary. After separating the real and imaginary parts in this way, and factoring out $i$ from the imaginary part, one finds that the real part is just (the power series of) $cos x$ and the imaginary part is $isin x$.
add a comment |Â
up vote
1
down vote
up vote
1
down vote
Both real analyticity and complex analyticity can be defined as "locally given by a convergent power series". In the complex case, there are several equivalent definitions that, at first sight, look quite different.
Differentiability is one; the Cauchy integral formula is another. (In the world of real numbers, differentiability (even infinite differentiability) is much weaker than analyticity, and there's no analog of the Cauchy integral formula.)
Any real analytic function, say with domain of analyticity $Dsubseteqmathbb R$, is the restriction to $D$ of a complex analytic function whose domain of analyticity in $mathbb C$ includes $D$. The proof is just to use the same power series but now with complex values of the input variable.
The material about $e^ix$ quoted in the question is the special case where you start with the exponential function. It's analytic on the whole real line, and so its power series expansions about various real numbers give a complex analytic extension on a domain in $mathbb C$ that includes $mathbb R$. The exponential function, however, is unusually nice, in that its power series expansion around a single point, for example the expansion $e^x=sum_n=0^inftyx^n/n!$ around $0$, has infinite radius of convergence, so by plugging in complex values for $x$, we get a complex analytic function defined on all of $mathbb C$. In particular, we can plug in a pure imaginary value $ix$ (with real $x$) in place of $x$ and get the power series $e^ix=sum_n=0^inftyix^n/n!$. The rest of the quoted material is just separating the real and imaginary parts of this sum. The terms with even $n$ have an even power of $i$ so they're real, while the terms with odd $n$ have an odd power of $i$ so they're purely imaginary. After separating the real and imaginary parts in this way, and factoring out $i$ from the imaginary part, one finds that the real part is just (the power series of) $cos x$ and the imaginary part is $isin x$.
Both real analyticity and complex analyticity can be defined as "locally given by a convergent power series". In the complex case, there are several equivalent definitions that, at first sight, look quite different.
Differentiability is one; the Cauchy integral formula is another. (In the world of real numbers, differentiability (even infinite differentiability) is much weaker than analyticity, and there's no analog of the Cauchy integral formula.)
Any real analytic function, say with domain of analyticity $Dsubseteqmathbb R$, is the restriction to $D$ of a complex analytic function whose domain of analyticity in $mathbb C$ includes $D$. The proof is just to use the same power series but now with complex values of the input variable.
The material about $e^ix$ quoted in the question is the special case where you start with the exponential function. It's analytic on the whole real line, and so its power series expansions about various real numbers give a complex analytic extension on a domain in $mathbb C$ that includes $mathbb R$. The exponential function, however, is unusually nice, in that its power series expansion around a single point, for example the expansion $e^x=sum_n=0^inftyx^n/n!$ around $0$, has infinite radius of convergence, so by plugging in complex values for $x$, we get a complex analytic function defined on all of $mathbb C$. In particular, we can plug in a pure imaginary value $ix$ (with real $x$) in place of $x$ and get the power series $e^ix=sum_n=0^inftyix^n/n!$. The rest of the quoted material is just separating the real and imaginary parts of this sum. The terms with even $n$ have an even power of $i$ so they're real, while the terms with odd $n$ have an odd power of $i$ so they're purely imaginary. After separating the real and imaginary parts in this way, and factoring out $i$ from the imaginary part, one finds that the real part is just (the power series of) $cos x$ and the imaginary part is $isin x$.
answered 1 hour ago
Andreas Blass
48k349106
48k349106
add a comment |Â
add a comment |Â
up vote
0
down vote
Well, given a real function (of x), replacing x with a complex variable, usually z, we got a complex function. This is particularly true with Taylor/Laurent series. However, this relationship touches only the very surface of real analysis and complex analysis, and extremely superficial.
Because of a complex variable has two independent varying quantities: the real part and the imaginary part, the derivability is extremely restricted, and specified by Cauchy-Riemann condition, which in turn is a system of partial differential equations (two equations with two functions) which bears a unique solution under certain boundary conditions. This makes the analytic complex functions very restrictive, and brings us the theorem of unique analytic continuation. Another thing about complex analysis is integral around a pole, which solves a lot of calculations, to mention the least.
On the other hand, because it doesn't enjoy the perfect-like properties of the Cauchy-Riemann condition brings us, the real analysis goes to the other extreme: the research on the most delicate fine structure of the real functions; things like Cantor set, Dirichletian function, Lebesgue integral, to mention a few.
New contributor
add a comment |Â
up vote
0
down vote
Well, given a real function (of x), replacing x with a complex variable, usually z, we got a complex function. This is particularly true with Taylor/Laurent series. However, this relationship touches only the very surface of real analysis and complex analysis, and extremely superficial.
Because of a complex variable has two independent varying quantities: the real part and the imaginary part, the derivability is extremely restricted, and specified by Cauchy-Riemann condition, which in turn is a system of partial differential equations (two equations with two functions) which bears a unique solution under certain boundary conditions. This makes the analytic complex functions very restrictive, and brings us the theorem of unique analytic continuation. Another thing about complex analysis is integral around a pole, which solves a lot of calculations, to mention the least.
On the other hand, because it doesn't enjoy the perfect-like properties of the Cauchy-Riemann condition brings us, the real analysis goes to the other extreme: the research on the most delicate fine structure of the real functions; things like Cantor set, Dirichletian function, Lebesgue integral, to mention a few.
New contributor
add a comment |Â
up vote
0
down vote
up vote
0
down vote
Well, given a real function (of x), replacing x with a complex variable, usually z, we got a complex function. This is particularly true with Taylor/Laurent series. However, this relationship touches only the very surface of real analysis and complex analysis, and extremely superficial.
Because of a complex variable has two independent varying quantities: the real part and the imaginary part, the derivability is extremely restricted, and specified by Cauchy-Riemann condition, which in turn is a system of partial differential equations (two equations with two functions) which bears a unique solution under certain boundary conditions. This makes the analytic complex functions very restrictive, and brings us the theorem of unique analytic continuation. Another thing about complex analysis is integral around a pole, which solves a lot of calculations, to mention the least.
On the other hand, because it doesn't enjoy the perfect-like properties of the Cauchy-Riemann condition brings us, the real analysis goes to the other extreme: the research on the most delicate fine structure of the real functions; things like Cantor set, Dirichletian function, Lebesgue integral, to mention a few.
New contributor
Well, given a real function (of x), replacing x with a complex variable, usually z, we got a complex function. This is particularly true with Taylor/Laurent series. However, this relationship touches only the very surface of real analysis and complex analysis, and extremely superficial.
Because of a complex variable has two independent varying quantities: the real part and the imaginary part, the derivability is extremely restricted, and specified by Cauchy-Riemann condition, which in turn is a system of partial differential equations (two equations with two functions) which bears a unique solution under certain boundary conditions. This makes the analytic complex functions very restrictive, and brings us the theorem of unique analytic continuation. Another thing about complex analysis is integral around a pole, which solves a lot of calculations, to mention the least.
On the other hand, because it doesn't enjoy the perfect-like properties of the Cauchy-Riemann condition brings us, the real analysis goes to the other extreme: the research on the most delicate fine structure of the real functions; things like Cantor set, Dirichletian function, Lebesgue integral, to mention a few.
New contributor
New contributor
answered 1 hour ago
Miles Zhou
948
948
New contributor
New contributor
add a comment |Â
add a comment |Â
up vote
0
down vote
This is an interesting question, but one that might be hard to address completely. Let me see what I can do to help.
Real Power Series:
The easiest way to address the connection between the two subjects is through the study of power series, as you have already had alluded to you. A power series is a particular kind of infinite sum (there are many different kinds of these) of the form
$$
f(x) = sum_k=0^inftya_k x^k.
$$
They get their name from the fact that we are adding together powers of $x$ with different coefficients. In real analysis the argument of such a function (the "$x$" in $f(x)$) is taken to be a real number. And depending on the coefficients that are being multiplied with the powers of $x$, we get different intervals of convergence (intervals on which the sequence of partial sums converges.) For example, if $a_k$ is the $k$th Fibonacci number, then the radius of convergence ends up being $1/phi$, where $phi = (1+sqrt5)/2$ is the "golden ratio".
The most common kind of power series that come up in calculus (and real analysis) are Taylor series or Maclaurin series. These series are created to represent (a portion) of some differentiable function. Let me try to make this a little more concrete. Pick some function $f(x)$ that is infinitely differentiable at a fixed value, say at $x=a$. The Taylor series corresponding to $f(x)$, centered at $a$, is given by
$$
sum_k=0^inftyfracf^(k)(a)k!(x-a)^k.
$$
A Maclaurin series is just a Taylor series where $a=0$. After playing around with the series a bit, you may notice a few things about it.
- When $x=a$, the only non-zero term in the series is the first one: the constant $f(a)$. This means that no matter what the radius of convergence is for the series, it will at least agree with the original function at this one point.
- Taking derivatives of the power series and evaluating them at $x=a$, we see that the power series and the original function $f(x)$ have the same derivatives at $a$. This is by construction, and not a coincidence.
- If the radius of convergence is positive, then the interval on which the series converges will converge to the original function $f(x)$. In this way, we can say that $f(x)$ "equals" it's Taylor series, understanding that this equality may only hold on some interval centered at $a$, and perhaps not on the entire domain that $f(x)$ is defined on. You've already seen such an example with $(1+x^2)^-1$. In some extreme cases, such as with $sin x$ and $cos x$ as mentioned above, this equality ends up holding for ALL real values of $x$, and so the convergence is on all of $mathbbR$ and there is no harm completely equating the function with it's Taylor series.
- Even if the radius of convergence of a particular Taylor series (centered at $a$) is finite, it does not mean you cannot take other Taylor series (centered at other values than $a$) that also have a positive radius of convergence. For example, even though the Maclaurin series for $(1+x^2)^-1$ does not converge outside of the interval of radius 1, you can compute the Taylor series centered at $a=1$,
$$
sum_k=0^infty left( frac1k! fracd^kdx^kleft(frac11+x^2right)Bigg|_x=1(x-1)^k right).
$$
You will then find that this new power series converges on a slightly larger radius than 1, and that the two power series (one centered at 0, the other centered at 1) overlap and agree for certain values of $x$.
The big take away that you should have about power series and Taylor series is that they are one and the same. Defined a function by a power series, and then take it's Taylor series centered at the same point; you will get the same series. Conversely, any infinitely-differentiable function that has a Taylor series with positive radius of convergence is uniquely determined by that power series.
This is where complex analysis beings to come into play...
Complex Power Series:
Complex numbers have many similar properties to real numbers. They form and algebra (you can do arithmetic with them); the only number you still cannot divide by is zero; and the absolute value of a complex number still tells you the distance from 0 that number is. In particular there is nothing stopping you from defining power series of complex numbers, with z=x+iy:
$$
f(z) = sum_k=0^inftyc_k z^k.
$$
The only difference now is that the coefficients $c_k$ can be complex numbers, and the radius of convergence is now relating to the radius of a circle (as opposed to the radius of an interval). Things may seem exactly like in the real-valued situation, but there is more lurking beneath the surface.
For starters, let me define some new vocabulary terms.
- We say that a complex function is complex differentiable at some fixed value $z=w$ if the following limit exists:
$$
lim_zto wfracf(z)-f(w)z-w.
$$
If this limit exists, we denote the value of the limit by $f'(w)$. This should look familiar, as it is the same limit definition for real-valued derivatives. - In the same way that we could create Taylor series for real-valued functions, we can also create Taylor series (center at $z=w$) for complex-valued functions, provided the functions have an infinite number of derivatives at $w$ (sometimes referred to as being holomorphic at $w$):
$$
sum_k=0^inftyfracf^(k)(w)k!(z-w)^k.
$$
If the Taylor series defined above has a positive radius of convergence then we say that $f$ is analytic at $w$. This vocabulary is also used in the case of real-valued Taylor series.
Perhaps the biggest surprise in complex analysis is that the following conditions are all equivalent:
$f(z)$ is complex differentiable at $z=w$.
$f(z)$ is holomorphic at $w$. That is $f$ has an infinite number of derivatives at $z=w$. In real analysis this condition is sometimes referred to as being "smooth".
$f(z)$ is analytic at $z=w$. That is it's Taylor series converges to $f(z)$ with some positive radius of convergence.
This means that being differentiable in the complex sense is a much harder thing to accomplish than in the real sense. Consider the contrast with the "real-valued" equivalents of the points made above:
- In real analysis there are a number of functions with only finitely many derivatives. For example, $x^2 sin(1/x)$ has only one derivative at $x=0$, and $f'(x)$ is not even continuous, let alone twice differentiable.
- There are also real-valued functions that are smooth (infinitely-differentiable) yet do not have a convergent Taylor series. An example is the function $e^-1/x^2$ which is smooth for every $x$, but for which every order of derivative at $x=0$ is equal to zero; this means every coefficient in the Maclaurin series is zero, and the radius of convergence is zero as well.
These kinds of pathologies do not occur in the complex world: one derivative is as good as an infinite number of derivatives; differentiability at one point translates to differentiability on a neighborhood of that point.
Laurent Series:
A natural question that one might ask is what dictates the radius of convergence for a power series? In the real-valued case things seemed to be fairly unpredictable. However, in the complex-valued case things are much more elegant.
Let $f(z)$ be differentiable at some point $z=w$. Then the radius of convergence for the Taylor series of $f(z)$ centered at $w$ will be the distance to the nearest complex number at which $f(z)$ fails to be differentiable. Think of it like dropping a pebble into a pool of water. The ripples will extend radially outward from the initial point of differentiability, all the way until the circular edge of the ripple hits the first "singularity" -- a point where $f(z)$ fails to be differentiable.
Take the complex version of our previous example,
$$
f(z) = frac11+z^2.
$$
This is a ration function, and will be smooth for all values of $z$ where the denominator is non-zero. Since the only roots of $z^2 + 1$ are $z = i$ and $z = -i$, then $f(z)$ is differentiable/smooth/analytic at all values $wneq pm i$. This is precisely why the radius of convergence for the real-valued Maclaurin series is 1, as you've already noted: the shortest distance from $z=0$ to $z=pm i$ is 1. The real-valued Maclaurin series is just a "snapshot" or "sliver" of the complex-valued Taylor series centered at $z=0$. This is also why the radius of convergence for the real-valued Taylor series increases when you move away from zero; the distance to $pm i$ becomes greater, and so the complex-valued Taylor series can converge on a larger disk.
So now should come the question: when exactly does a complex function fail to be differentiable?
Without going into too many details from complex analysis, suffice it to say that complex functions fail to be differentiable when one of three things occurs:
- The function has a "singularity" (think division by zero).
- The function is defined in terms of the complex conjugate, $barz$. For example, $f(z) = |z|^2 = z,barz$.
- The function involves logarithms or non-integer exponents (these two ideas are actually related).
Number 2. is perhaps the most egregious of the three issues, and it means that functions like $f(z) = |z|$ are actually differentiable nowhere. This is in stark contrast to the real-valued version $f(x) = |x|$, which is differentiable everywhere except at $x=0$ where there is a "corner".
Number 3. is actually not too bad, and it turns out that these kinds of functions are usually differentiable everywhere except along certain rays or line segments. To get into it further, however, will take us too far off course.
Number 1. is the best case scenario, and is the focus of this section of our discussion. Essentially, singularities are places where division by zero has occurred, and the extend to which something has "gone wrong" can be quantified. Let me try to elaborate.
Consider the previous example of $f(z) = (1+z^2)^-1$. Again, since the denominator can factor into the product $(z+i)(z-i)$, then this means we could "erase" the singularity at $z=i$ by multiplying the function by a copy of $z-i$. In other words if
$$
g(z) = (z+i)f(z) = fracz-i1+z^2,
$$
then $g(z) = z+i$ for all $zneq i$, and $lim_zto ig(z) = 2i$ exists, and is no longer a singularity.
Similarly, if $f(z) = 1/(1+z)^3$, then we again have singularities at $z=pm i$. This time, however, multiplying $f(z)$ by only one copy of $z-i$ will not remove the singularity at $z=i$. Instead, we would need to multiply by three copies to get
$$
g(z) = (z-i)^3 f(z) = frac(z-i)^3(1+z^2)^3,
$$
which again means that $g(z) = (z+i)^3$ for all $zneq i$, and that $lim_zto ig(z) = (2i)^3 = -8i$ exists.
Singularities like this --- ones that can be "removed" through the use of multiplication of a finite number of linear terms --- are called poles. The order of the pole is the minimum number of liner terms need to remove the singularity.
The real-valued Maclaurin series for $sin x$ is given by
$$
sin x = sum_k=0^infty frac(-1)^k(2k+1)!x^2k+1,
$$
and has a infinite radius of convergence. This means that the complex version
$$
sin z = sum_k=0^infty frac(-1)^k(2k+1)!z^2k+1 = z - frac13!z^3 + frac15!z^5 - cdots
$$
also has an infinite range of convergence (such functions are called entire) and hence no singularities. From here it's easy to see that the function $(sin z)/z$ is analytic as well, with Taylor series
$$
fracsin zz = sum_k=0^infty frac(-1)^k(2k+1)!z^2k = 1 - frac13!z^2 + frac15!z^4 - cdots
$$
However, a function like $(sin z)/z^3$ is not analytic at $z=0$, since dividing $sin z$ by $z^3$ would give us the following expression:
$$
fracsin zz^3 = frac1z^2 - frac13! + frac15!z^2 - cdots
$$
But notice that if we were to subtract the term $1/z^2$ from both sides we would be left again with a proper Taylor series
$$
fracsin zz^3 - frac1z^2 = fracsin z - zz^3 = frac13! + frac15!z^2 - frac17!z^4 + cdots
$$
This idea of extending the idea of Taylor series to include terms with negative powers of $z$ is what is referred to a Laurent series. A Laurent series is a power series in which the powers of $z$ are allowed to take on negative values, as well as positive:
$$
f(z) := sum_k = -infty^infty c_k z^k
$$
In this way we can expand complex functions around singular points in a fashion similar to expanding around analytic points.
A pole, it turns out, is a singular point for which there are a finite number of terms with negative powers, such as with $(sin z)/z^3$. If, however, an infinite number of negative powers are needed to fully express a Laurent series, then this type of singular point is called an essential singularity. An excellent example of such a function can be made by taking an analytic function (one with a Taylor series) and replacing $z$ by $1/z$:
beginalign
sin(1/z) &= sum_k=0^inftyfrac(-1)^k(2k+1)!(1/z)^k \
&= sum_k=0^inftyfrac(-1)^k(2k+1)!z^-k \
&= sum_k=-infty^0frac(-1)^-k(1-2k)!z^k
endalign
These kinds of singularities are quite severe and the behavior of complex functions around such a point is rather erratic. This also explains why the real-valued function $e^-1/x^2$ was so pathological. The Taylor series for $e^z$ is given by
$$
e^z = sum_k=0^inftyfrac1k!z^k
$$
and so
beginalign
e^-1/z^2 &= sum_k=0^inftyfrac1k!(-1/z^2)^k \
&= sum_k=0^inftyfrac1k!(-1)^k z^-2k \
&= sum_k=-infty^0frac1(-k)!(-1)^-k z^2k.
endalign
Hence there is an essential singularity at $z=0$, and so even though the real-valued version is smooth at $x=0$, there is no hope of differentiability in a disk around $z=0$.
add a comment |Â
up vote
0
down vote
This is an interesting question, but one that might be hard to address completely. Let me see what I can do to help.
Real Power Series:
The easiest way to address the connection between the two subjects is through the study of power series, as you have already had alluded to you. A power series is a particular kind of infinite sum (there are many different kinds of these) of the form
$$
f(x) = sum_k=0^inftya_k x^k.
$$
They get their name from the fact that we are adding together powers of $x$ with different coefficients. In real analysis the argument of such a function (the "$x$" in $f(x)$) is taken to be a real number. And depending on the coefficients that are being multiplied with the powers of $x$, we get different intervals of convergence (intervals on which the sequence of partial sums converges.) For example, if $a_k$ is the $k$th Fibonacci number, then the radius of convergence ends up being $1/phi$, where $phi = (1+sqrt5)/2$ is the "golden ratio".
The most common kind of power series that come up in calculus (and real analysis) are Taylor series or Maclaurin series. These series are created to represent (a portion) of some differentiable function. Let me try to make this a little more concrete. Pick some function $f(x)$ that is infinitely differentiable at a fixed value, say at $x=a$. The Taylor series corresponding to $f(x)$, centered at $a$, is given by
$$
sum_k=0^inftyfracf^(k)(a)k!(x-a)^k.
$$
A Maclaurin series is just a Taylor series where $a=0$. After playing around with the series a bit, you may notice a few things about it.
- When $x=a$, the only non-zero term in the series is the first one: the constant $f(a)$. This means that no matter what the radius of convergence is for the series, it will at least agree with the original function at this one point.
- Taking derivatives of the power series and evaluating them at $x=a$, we see that the power series and the original function $f(x)$ have the same derivatives at $a$. This is by construction, and not a coincidence.
- If the radius of convergence is positive, then the interval on which the series converges will converge to the original function $f(x)$. In this way, we can say that $f(x)$ "equals" it's Taylor series, understanding that this equality may only hold on some interval centered at $a$, and perhaps not on the entire domain that $f(x)$ is defined on. You've already seen such an example with $(1+x^2)^-1$. In some extreme cases, such as with $sin x$ and $cos x$ as mentioned above, this equality ends up holding for ALL real values of $x$, and so the convergence is on all of $mathbbR$ and there is no harm completely equating the function with it's Taylor series.
- Even if the radius of convergence of a particular Taylor series (centered at $a$) is finite, it does not mean you cannot take other Taylor series (centered at other values than $a$) that also have a positive radius of convergence. For example, even though the Maclaurin series for $(1+x^2)^-1$ does not converge outside of the interval of radius 1, you can compute the Taylor series centered at $a=1$,
$$
sum_k=0^infty left( frac1k! fracd^kdx^kleft(frac11+x^2right)Bigg|_x=1(x-1)^k right).
$$
You will then find that this new power series converges on a slightly larger radius than 1, and that the two power series (one centered at 0, the other centered at 1) overlap and agree for certain values of $x$.
The big take away that you should have about power series and Taylor series is that they are one and the same. Defined a function by a power series, and then take it's Taylor series centered at the same point; you will get the same series. Conversely, any infinitely-differentiable function that has a Taylor series with positive radius of convergence is uniquely determined by that power series.
This is where complex analysis beings to come into play...
Complex Power Series:
Complex numbers have many similar properties to real numbers. They form and algebra (you can do arithmetic with them); the only number you still cannot divide by is zero; and the absolute value of a complex number still tells you the distance from 0 that number is. In particular there is nothing stopping you from defining power series of complex numbers, with z=x+iy:
$$
f(z) = sum_k=0^inftyc_k z^k.
$$
The only difference now is that the coefficients $c_k$ can be complex numbers, and the radius of convergence is now relating to the radius of a circle (as opposed to the radius of an interval). Things may seem exactly like in the real-valued situation, but there is more lurking beneath the surface.
For starters, let me define some new vocabulary terms.
- We say that a complex function is complex differentiable at some fixed value $z=w$ if the following limit exists:
$$
lim_zto wfracf(z)-f(w)z-w.
$$
If this limit exists, we denote the value of the limit by $f'(w)$. This should look familiar, as it is the same limit definition for real-valued derivatives. - In the same way that we could create Taylor series for real-valued functions, we can also create Taylor series (center at $z=w$) for complex-valued functions, provided the functions have an infinite number of derivatives at $w$ (sometimes referred to as being holomorphic at $w$):
$$
sum_k=0^inftyfracf^(k)(w)k!(z-w)^k.
$$
If the Taylor series defined above has a positive radius of convergence then we say that $f$ is analytic at $w$. This vocabulary is also used in the case of real-valued Taylor series.
Perhaps the biggest surprise in complex analysis is that the following conditions are all equivalent:
$f(z)$ is complex differentiable at $z=w$.
$f(z)$ is holomorphic at $w$. That is $f$ has an infinite number of derivatives at $z=w$. In real analysis this condition is sometimes referred to as being "smooth".
$f(z)$ is analytic at $z=w$. That is it's Taylor series converges to $f(z)$ with some positive radius of convergence.
This means that being differentiable in the complex sense is a much harder thing to accomplish than in the real sense. Consider the contrast with the "real-valued" equivalents of the points made above:
- In real analysis there are a number of functions with only finitely many derivatives. For example, $x^2 sin(1/x)$ has only one derivative at $x=0$, and $f'(x)$ is not even continuous, let alone twice differentiable.
- There are also real-valued functions that are smooth (infinitely-differentiable) yet do not have a convergent Taylor series. An example is the function $e^-1/x^2$ which is smooth for every $x$, but for which every order of derivative at $x=0$ is equal to zero; this means every coefficient in the Maclaurin series is zero, and the radius of convergence is zero as well.
These kinds of pathologies do not occur in the complex world: one derivative is as good as an infinite number of derivatives; differentiability at one point translates to differentiability on a neighborhood of that point.
Laurent Series:
A natural question that one might ask is what dictates the radius of convergence for a power series? In the real-valued case things seemed to be fairly unpredictable. However, in the complex-valued case things are much more elegant.
Let $f(z)$ be differentiable at some point $z=w$. Then the radius of convergence for the Taylor series of $f(z)$ centered at $w$ will be the distance to the nearest complex number at which $f(z)$ fails to be differentiable. Think of it like dropping a pebble into a pool of water. The ripples will extend radially outward from the initial point of differentiability, all the way until the circular edge of the ripple hits the first "singularity" -- a point where $f(z)$ fails to be differentiable.
Take the complex version of our previous example,
$$
f(z) = frac11+z^2.
$$
This is a ration function, and will be smooth for all values of $z$ where the denominator is non-zero. Since the only roots of $z^2 + 1$ are $z = i$ and $z = -i$, then $f(z)$ is differentiable/smooth/analytic at all values $wneq pm i$. This is precisely why the radius of convergence for the real-valued Maclaurin series is 1, as you've already noted: the shortest distance from $z=0$ to $z=pm i$ is 1. The real-valued Maclaurin series is just a "snapshot" or "sliver" of the complex-valued Taylor series centered at $z=0$. This is also why the radius of convergence for the real-valued Taylor series increases when you move away from zero; the distance to $pm i$ becomes greater, and so the complex-valued Taylor series can converge on a larger disk.
So now should come the question: when exactly does a complex function fail to be differentiable?
Without going into too many details from complex analysis, suffice it to say that complex functions fail to be differentiable when one of three things occurs:
- The function has a "singularity" (think division by zero).
- The function is defined in terms of the complex conjugate, $barz$. For example, $f(z) = |z|^2 = z,barz$.
- The function involves logarithms or non-integer exponents (these two ideas are actually related).
Number 2. is perhaps the most egregious of the three issues, and it means that functions like $f(z) = |z|$ are actually differentiable nowhere. This is in stark contrast to the real-valued version $f(x) = |x|$, which is differentiable everywhere except at $x=0$ where there is a "corner".
Number 3. is actually not too bad, and it turns out that these kinds of functions are usually differentiable everywhere except along certain rays or line segments. To get into it further, however, will take us too far off course.
Number 1. is the best case scenario, and is the focus of this section of our discussion. Essentially, singularities are places where division by zero has occurred, and the extend to which something has "gone wrong" can be quantified. Let me try to elaborate.
Consider the previous example of $f(z) = (1+z^2)^{-1}$. Since the denominator factors into the product $(z+i)(z-i)$, we can "erase" the singularity at $z=i$ by multiplying the function by a copy of $z-i$. In other words, if
$$
g(z) = (z-i)f(z) = \frac{z-i}{1+z^2},
$$
then $g(z) = \frac{1}{z+i}$ for all $z \neq i$, and $\lim_{z\to i} g(z) = \frac{1}{2i} = -\frac{i}{2}$ exists, so $z=i$ is no longer a singularity.
Similarly, if $f(z) = 1/(1+z^2)^3$, then we again have singularities at $z=\pm i$. This time, however, multiplying $f(z)$ by only one copy of $z-i$ will not remove the singularity at $z=i$. Instead, we need to multiply by three copies to get
$$
g(z) = (z-i)^3 f(z) = \frac{(z-i)^3}{(1+z^2)^3},
$$
which means that $g(z) = \frac{1}{(z+i)^3}$ for all $z \neq i$, and that $\lim_{z\to i} g(z) = \frac{1}{(2i)^3} = \frac{i}{8}$ exists.
Singularities like this --- ones that can be "removed" through multiplication by a finite number of linear terms --- are called poles. The order of the pole is the minimum number of linear terms needed to remove the singularity.
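Here is a small sympy sketch (my own check of the order-3 example above): keep multiplying by $(z-i)$ until substituting $z=i$ produces a finite value; sympy prints `zoo` (complex infinity) while the singularity remains.

    import sympy as sp

    z = sp.symbols('z')
    f = 1 / (1 + z**2) ** 3
    for m in range(5):
        g = sp.cancel((z - sp.I) ** m * f)  # cancel common factors of (z - i)
        print(m, g.subs(z, sp.I))           # zoo, zoo, zoo, I/8, 0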
The real-valued Maclaurin series for $\sin x$ is given by
$$
\sin x = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}x^{2k+1},
$$
and has an infinite radius of convergence. This means that the complex version
$$
\sin z = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}z^{2k+1} = z - \frac{1}{3!}z^3 + \frac{1}{5!}z^5 - \cdots
$$
also has an infinite radius of convergence (such functions are called entire) and hence no singularities. From here it's easy to see that the function $(\sin z)/z$ is analytic as well, with Taylor series
$$
\frac{\sin z}{z} = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}z^{2k} = 1 - \frac{1}{3!}z^2 + \frac{1}{5!}z^4 - \cdots
$$
However, a function like $(\sin z)/z^3$ is not analytic at $z=0$, since dividing $\sin z$ by $z^3$ would give us the following expression:
$$
\frac{\sin z}{z^3} = \frac{1}{z^2} - \frac{1}{3!} + \frac{1}{5!}z^2 - \cdots
$$
But notice that if we were to subtract the term $1/z^2$ from both sides, we would be left again with a proper Taylor series:
$$
\frac{\sin z}{z^3} - \frac{1}{z^2} = \frac{\sin z - z}{z^3} = -\frac{1}{3!} + \frac{1}{5!}z^2 - \frac{1}{7!}z^4 + \cdots
$$
Extending the idea of Taylor series to include terms with negative powers of $z$ is what is referred to as a Laurent series. A Laurent series is a power series in which the powers of $z$ are allowed to take on negative values as well as positive ones:
$$
f(z) := \sum_{k=-\infty}^{\infty} c_k z^k
$$
In this way we can expand complex functions around singular points in a fashion similar to expanding around analytic points.
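As a quick check (my own sketch), sympy's `series` computes such expansions directly; for $(\sin z)/z^3$ it reproduces the single negative-power term $z^{-2}$ found above, so $z=0$ is a pole of order 2:

    import sympy as sp

    z = sp.symbols('z')
    # Laurent expansion about z = 0; series() happily handles poles.
    print(sp.series(sp.sin(z) / z**3, z, 0, 6))
    # -> z**(-2) - 1/6 + z**2/120 - z**4/5040 + O(z**6)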
A pole, it turns out, is a singular point for which only finitely many terms with negative powers appear, such as with $(\sin z)/z^3$. If, however, an infinite number of negative powers are needed to fully express the Laurent series, then this type of singular point is called an essential singularity. An excellent example of such a function can be made by taking an analytic function (one with a Taylor series) and replacing $z$ by $1/z$:
\begin{align}
\sin(1/z) &= \sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}(1/z)^{2k+1} \\
&= \sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}z^{-(2k+1)} \\
&= \sum_{k=-\infty}^{0}\frac{(-1)^{-k}}{(1-2k)!}z^{2k-1}
\end{align}
These kinds of singularities are quite severe, and the behavior of complex functions around such a point is rather erratic. This also explains why the real-valued function $e^{-1/x^2}$ was so pathological. The Taylor series for $e^z$ is given by
$$
e^z = \sum_{k=0}^{\infty}\frac{1}{k!}z^k
$$
and so
\begin{align}
e^{-1/z^2} &= \sum_{k=0}^{\infty}\frac{1}{k!}(-1/z^2)^k \\
&= \sum_{k=0}^{\infty}\frac{1}{k!}(-1)^k z^{-2k} \\
&= \sum_{k=-\infty}^{0}\frac{1}{(-k)!}(-1)^{-k} z^{2k}.
\end{align}
Hence there is an essential singularity at $z=0$, and so even though the real-valued version is smooth at $x=0$, there is no hope of differentiability in a disk around $z=0$.
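A final numeric sketch (my own illustration) of how erratic the point $z=0$ is for $e^{-1/z^2}$: along the real axis the function tends to $0$, but along the imaginary axis $-1/(iy)^2 = +1/y^2$, so it blows up.

    import cmath

    for r in (0.5, 0.2, 0.1):
        along_real = cmath.exp(-1 / complex(r, 0) ** 2)  # -> 0
        along_imag = cmath.exp(-1 / complex(0, r) ** 2)  # -> infinity
        print(r, abs(along_real), abs(along_imag))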
answered 9 mins ago
Patch