Example of a continuous function that is difficult to approximate with polynomials

For teaching purposes I'd need a continuous function of a single variable that is "difficult" to approximate with polynomials, i.e. one would need very high powers in a power series to "fit" this function well. I intend to show my students the "limits" of what can be achieved with power series.



I thought about concocting something "noisy", but instead of rolling my own I am just wondering whether there is a kind of standard "difficult function" that people use for testing approximation / interpolation algorithms, somewhat similarly to those optimisation test functions that have numerous local minima where naive algorithms get stuck easily.



Apologies if this question is not well-formed; please have mercy on a non-mathematician.







asked Sep 6 at 13:23 by Laryx Decidua




















          5 Answers

















Answer by leftaroundabout (score 10), answered Sep 6 at 22:04

          Why not simply show the absolute value function?



          Approximation with e.g. Legendre-polynomial expansion works, but pretty badly:



[Figure: sequential approximation of the absolute-value function by polynomials]



          Taylor expansion is of course completely useless here, always giving only a linear function, either always decreasing or always increasing (depending on whether the point you expand around is negative or positive).
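A minimal numpy sketch of this experiment (not the code behind the plot above; the sample grid and degrees are chosen arbitrarily for illustration) — a discrete least-squares fit in the Legendre basis stands in for the expansion:

    import numpy as np
    from numpy.polynomial import legendre as L

    # Least-squares Legendre fit of f(x) = |x| on [-1, 1]: the worst-case
    # error shrinks only slowly as the degree grows, mirroring the plot.
    x = np.linspace(-1.0, 1.0, 2001)
    f = np.abs(x)
    for deg in (2, 8, 32, 128):
        coef = L.legfit(x, f, deg)                   # fit coefficients
        err = np.max(np.abs(L.legval(x, coef) - f))  # max pointwise error
        print(f"degree {deg:4d}: max error {err:.2e}")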






          • You can interpolate |x| using Chebyshev interpolation, see nbviewer.jupyter.org/github/cpraveen/na/blob/master/… which converges quite fast. E.g., you can change N=2*i in the code to N=15+i and test larger degree. It is not an expansion method but still based on polynomials.
            – Praveen Chandrashekar
            Sep 7 at 2:26










          • @PraveenChandrashekar Chebyshev works “better” because it puts more weight on the outer parts of the interval, where the function is smooth. Thus the excessive oscillation is avoided, but saying it approximates the function better is dubious – it does in particular capture the sharp turn at $x=0$ even worse than uniform-discrete-points or $L^2$-minimisation. If your goal is avoiding high-frequency components, better use an integral transformation that properly damps these components.
            – leftaroundabout
            Sep 7 at 9:59











          • It is perfectly fine to have non-uniform points as in Chebyshev interpolation. With degree about 20, it gives a lot more accurate approximation than Legendre that you show in your post. Measure the errors to be more quantitative. You can also do Chebyshev series approximation of |x| which is more accurate than Legendre expansion.
            – Praveen Chandrashekar
            Sep 7 at 16:02










          • @PraveenChandrashekar the point is that polynomials are in principle not able to approximate a function like $x \mapsto |x|$ properly. There are different methods, each of which fails a bit more or less spectacularly, but none of them work well in the sense of “only a few terms give something that could be mistaken for the original function”. If you must use polynomials, you need to consider which kinds of error are more problematic; Legendre and Chebyshev both have their use cases, but there's no silver bullet. Ultimately, an approach with e.g. splines is typically more effective.
            – leftaroundabout
            Sep 7 at 17:52











          • We know there is no perfect method. The question is which functions are difficult for polynomials to approximate, so one has to try all the methods involving polynomials before concluding that none of them do a good job. Legendre is not the best way to approximate |x|, and hence it gives a rather false impression that polynomials are too bad for |x|. With Chebyshev you have convergence and far better approximations than Legendre; they don't oscillate as badly as Legendre, though they converge slowly near x=0, where the function is not smooth.
            – Praveen Chandrashekar
            Sep 8 at 2:42

















Answer by Wolfgang Bangerth (score 8), answered Sep 6 at 15:28

          It's a pathological case, but you can always resort to the Weierstrass monster function. It illustrates a broader point, namely that functions that are not smooth -- e.g., that have a kink -- are difficult to approximate because the interpolation error estimates require the function being interpolated to be differentiable a number of times. In other words, if you don't like the Weierstrass function too much, you can always just choose $|x|$.
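For the curious, here is a minimal numpy sketch (mine, not the answerer's) that evaluates a partial sum of the Weierstrass function with the classical parameters $a = 1/2$, $b = 13$, which satisfy Weierstrass's condition $ab > 1 + 3\pi/2$, and then tries a high-degree polynomial fit:

    import numpy as np

    def weierstrass(x, a=0.5, b=13.0, terms=25):
        # Partial sum of W(x) = sum_n a^n cos(b^n * pi * x). The neglected
        # tail is below a**terms / (1 - a), i.e. tiny for a = 1/2, terms = 25.
        n = np.arange(terms, dtype=float)[:, None]
        return np.sum(a**n * np.cos(b**n * np.pi * np.asarray(x)), axis=0)

    x = np.linspace(-1.0, 1.0, 4001)
    w = weierstrass(x)
    # Even a degree-200 Chebyshev least-squares fit cannot resolve the
    # roughness: the worst-case error stays far from zero.
    c = np.polynomial.chebyshev.chebfit(x, w, 200)
    print("max error:", np.max(np.abs(np.polynomial.chebyshev.chebval(x, c) - w)))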






          • Thanks, this is exactly what I meant by "I thought about something 'noisy'". Very good example IMO.
            – Laryx Decidua
            Sep 7 at 12:47

















Answer by GertVdE (score 6), answered Sep 6 at 14:01, edited Sep 6 at 15:48 by nicoguaro

Approximation is made hard not only by the function to be approximated but also by the interval on which the approximation should be a "good fit". And you should define the measure of a "good fit": what is the maximum (absolute or relative) error you are willing to tolerate?



For example, you will need a huge number of terms in the Taylor series of $\exp(x)$ to have a reasonable fit on the interval $[0,10]$. The same holds for periodic functions: take $\sin(x)$, for example, on the interval $[0,2\pi]$. See the pictures below.

[Figures: Taylor approximations of $\exp(x)$ on $[0,10]$ and of $\sin(x)$ on $[0,2\pi]$]
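A quick sanity check of the $\exp(x)$ claim in plain Python (the term counts are chosen for illustration):

    import math

    # Partial sums of the Maclaurin series of exp(x), evaluated at the far
    # end of the interval [0, 10]: dozens of terms are needed for a good fit.
    x = 10.0
    for terms in (10, 20, 30, 40):
        s = sum(x**k / math.factorial(k) for k in range(terms))
        rel_err = abs(s - math.exp(x)) / math.exp(x)
        print(f"{terms:2d} terms: relative error {rel_err:.1e}")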






          • I show such examples in my course to make the point that Taylor expansion is not a good method to approximate functions.
            – Praveen Chandrashekar
            Sep 7 at 16:16

















Answer by Praveen Chandrashekar (score 4), answered Sep 8 at 3:09, edited Sep 8 at 13:37 by Anton Menshov

          Polynomials are surprisingly effective at function approximation [1]. If you have at least Lipschitz continuity, then Chebyshev approximations will converge. Of course, convergence may be slow, and that is the price we pay for dealing with a non-smooth function.
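A small numpy sketch of that convergence claim (my construction, not from the answer): Chebyshev least-squares fits of the Lipschitz but non-smooth $|x|$ do converge, yet only algebraically, not geometrically as they would for an analytic function.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    x = np.cos(np.linspace(0.0, np.pi, 4001))  # samples clustered near +-1
    f = np.abs(x)
    for deg in (4, 16, 64, 256):
        c = C.chebfit(x, f, deg)  # least-squares fit in the Chebyshev basis
        print(f"degree {deg:4d}: max error {np.max(np.abs(C.chebval(x, c) - f)):.2e}")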



Today, computers are much faster than in the days when many numerical analysis books were written, and clever algorithms have increased the speed further, so having to use more terms may not be as bad as it used to be.



Pathological examples like the Weierstrass monster function are interesting from a theoretical point of view, but they are not representative of most real application contexts.



I think we should be careful not to give our students a pessimistic view. OK, there is no method that works for all problems. But we can build adaptive methods, even ones based on polynomials, that can deal with such cases. For example, Chebfun [2] can easily approximate $|x|$ by automatically splitting the domain at $x=0$, as the sketch below illustrates.
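Chebfun itself is a MATLAB package, but the splitting idea is easy to imitate; a hedged numpy sketch:

    import numpy as np
    from numpy.polynomial import chebyshev as C

    x = np.linspace(-1.0, 1.0, 2001)
    f = np.abs(x)

    # One global fit: the kink at x = 0 limits the achievable accuracy.
    c = C.chebfit(x, f, 30)
    print("global, degree 30:", np.max(np.abs(C.chebval(x, c) - f)))

    # Split the domain at the kink (the idea behind Chebfun's splitting):
    # |x| is linear on each half, so degree 1 per piece is already exact.
    for piece in (x[x <= 0.0], x[x >= 0.0]):
        c = C.chebfit(piece, np.abs(piece), 1)
        print("piece,  degree 1:", np.max(np.abs(C.chebval(piece, c) - np.abs(piece))))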



It is important to teach the difficulties of approximation with polynomials, but it is also important to tell the students that we can build error estimates and adaptive algorithms that deal with these issues.



          [1] https://people.maths.ox.ac.uk/trefethen/mythspaper.pdf



          [2] http://www.chebfun.org






Answer by Christopher Wells (score 2), answered Sep 7 at 8:35

$f(x) = \frac{1}{x^2+1}$ has the Maclaurin series expansion:



$\frac{1}{x^2+1} = 1 - x^2 + x^4 - x^6 + x^8 - x^{10} + x^{12} - \ldots$



            This converges for $-1<x<1$, but it diverges everywhere else. A polynomial approximation around $x=0$ will never get you anywhere near the right answer for $x=2$.
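A short demonstration in plain Python: at $x = 2$ the partial sums oscillate with exploding magnitude instead of approaching the true value $1/(1+2^2) = 0.2$.

    # Partial sums of 1 - x^2 + x^4 - ... evaluated outside the radius of
    # convergence, at x = 2.
    x = 2.0
    partial = 0.0
    for n in range(8):
        partial += (-1) ** n * x ** (2 * n)
        print(f"up to degree {2 * n:2d}: partial sum = {partial:9.1f}")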





