Uncorrelatedness + Joint Normality = Independence. Why? Intuition and mechanics

Two variables that are uncorrelated are not necessarily independent. A simple example is $X$ and $X^2$ for $X$ symmetric about zero (say $X \sim \operatorname{N}(0,1)$): they are uncorrelated because $\operatorname{cov}(X, X^2) = E[X^3] = 0$, yet they are clearly dependent. However, two variables that are uncorrelated AND jointly normally distributed are guaranteed to be independent. Can someone explain intuitively why this is true? What exactly does joint normality of two variables add to the knowledge of zero correlation, such that we can conclude the two variables MUST be independent?
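To make the contrast concrete, here is a minimal simulation sketch (NumPy; the seed and sample size are arbitrary). In the first case zero correlation coexists with obvious dependence; in the second, the jointly normal pair shows no dependence even through a nonlinear transform. Checking correlation of absolute values is only a heuristic probe for dependence, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed
n = 100_000

# Case 1: uncorrelated but dependent: X and X^2 for X symmetric about 0
x = rng.standard_normal(n)
y = x ** 2
print(np.corrcoef(x, y)[0, 1])           # ~0: uncorrelated
print(np.corrcoef(np.abs(x), y)[0, 1])   # far from 0: strongly dependent

# Case 2: uncorrelated AND jointly normal
uv = rng.multivariate_normal([0, 0], [[1, 0], [0, 1]], size=n)
u, v = uv[:, 0], uv[:, 1]
print(np.corrcoef(u, v)[0, 1])                   # ~0: uncorrelated
print(np.corrcoef(np.abs(u), np.abs(v))[0, 1])   # also ~0: no hidden dependence
```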










Tags: correlation, normal-distribution, independence, joint-distribution






asked by ColorStatistics · edited by Michael Hardy · score: 3
2 Answers

















Accepted answer (score 4), by a_statistician (edited by Michael Hardy):
The joint probability density function (pdf) of the bivariate normal distribution is
$$f(x_1,x_2)=\frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\exp\left[-\frac{z}{2(1-\rho^2)}\right],$$

where

$$z=\frac{(x_1-\mu_1)^2}{\sigma_1^2}-\frac{2\rho(x_1-\mu_1)(x_2-\mu_2)}{\sigma_1\sigma_2}+\frac{(x_2-\mu_2)^2}{\sigma_2^2}.$$

When $\rho = 0$, the cross term in $z$ vanishes, the exponent splits into a sum, and the exponential factors:
$$\begin{align}f(x_1,x_2) &=\frac{1}{2\pi\sigma_1\sigma_2}\exp\left[-\frac{1}{2}\left\{\frac{(x_1-\mu_1)^2}{\sigma_1^2}+\frac{(x_2-\mu_2)^2}{\sigma_2^2}\right\}\right]\\
&= \frac{1}{\sqrt{2\pi}\,\sigma_1}\exp\left[-\frac{(x_1-\mu_1)^2}{2\sigma_1^2}\right]\cdot\frac{1}{\sqrt{2\pi}\,\sigma_2}\exp\left[-\frac{(x_2-\mu_2)^2}{2\sigma_2^2}\right]\\
&= f(x_1)\,f(x_2).\end{align}$$

Since the joint density factors into the product of the marginals, the two variables are independent.
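As a quick sanity check of this factorization, one can compare the joint density with the product of the marginals numerically (a sketch using SciPy; the particular means, variances, and evaluation points below are arbitrary):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Arbitrary illustrative parameters, with rho = 0
mu1, mu2, s1, s2 = 0.5, -1.0, 1.5, 0.7
joint = multivariate_normal(mean=[mu1, mu2], cov=[[s1**2, 0], [0, s2**2]])

# At any set of points, the joint pdf equals the product of the marginal pdfs
pts = np.array([[0.0, 0.0], [1.0, -2.0], [-0.3, 0.4]])
lhs = joint.pdf(pts)
rhs = norm(mu1, s1).pdf(pts[:, 0]) * norm(mu2, s2).pdf(pts[:, 1])
print(np.allclose(lhs, rhs))  # True: the density factorizes when rho = 0
```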






• "I was two lines slower than you! (+1)" – jbowman
• "Thank you all. Elegant proof. It's clear now. It seems to me that, given the flow of the proof, I should have asked what knowledge of zero correlation adds to knowledge of joint normality, and not the other way around." – ColorStatistics

















Answer (score 0), by Michael Hardy:
Joint normality of two random variables $X,Y$ can be characterized in either of two simple ways:

• For every pair $a,b$ of (non-random) real numbers, $aX+bY$ has a univariate normal distribution.

• There are random variables $Z_1, Z_2 \sim \text{i.i.d. } \operatorname{N}(0,1)$ and real numbers $a,b,c,d$ such that $$\begin{align} X & = aZ_1+bZ_2 \\ \text{and } Y & = cZ_1 + dZ_2. \end{align}$$

That the first of these follows from the second is easy to show. That the second follows from the first takes more work, and maybe I'll post on it soon . . .

If the second one is true, then $\operatorname{cov}(X,Y) = ac + bd,$ since $Z_1, Z_2$ are uncorrelated with unit variance.

If this covariance is $0,$ then the vectors $(a,b)$ and $(c,d)$ are orthogonal to each other. Then $X$ is a scalar multiple of the orthogonal projection of $(Z_1,Z_2)$ onto $(a,b)$, and $Y$ is a scalar multiple of the orthogonal projection onto $(c,d).$

Now conjoin the fact of orthogonality with the circular symmetry of the joint density of $(Z_1,Z_2)$: rotating the plane leaves the distribution of $(Z_1,Z_2)$ unchanged, so the distribution of $(X,Y)$ must be the same as that of two random variables obtained by projecting onto the coordinate axes instead, i.e. a scalar multiple of $Z_1$ and a scalar multiple of $Z_2$, and those are independent.
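Here is a small simulation sketch of this construction (the orthogonal coefficient vectors below are arbitrary choices satisfying $ac + bd = 0$):

```python
import numpy as np

rng = np.random.default_rng(1)  # arbitrary seed
n = 200_000
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)

# Orthogonal coefficient vectors (a,b) and (c,d): ac + bd = -2 + 2 = 0
a, b = 2.0, 1.0
c, d = -1.0, 2.0

x = a * z1 + b * z2
y = c * z1 + d * z2

print(np.cov(x, y)[0, 1])                       # ~0, matching cov(X,Y) = ac + bd
print(np.corrcoef(np.abs(x), np.abs(y))[0, 1])  # ~0 too: no nonlinear dependence
```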





