Calculating Fisher Information for Bernoulli rv

Let $X_1,\dots,X_n$ be Bernoulli distributed with unknown parameter $p$.

My objective is to calculate the information contained in the first observation of the sample.

I know that the pmf of $X$ is given by
$$f(x\mid p)=p^x(1-p)^{1-x},$$
and my book defines the Fisher information about $p$ as
$$I_X(p)=E_p\left[\left(\frac{d}{dp}\log\left(p^x(1-p)^{1-x}\right)\right)^2\right]$$

After some calculations, I arrive at
$$I_X(p)=E_p\left[\frac{x^2}{p^2}\right]-2E_p\left[\frac{x(1-x)}{p(1-p)}\right]+E_p\left[\frac{(1-x)^2}{(1-p)^2}\right]$$

I know that the Fisher information about $p$ of a Bernoulli RV is $\frac{1}{p(1-p)}$, but I don't know how to get rid of the $x$-values, since I'm calculating an expectation with respect to $p$, not $X$. Any clues?

statistics probability-distributions expected-value
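
To sanity-check the target value numerically, one can approximate the defining expectation by simulation. A minimal sketch, assuming numpy is available (the seed and $p=0.3$ are arbitrary choices):

    import numpy as np

    # Monte Carlo estimate of I_X(p) = E_p[(d/dp log f(X|p))^2]
    # for X ~ Bernoulli(p), compared with the known value 1/(p(1-p)).
    rng = np.random.default_rng(0)          # arbitrary seed
    p = 0.3                                 # arbitrary parameter value
    x = rng.binomial(1, p, size=1_000_000)  # Bernoulli(p) draws
    score = x / p - (1 - x) / (1 - p)       # d/dp log f(x|p) at each draw
    print(np.mean(score**2))                # approx. 4.76
    print(1 / (p * (1 - p)))                # 4.7619...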










  • $+1$ for showing your work to derive the correct $I_X(p)$ – Ahmad Bazzi, 4 hours ago

2 Answers

\begin{equation}
I_X(p)=E_p\left[\frac{x^2}{p^2}\right]-2E_p\left[\frac{x-x^2}{p(1-p)}\right]+E_p\left[\frac{x^2-2x+1}{(1-p)^2}\right]\tag{1}
\end{equation}
For a Bernoulli RV, we know
\begin{align}
E(x) &= 0\cdot\Pr(X=0)+1\cdot\Pr(X=1)=p\\
E(x^2) &= 0^2\cdot\Pr(X=0)+1^2\cdot\Pr(X=1)=p
\end{align}
Substituting these into $(1)$, we get
\begin{equation}
I_X(p)=\frac{p}{p^2}-2\,\frac{p-p}{p(1-p)}+\frac{p-2p+1}{(1-p)^2}
=\frac{1}{p}+\frac{1-p}{(1-p)^2}
=\frac{1}{p}+\frac{1}{1-p}
=\frac{1}{p(1-p)}
\end{equation}
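
Because a Bernoulli variable has the two-point support $\{0,1\}$, each expectation above is a finite sum, so the whole computation can be checked exactly. A minimal Python sketch (the function name is mine):

    def fisher_info_bernoulli(p):
        """E_p[(d/dp log f(X|p))^2], summed exactly over the support {0, 1}."""
        info = 0.0
        for x in (0, 1):
            pmf = p**x * (1 - p)**(1 - x)      # f(x|p)
            score = x / p - (1 - x) / (1 - p)  # d/dp log f(x|p)
            info += score**2 * pmf
        return info

    print(fisher_info_bernoulli(0.3))  # 4.7619..., i.e. 1/(0.3 * 0.7)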






  • thanks @MichaelHardy – Ahmad Bazzi, 2 hours ago

  • I always thought that when an expectation is written as $E_p$, for example, it means we treat the other variables (not $p$) as constants. Apparently not? – DavidS, 2 hours ago

  • Yes, I understand what you mean. I usually do not write $E_p(\cdot)$; I write $E_{X\mid p}(\cdot)$, where the expectation is taken with respect to the samples given the parameters you want to estimate. – Ahmad Bazzi, 2 hours ago

  • The expectation is taken with respect to a random variable (whether it is $X$, $Y$ or any other r.v.); the subscript in $E_p$ only marks the fact that your model for the distribution of $X$ is not fully specified, but is uncertain up to a parameter $p$, and thus the corresponding expectations may depend on that value $p$. – Alejandro Nasif Salum, 2 hours ago

  • Also, if you write $I_X$, then it is the information of the random variable $X$ that has to be calculated, and so the formula should be in terms of $X$, not $x$. Otherwise, you're finding $I_x$. – Alejandro Nasif Salum, 2 hours ago

Actually, the Fisher information of $X$ about $p$ is
$$I_X(p)=E_p\left[\left(\frac{d}{dp}\log f(X\mid p)\right)^2\right],$$
that is,
$$I_X(p)=E_p\left[\left(\frac{d}{dp}\log\left(p^X(1-p)^{1-X}\right)\right)^2\right].$$

I've only changed every $x$ to $X$, which may seem like a subtlety, but then you get
$$I_X(p)=E_p\left(\frac{X^2}{p^2}\right)-2E_p\left(\frac{X(1-X)}{p(1-p)}\right)+E_p\left(\frac{(1-X)^2}{(1-p)^2}\right).$$

The expectation is there because $X$ is a random variable. So, for instance,
$$E_p\left(\frac{X^2}{p^2}\right)=\frac{E_p\left(X^2\right)}{p^2}=\frac{p}{p^2}=\frac{1}{p}.$$

Here I used the fact that $E_p(X^2)=p$, which can easily be seen from
$$E_p(X^2)=0^2\cdot p_X(0)+1^2\cdot p_X(1)=0^2(1-p)+1^2p=p,$$
or from the observation that $X\sim \mathrm{Be}(p)\implies X^n\sim \mathrm{Be}(p)$ as well. Then you can go on with the remaining terms.

Additionally, an equivalent formula can be proved for $I_X(p)$, provided the second derivative of $\log f$ is well defined:
$$I_X(p)=-E_p\left(\frac{d^2}{dp^2}\log f(X\mid p)\right),$$
and it often yields simpler expressions. In this case, for instance, you get
$$I_X(p)=-E_p\left(\frac{d^2}{dp^2}\log p^X(1-p)^{1-X}\right)
=-E_p\left(-\frac{X}{p^2}-\frac{1-X}{(1-p)^2}\right)
=\frac{E_p(X)}{p^2}+\frac{E_p(1-X)}{(1-p)^2}
=\frac{p}{p^2}+\frac{1-p}{(1-p)^2}
=\frac{1}{p}+\frac{1}{1-p}
=\frac{1}{p(1-p)},$$
as desired.
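
Both routes are easy to verify symbolically as well. A sketch using sympy (assuming it is available), with the Bernoulli expectation written out as the two-point sum:

    import sympy as sp

    p = sp.symbols('p', positive=True)
    x = sp.symbols('x')
    logf = x * sp.log(p) + (1 - x) * sp.log(1 - p)  # log f(x|p)

    def E(expr):
        # Expectation over X ~ Bernoulli(p): weight x=1 by p, x=0 by 1-p.
        return expr.subs(x, 1) * p + expr.subs(x, 0) * (1 - p)

    I_score = sp.simplify(E(sp.diff(logf, p) ** 2))  # E_p[(d/dp log f)^2]
    I_curv = sp.simplify(-E(sp.diff(logf, p, 2)))    # -E_p[d^2/dp^2 log f]
    print(I_score, I_curv)  # both equal 1/(p*(1-p))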





