Bias of MLE of simple PDF

Given a sample $x_1, x_2, \ldots, x_n$ from the pdf
$$
f(x; \theta) = (\theta + 1) x^\theta,
$$
where $0 < x < 1$ and $\theta > -1$ is unknown. What is the bias of the MLE of $\theta$?

I've found the MLE to be
$$
\hat\theta = \frac{-n}{\sum_{i=1}^n \log(x_i)} - 1,
$$
but I'm stuck on finding the bias of this estimator. The sum in the denominator makes it hard to take the expected value. I think there is something simple here that I am missing...
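As a quick numerical sanity check of the MLE formula (a sketch using only the pdf above; the sampler follows from inverting the CDF $F(x) = x^{\theta+1}$):

```python
import math
import random

def sample(theta, n, rng):
    # F(x) = x^(theta+1) on (0, 1), so inverse-CDF sampling gives X = U^(1/(theta+1)).
    # Using 1 - rng.random() keeps U in (0, 1] and avoids log(0) below.
    return [(1.0 - rng.random()) ** (1.0 / (theta + 1.0)) for _ in range(n)]

def mle(xs):
    # The MLE derived above: theta_hat = -n / sum(log x_i) - 1.
    return -len(xs) / sum(math.log(x) for x in xs) - 1.0

rng = random.Random(0)
xs = sample(2.0, 100_000, rng)
print(mle(xs))  # close to the true theta = 2.0
```

With $n = 100{,}000$ the estimate lands within a few hundredths of the true $\theta$, which at least confirms the formula before worrying about its bias.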










Tags: self-study, mathematical-statistics, maximum-likelihood, bias

asked 1 hour ago by bill_e
1 Answer (accepted, score 3)
Make the substitution
$$
Y_i = -\log X_i.
$$
It is easy to show that $Y_i$ has density
$$
f(y; \theta) = (\theta + 1) e^{-(\theta + 1) y} I(y > 0),
$$
so $Y_i \sim \operatorname{Exponential}(\theta + 1)$. It follows that $\sum_i -\log(X_i) \sim \operatorname{Gamma}(n, \theta + 1)$. Recall that if $Z \sim \operatorname{Gamma}(\alpha, \beta)$ then $E\,Z^{-1} = \frac{\beta}{\alpha - 1}$. Therefore
$$
E_\theta(\widehat\theta) = \cdots = \frac{n(\theta + 1)}{n - 1} - 1
$$
and the bias is then easily shown to be $(\theta + 1)/(n - 1)$.
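The closed-form bias can be double-checked by Monte Carlo (a sketch, again sampling via the inverse CDF; the parameter values here are arbitrary choices for illustration):

```python
import math
import random

def theta_hat(theta, n, rng):
    # One replicate: n draws of log X = log(1 - U) / (theta + 1), then the MLE
    # theta_hat = -n / sum(log x_i) - 1.
    s = sum(math.log(1.0 - rng.random()) / (theta + 1.0) for _ in range(n))
    return -n / s - 1.0

rng = random.Random(1)
theta, n, reps = 1.5, 10, 200_000
mean_hat = sum(theta_hat(theta, n, rng) for _ in range(reps)) / reps
print(mean_hat - theta)        # empirical bias
print((theta + 1) / (n - 1))   # theoretical bias (theta + 1)/(n - 1)
```

With $n = 10$ and $\theta = 1.5$ the two printed values agree to a couple of decimal places, matching $(\theta + 1)/(n - 1) \approx 0.278$.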






• "It's easy to show..." Could you elaborate here a little bit? What was your thought process? Is the original pdf of the $x$'s some sort of standard distribution (with a known identity for $Y = -\log X$) I ought to know? Or did you intuit trying the $Y = -\log X$ transformation and then work out the distribution for $Y$ from scratch?
  – bill_e, 1 hour ago







• @bill_e My logic was that, because this looks like a homework problem, it must be the case that $\sum_i -\log(X_i)$ has a simple distribution. Hence $-\log(X_i)$ probably has some kind of gamma distribution, because if it didn't then the question would be difficult (intro math-stat courses teach that sums of iid gammas are gammas, and that is the only identity of this sort that seems like it might be useful). Anyway, the original pdf is a $\operatorname{Beta}(\theta + 1, 1)$, but that fact didn't motivate the approach I took.
  – guy, 59 mins ago







• @bill_e Also, I worked out the density by doing a change of variables (i.e., substitute $e^{-y}$ for $x$ and multiply by $\left|\frac{d}{dy} e^{-y}\right|$).
  – guy, 55 mins ago
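Written out, that change of variables is (keeping the notation of the answer):

```latex
f_Y(y) = f_X(e^{-y}) \left| \frac{d}{dy} e^{-y} \right|
       = (\theta + 1) e^{-\theta y} \cdot e^{-y}
       = (\theta + 1) e^{-(\theta + 1) y}, \qquad y > 0,
```

which is exactly the exponential density used in the answer.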










• Ahh, that makes sense, thanks!
  – bill_e, 39 mins ago









