Profile Likelihood: why optimize all other parameters while tracing a profile for a particular one?

Profile likelihood is sometimes used to estimate confidence limits for parameters after an n-dimensional parameter fit to a model; it can be used, for example, instead of Monte Carlo estimation. I don't understand the intuition behind the algorithm itself. See section 4.4 of the paper "Parameter uncertainty in biochemical models described by ordinary differential equations", by Vanlier et al. (2013), Math Biosci.



Assume a model has been optimized and a minimum located. According to the algorithm, a parameter is selected and changed in small steps. After each step, all the other parameters are re-optimized at the new value of the selected parameter, and the chi-square at this re-optimized point is recorded. This is repeated until a chi-square profile is obtained. The process can be applied to each parameter in turn, and the change in chi-square can be used to define a confidence region for the particular parameter.
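
Here is a minimal sketch of that procedure in Python (my own illustration, not code from the paper; the `chi2` objective, the `theta_hat` optimum, and the choice of `scipy.optimize.minimize` are all assumptions). The profiled parameter is clamped to each value on a grid and the remaining parameters are re-fitted:

```python
import numpy as np
from scipy.optimize import minimize

def profile_parameter(chi2, theta_hat, index, values):
    """Trace the chi-square profile of one parameter.

    chi2      -- function mapping a full parameter vector to chi-square
    theta_hat -- parameter vector at the global optimum
    index     -- position of the parameter being profiled
    values    -- grid of fixed values for that parameter
    """
    free = [i for i in range(len(theta_hat)) if i != index]
    start = np.asarray(theta_hat, dtype=float).copy()
    profile = []
    for v in values:
        def restricted(p):
            theta = start.copy()
            theta[free] = p           # free parameters being re-optimized
            theta[index] = v          # profiled parameter held fixed
            return chi2(theta)
        res = minimize(restricted, start[free], method="Nelder-Mead")
        start[free] = res.x           # warm-start the next grid point
        profile.append(res.fun)       # chi-square after re-optimization
    return np.array(profile)
```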



I'd like to understand the intuition as to why the other parameters must be re-optimized as we profile the selected parameter. Why, for example, couldn't we just change the parameter (leaving the other parameters fixed) and observe how the chi-square changes away from the optimum? Wouldn't that tell us how the curvature changes, and therefore give us information on how confident we are in the parameter?

Tags: profile-likelihood
asked 2 hours ago by rhody

1 Answer

You can think of the profile confidence interval as an inversion of the likelihood ratio test: you are comparing a model in which your parameter of interest is allowed to vary against a set of nested models in which the parameter of interest is fixed. Your confidence interval is the set of fixed values at which the likelihood ratio test fails to reject. In the likelihood ratio test, you compare the likelihoods at the MLEs of the two models, so you must optimize all free parameters of both the full model and the nested model. If you fix one parameter, it is very likely that the optimal values of the other parameters will change, so you can't just "recycle" them from the full model.
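
In symbols (standard notation I'm adding for concreteness, not taken from the answer): writing $\ell$ for the log-likelihood, $\theta$ for the parameter of interest and $\eta$ for the nuisance parameters, the $1-\alpha$ profile interval is

$$\left\{\, \theta_0 \;:\; 2\left[\ell(\hat\theta, \hat\eta) - \max_{\eta}\,\ell(\theta_0, \eta)\right] \le \chi^2_{1,\,1-\alpha} \,\right\}.$$

For a 95% interval, $\chi^2_{1,0.95} \approx 3.84$, so you keep every fixed value whose re-optimized chi-square lies within about $3.84$ of the global minimum; the inner $\max_{\eta}$ is precisely the re-optimization step described in the question.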



To explore this further, consider a regression model in which two covariates are highly collinear. From studying linear regression, we know that the confidence interval for either individual coefficient should be very wide, since it is hard to separate the effect of the first variable from that of the second, given their collinearity. Now, if we tried to make a profile confidence interval for the first coefficient but kept the second fixed, we would (mistakenly) get a very narrow confidence interval; this would be equivalent to fixing the second coefficient, subtracting out its effect, and then computing the confidence interval for the first coefficient without including the second covariate in the model.
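
A small simulation makes this concrete (my own illustration, not part of the original answer; the variable names are arbitrary). With two nearly identical covariates, the profiled residual sum of squares rises slowly as the first coefficient moves away from its estimate, because the second coefficient can be re-fitted to compensate; holding the second coefficient fixed makes the curve rise steeply and the interval falsely narrow:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)           # nearly collinear with x1
y = x1 + x2 + rng.normal(size=n)

X = np.column_stack([x1, x2])
b_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # joint least-squares fit
rss_min = np.sum((y - X @ b_hat) ** 2)

for b1 in b_hat[0] + np.linspace(-1, 1, 5):
    # profiled: re-fit b2 for this fixed b1 (closed form, one coefficient)
    b2 = np.dot(x2, y - b1 * x1) / np.dot(x2, x2)
    rss_prof = np.sum((y - b1 * x1 - b2 * x2) ** 2)
    # naive: hold b2 at its joint optimum
    rss_naive = np.sum((y - b1 * x1 - b_hat[1] * x2) ** 2)
    print(f"b1={b1:+.2f}  profiled dRSS={rss_prof - rss_min:9.2f}  "
          f"naive dRSS={rss_naive - rss_min:9.2f}")
```

The profiled curve stays nearly flat over this range while the naive curve shoots up, which is exactly the falsely narrow interval described above.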



          In a nutshell, you need to allow the other parameters to vary to account for the fact that the uncertainty in your parameter of interest may be tied to the uncertainty in other parameters in your model.

answered 2 hours ago by Cliff AB (edited 1 hour ago)