How to handle a “self defeating” prediction model?

I was watching a presentation by an ML specialist from a major retailer who had developed a model to predict out-of-stock events.



Let's assume for a moment that, over time, their model becomes very accurate. Wouldn't that somehow be "self-defeating"? That is, if the model truly works well, they will be able to anticipate out-of-stock events and avoid them, eventually reaching a point where they have few or no out-of-stock events at all. But then there won't be enough recent historical data to run their model on, or their model gets derailed, because the same causal factors that used to indicate a stock-out event no longer do so.
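
To make the worry concrete, here is a deliberately crude simulation (not from the presentation; every number is invented): each year the model prevents a growing share of the stock-outs it can foresee, and the pool of positive training examples shrinks toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def yearly_stockouts(years=5, items=1000, intervene=True):
    """Count stock-out events per year when a model prevents an
    ever-growing fraction of the events it can foresee."""
    risk = rng.uniform(0, 0.3, size=items)  # latent per-item risk
    counts = []
    for year in range(years):
        events = rng.random(items) < risk
        if intervene:
            # The model improves each year, covering more of the
            # riskiest items and averting their stock-outs.
            coverage = min(0.5 + 0.15 * year, 1.0)
            preventable = risk >= np.quantile(risk, 1.0 - coverage)
            events &= ~preventable
        counts.append(int(events.sum()))
    return counts

print(yearly_stockouts(intervene=False))  # roughly 150 events every year
print(yearly_stockouts(intervene=True))   # positives dwindle toward zero
```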



What are the strategies for dealing with such a scenario?



Additionally, one could envision the opposite situation: a recommender system might become a "self-fulfilling prophecy", with an increase in sales of item pairs driven by the output of the recommender system itself, even if the two items aren't really that related.



It seems to me that both are the result of a feedback loop between the output of the predictor and the actions taken based on it. How can one deal with situations like this?










machine-learning predictive-models






asked 4 hours ago, edited 11 mins ago
Alex











  • (+1) In some analogous situations involving higher education, people talk about a model "cannibalizing itself." College officials, using models, award financial aid to achieve certain enrollment- and financial-aid-related goals, only to find that, as a result, prospective students' enrollment decisions eventually become less and less determined by, or predictable from, the financial aid award.
    – rolando2
    36 mins ago

2 Answers

Presumably you can track when restock events happen. Then it's just a matter of arithmetic to work out when the stock would have been depleted had the model not been used to issue a restock alert.
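
As a minimal sketch of that arithmetic (the function and the example numbers are mine, not from any actual retail system): replay the observed demand twice, once with the model-triggered restocks applied and once without, and record when the no-intervention trajectory hits zero.

```python
def stockout_day(start_stock, daily_demand, restocks):
    """Return the first day index on which stock runs out, or None.

    start_stock  -- units on hand at day 0
    daily_demand -- observed units demanded per day
    restocks     -- {day_index: units} of model-triggered restocks
    """
    stock = start_stock
    for day, demand in enumerate(daily_demand):
        stock += restocks.get(day, 0)  # apply any restock arriving that day
        stock -= demand
        if stock <= 0:
            return day
    return None

demand = [30, 25, 20, 15, 20, 10]
actual = stockout_day(100, demand, restocks={3: 80})  # None: the alert worked
would_be = stockout_day(100, demand, restocks={})     # 4: would have run out
```

The counterfactual stock-out days recovered this way can serve as positive labels when retraining, so the training data doesn't dry up merely because the model keeps preventing the real events.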






answered 3 hours ago
Sycorax

Your scenario bears a lot of resemblance to the Lucas critique in economics: relationships estimated from historical data stop holding once policy starts acting on them. In machine learning, this is called "dataset shift".

You can overcome it, as @Sycorax says, by explicitly modeling it.
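
As a toy illustration of what "explicitly modeling it" can look like (all names and numbers here are invented): log whether the system intervened, and include that flag as a covariate when refitting, so the original driver's effect isn't masked by the interventions it now triggers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# x drives stock-out risk; after deployment, interventions fire mostly
# when x is high, so interventions are strongly confounded with x.
x = rng.normal(size=n)
intervened = (x + rng.normal(scale=0.5, size=n) > 1.0).astype(float)

# True process: an intervention sharply cuts the risk that x creates.
logit = 1.5 * x - 4.0 * intervened - 1.0
stockout = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# A naive refit ignores the intervention and learns a distorted effect
# for x; conditioning on the flag keeps its coefficient near the true 1.5.
naive = LogisticRegression().fit(x.reshape(-1, 1), stockout)
adjusted = LogisticRegression().fit(np.column_stack([x, intervened]), stockout)
print(naive.coef_, adjusted.coef_)
```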






answered 2 mins ago
generic_user