How to handle a "self-defeating" prediction model?
I was watching a presentation by an ML specialist from a major retailer, who had developed a model to predict out-of-stock events.
Let's assume for a moment that, over time, their model becomes very accurate. Wouldn't that somehow be "self-defeating"? That is, if the model truly works well, they will be able to anticipate out-of-stock events and avoid them, eventually reaching a point where they have few or no out-of-stock events at all. But in that case, there won't be enough recent historical data to run the model on, or the model gets derailed, because the same causal factors that used to indicate a stock-out no longer do so.
What are the strategies for dealing with such a scenario?
Additionally, one could envision the opposite situation: a recommender system might become a "self-fulfilling prophecy", with an increase in sales of item pairs driven by the output of the recommender itself, even if the two items aren't really that related.
It seems to me that both are the result of a feedback loop between the output of the predictor and the actions taken based on it. How can one deal with situations like this?
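To make that feedback loop concrete, here is a minimal, hypothetical simulation of the stock-out case (all numbers, names, and thresholds below are made up for illustration): a near-perfect predictor is acted upon, and the positive class almost disappears from the data generated afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)

def observed_stockout_rate(n_items, act_on_predictions, threshold=0.5):
    """Toy one-period inventory process (all quantities are made up).

    `demand_signal` drives the true stock-out risk; we assume the model
    sees the same signal, so its predictions are essentially perfect.
    If we act on the predictions, every predicted stock-out is prevented
    by an early restock, so it never shows up in the recorded data.
    """
    demand_signal = rng.uniform(0, 1, n_items)    # feature the model uses
    would_stock_out = demand_signal > threshold   # what would happen untouched
    predicted = demand_signal > threshold         # assumed-accurate model
    if act_on_predictions:
        recorded = would_stock_out & ~predicted   # prevented events vanish
    else:
        recorded = would_stock_out
    return recorded.mean()

print("before deployment:", observed_stockout_rate(10_000, act_on_predictions=False))  # ~0.5
print("after deployment: ", observed_stockout_rate(10_000, act_on_predictions=True))   # 0.0
```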
machine-learning predictive-models
asked 4 hours ago by Alex (edited 11 mins ago)
(+1) In some analogous situations involving higher education, people talk about a model "cannibalizing itself." College officials, using models, award financial aid to achieve certain enrollment- and financial-aid-related goals, only to find that, as a result, prospective students' enrollment decisions eventually become less and less determined by, or predictable from, the financial aid award.
– rolando2, 36 mins ago
2 Answers
Presumably you can track when restock events happen. Then it's just a matter of arithmetic to work out when the stock would have been depleted had the model not been used to issue a restock alert.
– Sycorax, answered 3 hours ago
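A minimal sketch of what that arithmetic could look like, assuming a simple data layout (the function name, arguments, and numbers below are illustrative assumptions, not anything from the answer): replay the observed sales while ignoring the restocks the model triggered, and record the day the stock would have run out.

```python
def counterfactual_stockout_day(starting_stock, daily_sales, scheduled_restocks=None):
    """First day the item would have run out had the model's restock alerts never fired.

    daily_sales: observed units sold per day (these sales happened because the
        alerts kept the shelf stocked).
    scheduled_restocks: {day: units} of replenishments that would have occurred
        anyway (regular schedule); these are still applied.
    Returns None if stock never depletes. Data layout is an illustrative assumption.
    """
    scheduled_restocks = scheduled_restocks or {}
    stock = starting_stock
    for day, sold in enumerate(daily_sales):
        stock += scheduled_restocks.get(day, 0)  # keep non-model restocks
        stock -= sold
        if stock <= 0:
            return day
    return None

# 100 units on hand, 25 sold per day; the model restocked on day 3, but without
# that alert the shelf would have emptied at the end of day 3:
print(counterfactual_stockout_day(100, [25, 25, 25, 25, 25]))  # -> 3
```

Labeling historical periods this way keeps a supply of "would-have-stocked-out" positives even when actual stock-outs become rare.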
Your scenario bears a lot of resemblance to the Lucas Critique in economics. In machine learning, this is called "dataset shift".
You can overcome it, as @Sycorax says, by explicitly modeling it.
– generic_user, answered 2 mins ago
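One hedged sketch of what "explicitly modeling it" might look like in practice (the column names, tiny table, and choice of counterfactual labels are my own assumptions): train on the reconstructed would-have-stocked-out label rather than the observed, post-intervention outcome, so that acting on the predictions does not drain the positive class out of future training data.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical training table: one row per item-week. `alert_fired` records the
# intervention; `counterfactual_stockout` is the reconstructed label (e.g. from
# the replay arithmetic in the other answer), not the observed outcome.
df = pd.DataFrame({
    "demand_trend":             [0.2, 0.9, 0.4, 0.8, 0.1, 0.95],
    "days_since_replenishment": [2,   9,   4,   8,   1,   10  ],
    "alert_fired":              [0,   1,   0,   1,   0,   1   ],
    "counterfactual_stockout":  [0,   1,   0,   1,   0,   1   ],
})

X = df[["demand_trend", "days_since_replenishment"]]
y = df["counterfactual_stockout"]          # label = what WOULD have happened

# Training against the counterfactual target keeps its distribution stable even
# after interventions drive the observed stock-out rate to (nearly) zero; the
# alert_fired column is kept so intervened rows can be audited or reweighted.
model = LogisticRegression().fit(X, y)
print(model.predict_proba(X)[:, 1])        # predicted stock-out risk per row
```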