What is the difference between repeatability, replicability and reproducibility?
I have seen many instances where authors use the terms "reproducibility" and "replicability" interchangeably in the social and behavioural sciences. Sometimes they distinguish between the "repeatability" of experiments (same measurand, same measurement conditions) and "replicability" (same measurand, different conditions). Where the three concepts are differentiated, there usually seems to be an inherent hierarchy between them:

repeatability < reproducibility < replicability

Here a successful replication means that the same finding has been achieved with different data (or sometimes different methods), while reproducibility means that it is possible to obtain the same results given the data and analytical means of the original study.

However, it appears to me that at least in computer science this is different.[1]

I am not aware of the situation in other fields, so I am curious which definitions of repeatability, reproducibility and replicability are used in other disciplines.

What are the definitions most commonly associated with repeatability, reproducibility and replicability in your field?

Are the definitions the same, but the concepts have substantially different meanings between fields, e.g. because pseudo-random numbers generated in computer experiments differ from the true randomness in biological experiments?
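To make the pseudo-randomness point concrete, here is a minimal Python sketch (my own illustration, not from the reference [1]): a seeded computational "experiment" can be rerun bit-identically, which has no analogue for a physically random measurement.

```python
# Minimal sketch (illustrative): a seeded pseudo-random "experiment" is
# exactly repeatable on rerun; a physically random measurement never is.
import random

def experiment(seed: int) -> float:
    rng = random.Random(seed)                 # deterministic given the seed
    samples = [rng.gauss(0.0, 1.0) for _ in range(1_000)]
    return sum(samples) / len(samples)        # the "measured" sample mean

assert experiment(42) == experiment(42)       # bit-identical repetition
print(experiment(42), experiment(43))         # different seed, different "run"
```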
reproducible-research
asked 3 hours ago by non-numeric_argument (last edited 3 hours ago)
2 Answers
The American Statistical Association (ASA) has developed Reproducible Research Recommendations. The purpose of these recommendations is to create transparent science that can be recreated independently of the original data creators and analysts. The recommendations were motivated by several high-profile examples of studies that were either retracted or could not be repeated. The ASA gives definitions for Reproducibility and Replicability:

Reproducibility: A study is reproducible if you can take the original data and the computer code used to analyze the data and reproduce all of the numerical findings from the study. This may initially sound like a trivial task, but experience has shown that it's not always easy to achieve this seemingly minimal standard.

Replicability: This is the act of repeating an entire study, independently of the original investigator, without the use of the original data (but generally using the same methods).

I do not know how repeatability fits into these definitions, or whether it has a formal definition here at all. Note also that this hierarchy seems to differ from yours: reproducibility is taking existing data and recreating the same results using the described methods, while replicability is conducting a new experiment and reaching the same conclusions.
On a personal note, I have tried to reproduce studies while working with a statistician, and most life-science journal articles do not include enough detail to exactly recreate the analysis unless they include their script (e.g., the Python, SAS, or R code used to analyze the data). We were trying to recreate simple linear regressions and ANOVAs to find case studies for undergraduate statistics courses, but often got different regression coefficients and test statistics. For example, people often make assumptions about NA values or transformations that are not described in their formal write-up; the sketch below shows how much such an undocumented choice can matter.
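A minimal, self-contained illustration of that point (the data are invented for this sketch, not taken from any study we examined): the same six observations yield different regression coefficients depending on an undocumented NA decision.

```python
# Toy illustration: the fitted slope and intercept depend on how a
# single NA in the response is handled.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, np.nan, 8.3, 9.8, 12.2])   # one missing response

# Choice 1: listwise deletion -- silently drop the incomplete observation.
keep = ~np.isnan(y)
slope_drop, icept_drop = np.polyfit(x[keep], y[keep], deg=1)

# Choice 2: mean imputation -- silently fill the NA with the observed mean.
y_filled = np.where(np.isnan(y), np.nanmean(y), y)
slope_fill, icept_fill = np.polyfit(x, y_filled, deg=1)

print(f"drop NA : slope={slope_drop:.3f}, intercept={icept_drop:.3f}")
print(f"fill NA : slope={slope_fill:.3f}, intercept={icept_fill:.3f}")
# The coefficients differ, so a write-up that never states its NA rule
# cannot be exactly reproduced from the raw data alone.
```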
For examples of replicability problems, search for the psychology reproducibility problems (and notice the inconsistent use of the r-terms even there).
For examples of reproducibility problems, this lecture describes how a forensic statistician uncovered the Duke cancer scandal a few years back. Although outright fraud was occurring, there were other serious issues with the data analyses.
answered 2 hours ago by Richard Erickson
In computational research I would argue that something is "reproducible" if I can rerun your analysis with your code and get the same answer you did. This might sound trivial, as if the only differences could come from pseudo-random number generation, but in fact this is far from the truth:

- Many computational studies do not provide their code.
- Even where they do, recreating the exact compilation/execution environment is next to impossible:
  - Are you using the same compiler, and the same version of it?
  - Are you using the same numeric libraries (BLAS/MKL), and the same versions?
  - Does your system use the same precision in its floating-point calculations?
  - And so on (the sketch after this list shows one way these differences surface).
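As a hedged illustration of the floating-point point (my own sketch, not the answerer's): even with the pseudo-random inputs fixed by a seed, merely changing the order of accumulation, which is exactly what a different BLAS build or thread count can do, perturbs the low-order digits of a result.

```python
# Sketch: floating-point addition is not associative, so the same numbers
# summed in a different order (as a different library or thread count
# might do) generally give a slightly different result.
import random

random.seed(0)                                  # the inputs themselves are fixed
values = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

forward = sum(values)                           # one accumulation order
backward = sum(reversed(values))                # same numbers, reversed order

print(forward == backward)                      # typically False
print(f"difference: {forward - backward:.3e}")  # tiny, but nonzero
```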
If you give me your code and enough information for me to produce an identical environment, or (even better) your code is insensitive to the environment, then your research is Repeatable.
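One hedged sketch of what "enough information" might include (the printed fields are my own choice, and numpy only stands in for whatever numeric libraries the analysis actually uses):

```python
# Sketch: record the environment details listed above so a reader can at
# least attempt to rebuild an identical environment.
import platform
import sys

import numpy as np  # example dependency only

print("python  :", sys.version.split()[0])  # interpreter version
print("platform:", platform.platform())     # OS / architecture
print("numpy   :", np.__version__)          # library version
np.show_config()                            # which BLAS/LAPACK numpy was built against
```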
If you describe your study sufficiently well that I can re-implement it from scratch, without looking at your code, and still get the same answer, then it is Reproducible.

If I can arrive at the same conclusions as you, just from a description of the study's aims, then it is Replicable.
answered 35 mins ago by Ian Sudbery
I think this video about p-hacking is quite interesting: youtube.com/watch?v=42QuXLucH3Q. I don't understand why most published research doesn't present the code or raw data; that's why most research is not replicable. That is also what happened with the research done by Schön, which was reported not to be reproducible and turned out to be fabricated: en.wikipedia.org/wiki/Sch%C3%B6n_scandal
– Monika, 8 mins ago