Are uncertainties higher than measured values realistic?
Whenever I measure a positive quantity (e.g. a volume) there is some uncertainty related to the measurement. The uncertainty will usually be quite low, e.g. lower than 10%, depending on the equipment. However, I have recently seen uncertainties (due to extrapolation) larger than the measurements, which seems counter-intuitive since the quantity is positive.
So my questions are:
- Do uncertainties larger than the measurements make sense?
- Or would it be more sensible to "enforce" an uncertainty no higher than the measurement?
(The word "measurement" might be poorly chosen in this context if we are including extrapolation.)
measurements error-analysis
It is very bad practice to extrapolate far beyond your measurements. I suggest you take measurements in the range of the extrapolations, or accept that the farther you extrapolate, the more uncertainty you will have to deal with.
– David White
2 hours ago
I suggest the Feldman–Cousins paper for a detailed technical understanding of what's going on here: arxiv.org/abs/physics/9711021
– Sean E. Lake
15 mins ago
asked 5 hours ago
Thomas
4 Answers
Indeed, uncertainties that large don't really make sense.
In reality, we have some probability distribution for the parameter we're describing. Uncertainty is an attempt to summarize this distribution with two numbers, usually the mean and standard deviation.
This is only useful if the uncertainties are small, because often you'll end up combining many similarly-sized uncertainties (e.g. by averaging), and the central limit theorem will kick in, making your final distribution very nearly Gaussian. The mean and standard deviation of this Gaussian depend only on the means and standard deviations of the pieces; all other information is irrelevant.
But if you're looking at just a single quantity with a very broad distribution, knowing the standard deviation alone is not useful. At that point it's probably better to give a 95% confidence interval instead. Of course, the bottom of that interval would never be negative for a physical volume.
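The central-limit-theorem argument above can be checked numerically. A minimal sketch (my own illustration, not the answerer's code, with arbitrary sample sizes): average many similarly-sized uniform "measurements" and the distribution of the average becomes nearly Gaussian, with a standard deviation predicted purely from the standard deviations of the pieces.

```python
# Sketch: averaging many similarly-sized uncertainties drives the result
# toward a Gaussian (central limit theorem), so mean and standard deviation
# become an adequate two-number summary.
import random
import statistics

random.seed(0)

N_PIECES = 50      # number of measurements averaged together
N_TRIALS = 20000   # repeat the experiment to see the distribution of the mean

# Each "measurement" is uniform on [0, 1): mean 0.5, std = 1/sqrt(12) ~ 0.2887.
means = [
    statistics.fmean(random.random() for _ in range(N_PIECES))
    for _ in range(N_TRIALS)
]

# CLT prediction: the average is ~Gaussian with std = 0.2887 / sqrt(50) ~ 0.0408,
# regardless of the (non-Gaussian) shape of the individual measurements.
print(statistics.fmean(means))   # close to 0.5
print(statistics.stdev(means))   # close to 0.0408
```

The predicted spread depends only on the means and standard deviations of the pieces, which is exactly why those two numbers suffice in the small-uncertainty regime.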
answered 4 hours ago
knzhou
Uncertainties larger than measured values are common, especially in measurements where the value is expected to be (close to) zero. Examples include values for the neutrino mass, the difference between the $g$-values of the electron and the positron, and the electric dipole moment of the electron.
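The point above can be made quantitative with a toy simulation (illustrative numbers, not real neutrino data): when the true value is exactly zero, the measured value comes out smaller in magnitude than its one-sigma uncertainty most of the time.

```python
# Sketch: for a quantity whose true value is 0, a Gaussian measurement
# lands within +/- 1 sigma of zero about 68% of the time, so
# "value smaller than uncertainty" is the typical outcome, not a pathology.
import random

random.seed(1)

SIGMA = 1.0  # assumed measurement uncertainty per trial (arbitrary units)
# Simulate repeated measurements of a quantity whose true value is exactly 0.
measurements = [random.gauss(0.0, SIGMA) for _ in range(10000)]

# Fraction of measurements whose magnitude is below the 1-sigma uncertainty:
frac = sum(abs(m) < SIGMA for m in measurements) / len(measurements)
print(frac)  # roughly 0.68, the 1-sigma coverage of a Gaussian
```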
edited 23 mins ago
answered 3 hours ago
Pieter
If your extrapolated results have more than 100% uncertainty, which is possible, it means that either your sample data was unrepresentative of the population, or your extrapolation is wrong. Depending on what your experiment is, a linear extrapolation might lead to vastly incorrect results.
I'm not sure what you mean by "enforce", but an uncertainty that high should tell you something is probably wrong. It makes sense to choose a cut-off point for your uncertainty, if that's what you mean by "enforce", but your first strategy should probably be to examine how you're extrapolating.
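How quickly extrapolation uncertainty grows can be seen from the standard prediction-error formula for a least-squares line. A minimal sketch with hypothetical data (the `xs`/`ys` values and the function name `prediction_se` are mine, for illustration only):

```python
# Sketch: the standard error of a linear-fit prediction at x grows like
# sqrt(1 + 1/n + (x - xbar)^2 / Sxx), so predictions far outside the
# measured range can carry uncertainties larger than the values themselves.
import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical measured x-range
ys = [1.1, 1.9, 3.2, 3.9, 5.1]   # hypothetical measurements
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
intercept = ybar - slope * xbar
# Residual standard deviation of the fit (n - 2 degrees of freedom):
s = math.sqrt(
    sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)
)

def prediction_se(x):
    """Standard error of a new observation predicted at x."""
    return s * math.sqrt(1 + 1 / n + (x - xbar) ** 2 / sxx)

print(prediction_se(3.0))   # inside the data range: small
print(prediction_se(30.0))  # far extrapolation: several times larger
```

The `(x - xbar)^2 / Sxx` term dominates far from the data, which is the mathematical form of "the farther you extrapolate, the more uncertainty you will have to deal with".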
answered 4 hours ago
rotowhirl
Let's say you know roughly the size of the thing you need to locate: about 1 m. But the GPS gives you an uncertainty of 10 m.
So the coordinates of the thing are $x = 0 \pm 5;\ y = 2 \pm 5$.
answered 4 hours ago
user591849
Thomas is a new contributor. Be nice, and check out our Code of Conduct.