Are opinion polls always representative of the extreme opinions?












30 votes, 3 favorites












Opinion polls involve a choice: the people being asked to take the poll can say yes or no.



I suspect that this introduces so much self-selection bias that most opinion polls are by and large misleading, representing the views of the extreme rather than the average. Am I correct in this position?



Here's what I mean. If you, say, call up a person and ask for five minutes of their time to do a poll, most likely they'll say no thank you. Most people ... just aren't interested in doing polls, right?



So if a person actually says yes, that suggests the person is unusual in the sense that they actually want to take the poll. Why would that be? Well, there are many possible reasons (lonely elderly people, perhaps?), but certainly one possible reason is that this person happens to have an extreme opinion on the topic they are about to be polled on, and therefore has an increased interest in making that opinion known. The average person, holding the average opinion, will therefore be underrepresented in such polls, while the extreme person will be eager to express their extreme opinion.



For example, if I received a call today from a pollster asking a few questions on "LGBT" issues, I'd pass. I have no strong opinions on the topic; I am not LGBT myself, nor do I hate LGBT people. It's just not relevant to my life, so I have little to say. However, if I happened to be LGBT myself, or if I happened to be somebody who strongly disliked the LGBT community, I might very well be interested in taking such a poll. Hence the poll becomes skewed toward more extreme opinions.























  • 23




    This is an interesting question. But the actual problem gets much deeper. There are a lot more problems than just sampling bias which can make political opinion polls unreliable. Leading questions, leading answer options, context. This clip from the UK TV show Yes Prime Minister is satire, but very close to reality.
    – Philipp♦
    Aug 23 at 11:06







  • 16




    I've heard more about the opposite problem. Some extremist views may be socially unacceptable, so they will be underrepresented in polls. For instance, if a poll asked whether all the Jews should be sent to gas chambers, the number of people wanting to do so would be underrepresented, because there is (rightfully) a social stigma attached to holding such views, so they may decline to answer or say they are undecided instead.
    – liftarn
    Aug 23 at 11:38






  • 3




    You could even generalize this for all political actions. If I have no strong opinion on LGBT rights, I probably won't agitate for or against them. The discourse is then only driven by the extreme poles, decision-making is done by the opinion in power...
    – Cliff
    Aug 23 at 13:34






  • 1




    @AzorAhai - though I suspect your question was of a more rhetorical nature.... no, most opinion polls are not primarily binary in their inquiries.
    – PoloHoleSet
    Aug 23 at 20:18






  • 1




    @PoloHoleSet I'm not a pollster or politics geek so I didn't know what was standard, but Nemsia seemed to make a pretty definitive statement
    – Azor Ahai
    Aug 23 at 20:18
























asked Aug 23 at 10:40
– Nemsia

















4 Answers























27 votes













That survey/poll participants are not all equally likely to answer is a well-known problem called participation bias, also known as non-response bias. A typical example:




A study of nonrespondents from the National AIDS Behavioral Survey (NABS) was conducted in 1990 to attempt to identify potential differences in participants and non‐participants that may influence estimates of sexual risk behavior. [...] Results indicate that refusers are older, attend church more often, are less likely to believe in the confidentiality of surveys, and have lower sexual self disclosure.




The effect probably depends on the topic of the poll. One (highly cited) meta-analysis has among its conclusions:




Large nonresponse biases can happen in surveys. High response rates can reduce the risks of bias. They do this less when the causes of participation are highly correlated with the survey variables. Indeed, in the studies we assembled, some surveys with low nonresponse rates have estimates with high relative nonresponse bias.




The interviewer effect is related in that some people may refuse to talk to some interviewers e.g. based on the race or age of both, or just not like to talk to people in general, creating a specific non-response bias. E.g.:




An analysis of the 2004 and 2008 [US election] phone surveys and exit polls reveals differing patterns of item non-response across the two interview modes.







So to answer your title question "Are opinion polls always representative of the extreme opinions?" (My emphasis.) I think the answer is clearly no. The non-response bias isn't necessarily equally likely to favor both extremes (and to exclude the "middle"). Depending on the study design (topic, participants, mode of survey), it can favor one extreme, both or the "middle". In fact, detecting the pattern of the non-response bias (for a given study) is a non-trivial problem. Quoting again from the meta-analysis:




We cannot rely on full or partial canceling of nonresponse biases when we subtract one subclass mean from another. The bias of the difference is a function of differences of response rates and covariances between response propensities of the subgroups and the survey variable.
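These mechanisms can be sketched in a small simulation (the population, the opinion scale, and both response-propensity functions below are hypothetical, chosen only for illustration): when willingness to respond rises with how strongly an opinion is held, both extremes are over-represented while the mean barely moves; when only one side is keen to answer, the mean itself shifts.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: opinions on a roughly -1..+1 scale, centred near 0.
population = [random.gauss(0, 0.4) for _ in range(100_000)]

def respondents(propensity):
    """People who agree to the poll; each person's probability of
    responding depends on their own opinion."""
    return [x for x in population if random.random() < propensity(x)]

true_mean = statistics.mean(population)
extreme_share = sum(abs(x) > 0.8 for x in population) / len(population)

# Case 1: holding a strong opinion (either way) makes you keener to answer.
# The mean barely moves, but extreme opinions are over-represented.
r1 = respondents(lambda x: 0.1 + 0.8 * min(abs(x), 1.0))
r1_extreme_share = sum(abs(x) > 0.8 for x in r1) / len(r1)

# Case 2: only one side is keen to answer. Now the mean itself shifts.
r2 = respondents(lambda x: min(1.0, max(0.0, 0.5 + 0.4 * x)))

print(f"true mean {true_mean:+.3f}, "
      f"case 1 mean {statistics.mean(r1):+.3f}, "
      f"case 2 mean {statistics.mean(r2):+.3f}")
print(f"share of |opinion| > 0.8: population {extreme_share:.1%}, "
      f"case 1 respondents {r1_extreme_share:.1%}")
```

This is only a toy model of the bias patterns discussed above, not of any real survey; detecting which pattern a real study suffers from is, as the meta-analysis notes, non-trivial.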




References



* Heather A. Turner (1999) Participation bias in AIDS‐related telephone surveys: Results from the national AIDS behavioral survey (NABS) non‐response study, The Journal of Sex Research, 36:1, 52-58, DOI: 10.1080/00224499909551967



* Robert M. Groves, Emilia Peytcheva; The Impact of Nonresponse Rates on Nonresponse Bias: A Meta-Analysis, Public Opinion Quarterly, Volume 72, Issue 2, 1 January 2008, Pages 167–189, DOI: 10.1093/poq/nfn011



* Michael P. McDonald, Matthew P. Thornburg; Interview Mode Effects: The Case of Exit Polls and Early Voting, Public Opinion Quarterly, Volume 76, Issue 2, 1 July 2012, Pages 326–349, DOI: 10.1093/poq/nfs025
























  • 1




    +1. It might be helpful to point out that just because there are non-respondents doesn't mean there is non-response bias. And reducing non-respondents isn't particularly helpful in reducing non-response bias. Cit.: academic.oup.com/poq/article/70/5/646/4084443
    – indigochild
    Aug 23 at 22:47






  • 1




    @indigochild that article doesn't say reducing non-respondents isn't particularly helpful in reducing non-response bias. It says reducing non-response rate doesn't necessarily reduce non-response bias. This is a very different statement. Specifically, if you reduce the non-response rate in a way that is biased, you may end up increasing non-response bias.
    – De Novo
    Aug 23 at 23:25










  • One common response to non-response bias is for pollsters to re-weight different responses so that the adjusted sample represents the true demographics of the target population (e.g. voters or citizens). This helps, but can still be problematic if the respondents from a demographic with a high non-response rate are atypical of that demographic, or if the target population's demographics are in flux.
    – ohwilleke
    Aug 27 at 2:34

















10 votes













Defining an opinion poll as "an assessment of public opinion obtained by questioning a representative sample," the answer to your question is no. Opinion polls are not always overly representative of extreme opinions.



If you were to poll 20 people in your company (selected randomly) and they all respond, your poll would not be overly representative of extreme opinions, on average.



The problem arises with voluntary response survey methodology and is exacerbated by a low response rate. The problem of non-representative samples is not unique to opinion polls; it affects surveys in general.
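The random-sampling claim can be illustrated with a toy simulation (the company and its true approval rate are made up): with random selection and full response, repeated polls of 20 people average out to the true rate, even though any single poll has sampling error.

```python
import random
import statistics

random.seed(0)

# Hypothetical workforce: 1 = approves of the CEO, 0 = does not.
company = [1] * 130 + [0] * 70          # true approval rate: 65%

# Random selection plus full response: across many repeated polls of
# 20 people, the average estimate converges to the true rate, i.e. the
# design is unbiased on average.
estimates = [statistics.mean(random.sample(company, 20))
             for _ in range(5_000)]

print(f"true rate 0.650, mean poll estimate {statistics.mean(estimates):.3f}")
```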






















  • 3




    This minimal jargon-free answer is the most helpful because it zeros in on response rate being key.
    – agc
    Aug 23 at 17:44










  • "If you were to poll 20 people in your company (selected randomly) and they all respond, your poll would not be overly representative of extreme opinions, on average" – that would depend very much on the company or organisation. If I poll 20 people at Blackwater I may get very different views than if I poll 20 people at the International Peace Research Association. Certainly if my questions are related to war and peace, but probably also if they aren't.
    – gerrit
    Aug 24 at 9:40










  • @gerrit My intention in that toy example is to get a sense of opinions from the company only (e.g. do you think the CEO is doing a good job? Do you want Pizza Friday's?).
    – Underminer
    Aug 24 at 13:09










  • @gerrit It is implicit in the discussion of any sampling that, at best, the sample will be representative of the population sampled—claims generalizing outward from the population sampled must be worded very carefully and very weakly, because you are basically claiming your population is, itself, a representative sample of some even-wider population. It is not usually assumed that someone is making such a claim unless they explicitly say so. So Underminer here would be assumed—as they confirm—to be talking about representing that company’s employees and nothing wider.
    – KRyan
    Aug 24 at 14:14










  • (That said, misuse and abuse of statistics frequently takes the form of improperly generalizing from one population to a wider one, or a separate one altogether, and there is substantial room for improvement in the general public’s understanding of sampling and the problems that such abuses cause.)
    – KRyan
    Aug 24 at 14:15

















3 votes













This is a known problem with polls. Along with sampling error (they accidentally picked an unrepresentative group of people to call), polls may be skewed by people refusing to participate.



Pollsters attempt to control for both effects by comparing to more reliable data. For example, if it is a political poll, they may compare the demographics of the respondents to those of the region as a whole. In particular, they often try to control for political identification, that is to say, party registration. If their poll has too many Democrats to Republicans, they may decide to call more people looking for more Republicans. But of course that has a problem too.



What if the Republicans who answer the poll are unrepresentative of Republicans overall? For example, there is a group of Republicans called "Never Trump" Republicans. What if they answer polls more often than pro-Trump Republicans? This might introduce more skew.



Most pollsters use other demographic data. They'll often ask age for example. And they may also use other questions in the poll as controls. For example, they may ask someone's opinion on Donald Trump or how the person voted in the last election (or both).



Some pollsters do reference polls with the same people every time. These give baselines for how people should answer certain questions. They or other pollsters can then compare other polls to these baseline polls to look for skew, or compare the baseline polls to actual data. For example, if you're polling an election, the election itself is a baseline to which the polls can be compared. Since who voted is generally public information, you can compare the actual demographics to the polled demographics.
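The demographic re-weighting described above can be sketched as follows (all numbers are hypothetical: a six-person sample that over-represents Democrats in a region whose registration is 50/50). Each response is re-weighted by (population share / sample share) of its group, which is the basic idea behind post-stratification.

```python
# Hypothetical post-stratification example: the sample over-represents
# Democrats relative to a 50/50 region, so each response is re-weighted
# by (population share / sample share) of its party group.

sample = [("D", 1), ("D", 1), ("D", 0), ("D", 1),   # (party, answer)
          ("R", 0), ("R", 1)]                        # 4 D, 2 R
population_share = {"D": 0.5, "R": 0.5}              # true registration mix

sample_share = {p: sum(1 for q, _ in sample if q == p) / len(sample)
                for p in population_share}
weights = {p: population_share[p] / sample_share[p]
           for p in population_share}

raw = sum(a for _, a in sample) / len(sample)        # unweighted estimate
weighted = (sum(weights[p] * a for p, a in sample)
            / sum(weights[p] for p, _ in sample))    # re-weighted estimate

print(f"raw estimate {raw:.3f}, weighted estimate {weighted:.3f}")
```

Note that, as the answer says, this only fixes the demographic mix; it cannot fix respondents who are atypical of their own demographic.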
























  • 1




    Typically the polls don't call more people of a different demographic, but rather control for the difference by weighting the response statistics with the region's proportional statistics... So if more Democrats responded to the poll, but the sample region has more Republicans, the responding samples are weighted to reflect the difference.
    – hszmv
    Aug 23 at 14:00






  • 2




    Maybe I'm misinterpreting what you mean by "accidentally", but if a measurement from a random sample produces a statistic that differs from the population, that difference is due to random error, not bias. Bias is a systematic error, not an error due to chance. Sampling bias is when certain individuals are less likely to be included in a sample because of the sampling method.
    – De Novo
    Aug 23 at 21:04










  • -1 for the reason De Novo said. Error is not bias.
    – indigochild
    Aug 24 at 18:57

















-3 votes













The TV series Yes Prime Minister (S01E02 "The Ministerial Broadcast") explains all about this:




Sir Humphrey Appleby: [demonstrating how public surveys can reach
opposite conclusions] Mr. Woolley, are you worried about the rise in
crime among teenagers?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Do you think there is lack of discipline and
vigorous training in our Comprehensive Schools?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Do you think young people welcome some structure
and leadership in their lives?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Do they respond to a challenge?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Might you be in favour of reintroducing National
Service?



Bernard Woolley: Er, I might be.



Sir Humphrey Appleby: Yes or no?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Of course, after all you've said you can't say
no to that. On the other hand, the surveys can reach opposite
conclusions.



[survey two]



Sir Humphrey Appleby: Mr. Woolley, are you worried about the danger of
war?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Are you unhappy about the growth of armaments?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Do you think there's a danger in giving young
people guns and teaching them how to kill?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Do you think it's wrong to force people to take
arms against their will?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Would you oppose the reintroduction of
conscription?



Bernard Woolley: Yes.



[does a double-take]



Sir Humphrey Appleby: There you are, Bernard. The perfectly balanced
sample.




More often than not opinion polls support the views, desires or prejudices of the people that funded them. This may explain to some extent the differences between the polls and the election results for Trump-Clinton and Brexit, for example.




























  • The episode you refer to is about different phrasings of questions to get a specific outcome. If you phrase questions right (e.g. who will you vote for, will you vote yes or no on the referendum) and make the questions public with the results (so they can be scrutinised) that problem shouldn't exist (or be known by those reading the results).
    – JJJ
    Aug 23 at 14:40






  • 1




    Yes. But as Sir Humphrey said, there must be some honest pollsters around, he just hasn't happened to meet any.
    – Jonathan Rosenne
    Aug 23 at 14:43










  • Not sure what you mean by honest. I think the series only mentions using specific wording to get a certain outcome. Are you suggesting pollsters make up their own results? If that were the case and it came out, that pollster would be done (nobody would hire them, as the results would be useless).
    – JJJ
    Aug 23 at 14:51






  • 1




    youtube.com/watch?v=PKiTNC96yvs
    – Jonathan Rosenne
    Aug 23 at 17:49






  • 1




    @JJJ, if you think that a biased or fraudulent pollster has no use and would therefore never be hired, you are clearly unfamiliar with American media.
    – Wildcard
    Aug 24 at 2:35


















up vote
27
down vote













The survey/poll participants not being all equally likely to answer is a well-known problem called participation bias aka non-response bias. A typical example:




A study of nonrespondents from the National AIDS Behavioral Survey (NABS) was conducted in 1990 to attempt to identify potential differences in participants and non‐participants that may influence estimates of sexual risk behavior. [...] Results indicate that refusers are older, attend church more often, are less likely to believe in the confidentiality of surveys, and have lower sexual self disclosure.




The effect probably depends on the topic of the poll. One (highly cited) meta-analysis has among its conclusions:




Large nonresponse biases can happen in surveys.
High response rates can reduce the risks of bias. They do this less when
the causes of participation are highly correlated with the survey variables.
Indeed, in the studies we assembled, some surveys with low nonresponse
rates have estimates with high relative nonresponse bias.




The interviewer effect is related in that some people may refuse to talk to some interviewers e.g. based on the race or age of both, or just not like to talk to people in general, creating a specific non-response bias. E.g.:




An analysis of the 2004 and 2008 [US election] phone surveys and exit polls reveals differing patterns of item non-response across the two interview modes.



enter image description here




So to answer your title question "Are opinion polls always representative of the extreme opinions?" (My emphasis.) I think the answer is clearly no. The non-response bias isn't necessarily equally likely to favor both extremes (and to exclude the "middle"). Depending on the study design (topic, participants, mode of survey), it can favor one extreme, both or the "middle". In fact, detecting the pattern of the non-response bias (for a given study) is a non-trivial problem. Quoting again from the meta-analysis:




We cannot rely on full or partial canceling of nonresponse biases when we subtract one subclass mean from another. The bias of the difference is a function of differences of response rates and covariances between response propensities of the subgroups and the survey variable.




References



* Heather A. Turner (1999) Participation bias in AIDS‐related telephone surveys: Results from the national AIDS behavioral survey (NABS) non‐response study, The Journal of Sex Research, 36:1, 52-58, DOI: 10.1080/00224499909551967



* Robert M. Groves, Emilia Peytcheva; The Impact of Nonresponse Rates on Nonresponse Bias: A Meta-Analysis, Public Opinion Quarterly, Volume 72, Issue 2, 1 January 2008, Pages 167–189, DOI: 10.1093/poq/nfn011



* Michael P. McDonald, Matthew P. Thornburg; Interview Mode Effects: The Case of Exit Polls and Early Voting, Public Opinion Quarterly, Volume 76, Issue 2, 1 July 2012, Pages 326–349, DOI: 10.1093/poq/nfs025
























  • 1




    +1. It might be helpful to point out that just because there are non-respondents doesn't mean there is non-response bias. And reducing non-respondents isn't particularly helpful in reducing non-response bias. Cit.: academic.oup.com/poq/article/70/5/646/4084443
    – indigochild
    Aug 23 at 22:47






  • 1




    @indigochild that article doesn't say reducing non-respondents isn't particularly helpful in reducing non-response bias. It says reducing non-response rate doesn't necessarily reduce non-response bias. This is a very different statement. Specifically, if you reduce the non-response rate in a way that is biased, you may end up increasing non-response bias.
    – De Novo
    Aug 23 at 23:25










  • One common response to non-response bias is for pollsters to re-weight different responses so that the adjusted sample represents the true demographics of the target population (e.g. voters or citizens). This helps, but can still be problematic if the respondents from a demographic with a high non-response rate are atypical of that demographic, or if the target population's demographics are in flux.
    – ohwilleke
    Aug 27 at 2:34














answered Aug 23 at 12:29 by Fizz (edited Aug 23 at 13:05)
up vote
10
down vote













Defining an opinion poll as "an assessment of public opinion obtained by questioning a representative sample," the answer to your question is no. Opinion polls are not always overly representative of extreme opinions.



If you were to poll 20 people in your company (selected randomly) and they all respond, your poll would not be overly representative of extreme opinions, on average.



The problem arises with voluntary-response survey methodology and is exacerbated by a low response rate. The problem of non-representative samples is not unique to opinion polls; it affects polls in general.
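To make the contrast concrete, here is a small simulation (hypothetical population and response rates, not real data): a random sample with full response recovers the population mean, while a voluntary-response poll whose volunteers skew toward one opinion does not.

```python
import random

random.seed(42)  # deterministic for reproducibility

# Hypothetical population of 10,000, opinion coded -1 / 0 / +1.
population = [-1] * 2000 + [0] * 6000 + [1] * 2000
true_mean = sum(population) / len(population)

# Random sampling, everyone responds: average many 20-person polls.
estimates = [sum(random.sample(population, 20)) / 20 for _ in range(2000)]
random_poll_mean = sum(estimates) / len(estimates)

# Voluntary response: people with opinion +1 volunteer at 30%,
# everyone else at only 5%.
volunteers = [o for o in population
              if random.random() < (0.30 if o == 1 else 0.05)]
voluntary_mean = sum(volunteers) / len(volunteers)

print(true_mean, round(random_poll_mean, 3), round(voluntary_mean, 3))
```

The random polls scatter around the true mean of 0, while the voluntary-response estimate lands well above it, even though the voluntary "sample" is far larger than any single 20-person poll.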






















  • 3




    This minimal jargon-free answer is the most helpful because it zeros in on response rate being key.
    – agc
    Aug 23 at 17:44










  • "If you were to poll 20 people in your company (selected randomly) and they all respond, your poll would not be overly representative of extreme opinions, on average": that would depend very much on the company or organisation. If I poll 20 people at Blackwater I may get very different views than if I poll 20 people at the International Peace Research Association. Certainly if my questions are related to war and peace, but probably also if they aren't.
    – gerrit
    Aug 24 at 9:40










  • @gerrit My intention in that toy example is to get a sense of opinions from within the company only (e.g. do you think the CEO is doing a good job? Do you want Pizza Fridays?).
    – Underminer
    Aug 24 at 13:09










  • @gerrit It is implicit in the discussion of any sampling that, at best, the sample will be representative of the population sampled—claims generalizing outward from the population sampled must be worded very carefully and very weakly, because you are basically claiming your population is, itself, a representative sample of some even-wider population. It is not usually assumed that someone is making such a claim unless they explicitly say so. So Underminer here would be assumed—as they confirm—to be talking about representing that company’s employees and nothing wider.
    – KRyan
    Aug 24 at 14:14










  • (That said, misuse and abuse of statistics frequently takes the form of improperly generalizing from one population to a wider one, or a separate one altogether, and there is substantial room for improvement in the general public’s understanding of sampling and the problems that such abuses cause.)
    – KRyan
    Aug 24 at 14:15














answered Aug 23 at 15:11 by Underminer
up vote
3
down vote













This is a known problem with polls. Along with sampling error (they accidentally picked an unrepresentative group of people to call), polls may be skewed by people refusing to participate.



Pollsters attempt to control for both effects by comparing to more reliable data. For example, in a political poll they may compare the demographics of the respondents to those of the region as a whole. In particular, they often try to control for political identification, i.e., party registration. If their poll has too many Democrats relative to Republicans, they may decide to call more people, looking specifically for Republicans. But of course that has a problem too.



What if the Republicans who answer the poll are unrepresentative of Republicans overall? For example, there is a group of Republicans called "Never Trump" Republicans. What if they answer polls more often than pro-Trump Republicans? This might introduce more skew.



Most pollsters use other demographic data. They'll often ask age for example. And they may also use other questions in the poll as controls. For example, they may ask someone's opinion on Donald Trump or how the person voted in the last election (or both).



Some pollsters run reference polls with the same panel of people every time. These give baselines for how people answer certain questions. They or other pollsters can then compare other polls to those baselines to look for skew, or compare the baselines to actual data. For example, if you're polling an election, the election itself is a baseline against which the polls can be compared: since who voted is generally public information, you can compare the actual demographics of the electorate to the polled demographics.
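The demographic reweighting described above can be sketched in a few lines of Python. All counts and shares below are hypothetical, chosen only to show the mechanics of post-stratification:

```python
# Post-stratification sketch: reweight poll respondents so that party
# shares match the region's known proportions. All numbers hypothetical.

# Hypothetical poll: respondents by party, and how many of each group
# support some ballot measure.
respondents = {"D": 600, "R": 400}
supporters  = {"D": 480, "R": 100}

# Hypothetical region: registration is actually 45% D / 55% R.
population_share = {"D": 0.45, "R": 0.55}

total = sum(respondents.values())

# Weight per party = (true population share) / (share in the sample).
weights = {p: population_share[p] / (respondents[p] / total)
           for p in respondents}

# Raw (unweighted) support rate.
raw = sum(supporters.values()) / total

# Weighted support rate: each respondent counts as weights[party] people.
weighted = sum(supporters[p] * weights[p] for p in respondents) / total

print(f"raw support:      {raw:.1%}")
print(f"weighted support: {weighted:.1%}")
```

With these made-up numbers the over-sampled Democrats are weighted down (weight below 1) and the under-sampled Republicans weighted up, which pulls the headline figure well below the raw tally.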
























  • 1




    Typically the polls don't call more people of a different demographic, but rather control for the difference by weighting the responses with the region's proportional statistics... So if more Democrats responded to the poll, but the sample region has more Republicans, the responses are weighted to reflect the difference.
    – hszmv
    Aug 23 at 14:00






  • 2




    Maybe I'm misinterpreting what you mean by "accidentally", but if a measurement from a random sample produces a statistic that differs from the population, that difference is due to random error, not bias. Bias is a systematic error, not an error due to chance. Sampling bias is when certain individuals are less likely to be included in a sample because of the sampling method.
    – De Novo
    Aug 23 at 21:04
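The random-error/systematic-bias distinction matters in practice: random sampling error shrinks as the sample grows, while a systematic refusal pattern does not. A small simulation (all numbers hypothetical) illustrates this:

```python
import random

# Simulation: random sampling error averages out with larger samples,
# but systematic refusal (non-response bias) does not. Numbers hypothetical.
random.seed(0)

TRUE_SUPPORT = 0.50  # true population support for some question

def poll(n, refusal_if_opposed=0.0):
    """Simulate n completed interviews. Opponents hang up with the given
    probability, so completed interviews over-represent supporters."""
    results = []
    while len(results) < n:
        supports = random.random() < TRUE_SUPPORT
        if not supports and random.random() < refusal_if_opposed:
            continue  # refused: never enters the sample
        results.append(supports)
    return sum(results) / n

for n in (100, 10_000):
    unbiased = poll(n)
    biased = poll(n, refusal_if_opposed=0.5)
    print(f"n={n:>6}: no-refusal poll {unbiased:.3f}, "
          f"opponents-refuse poll {biased:.3f}")
```

No matter how large n gets, the biased poll converges to about 2/3 support (0.5 / (0.5 + 0.25)) rather than the true 0.5: more interviews reduce the noise but not the skew.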










  • -1 for the reason De Novo said. Error is not bias.
    – indigochild
    Aug 24 at 18:57














edited Aug 24 at 23:22

























answered Aug 23 at 11:17









Brythan

60.7k7122213





















up vote
-3
down vote













The TV series Yes Prime Minister (S01E02 "The Ministerial Broadcast") explains all about this:




Sir Humphrey Appleby: [demonstrating how public surveys can reach
opposite conclusions] Mr. Woolley, are you worried about the rise in
crime among teenagers?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Do you think there is lack of discipline and
vigorous training in our Comprehensive Schools?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Do you think young people welcome some structure
and leadership in their lives?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Do they respond to a challenge?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Might you be in favour of reintroducing National
Service?



Bernard Woolley: Er, I might be.



Sir Humphrey Appleby: Yes or no?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Of course, after all you've said you can't say
no to that. On the other hand, the surveys can reach opposite
conclusions.



[survey two]



Sir Humphrey Appleby: Mr. Woolley, are you worried about the danger of
war?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Are you unhappy about the growth of armaments?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Do you think there's a danger in giving young
people guns and teaching them how to kill?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Do you think it's wrong to force people to take
arms against their will?



Bernard Woolley: Yes.



Sir Humphrey Appleby: Would you oppose the reintroduction of
conscription?



Bernard Woolley: Yes.



[does a double-take]



Sir Humphrey Appleby: There you are, Bernard. The perfectly balanced
sample.




More often than not, opinion polls support the views, desires, or prejudices of the people who funded them. This may explain, to some extent, the differences between the polls and the actual results for Trump–Clinton and Brexit, for example.




























  • The episode you refer to is about different phrasings of questions to get a specific outcome. If you phrase questions right (e.g. who will you vote for, will you vote yes or no on the referendum) and make the questions public with the results (so they can be scrutinised) that problem shouldn't exist (or be known by those reading the results).
    – JJJ
    Aug 23 at 14:40






  • 1




    Yes. But as Sir Humphrey said, there must be some honest pollsters around, he just hasn't happened to meet any.
    – Jonathan Rosenne
    Aug 23 at 14:43










  • Not sure what you mean by honest. I think the series only mentions using specific wording to get a certain outcome. Are you suggesting pollsters make up their own results? If that were the case and it came out, that pollster would be done (nobody would hire them, as the results would be useless).
    – JJJ
    Aug 23 at 14:51






  • 1




    youtube.com/watch?v=PKiTNC96yvs
    – Jonathan Rosenne
    Aug 23 at 17:49






  • 1




    @JJJ, if you think that a biased or fraudulent pollster has no use and would therefore never be hired, you are clearly unfamiliar with American media.
    – Wildcard
    Aug 24 at 2:35














edited Aug 24 at 18:29









agc

3,8141243














answered Aug 23 at 14:14









Jonathan Rosenne

2676














