Picking Apart Polls
By Blaire Ward, Evaluation Coordinator
January 12, 2017
Last week, the abstinence-only group Ascend released a report claiming that the majority of Americans oppose current sex education policy in the United States, and support “sexual risk avoidance” (abstinence-only) initiatives. Colorado Youth Matter immediately published a press release pointing out the flaws in Ascend’s conclusions, which stem largely from sloppy and unethical survey methodology.
Ascend’s report is reflective of the current culture of widespread misinformation. In a time when fake news dominates our news feeds and fact becomes muddled with opinion, it is increasingly important to analyze poll results with a critical eye. It is not enough to trust your favorite media outlets to accurately summarize research. As an organization that prides itself on our evidence-based practices, we think that it’s important that our community understands how to independently assess the veracity of a poll.
Polling is not an exact science (as illustrated by our recent presidential election), but there are methodical and ethical polls out there that are worth listening to, if you know how to find them. So if you’re ready to learn how to conduct your own analysis, check out the list below of some key indicators that you should look for in a poll!
There are certain practices that separate unethical polls from ethical polls. The American Association for Public Opinion Research (AAPOR) has a list of what information pollsters are ethically compelled to share with the public, including...
1) How many people did they poll? When a pollster wants to learn about a large population like ‘American females,’ it’s both unrealistic and statistically unnecessary for them to speak to every female in the country. Instead, they select a representative sample of this larger population and use the gathered data to generalize about the larger group. For example, The Pew Research Center typically polls 1,500 American adults when collecting data meant to represent the broader American adult population of ~235 million people. An ethical poll will tell you the sample size (how many people were polled). There are various sample size calculators that you can use to gauge how much precision a poll of a given size can offer.
2) How did they choose their sample? There are a host of strategies that might be used to decide who will participate in a poll. Ethical polls will describe their sampling methodology so that you can draw your own conclusions about how generalizable the data are. For example, if a poll is meant to represent the attitudes of American adults, you would want to know the geographic makeup of its participants.
3) Does the sample represent the population that the poll meant to draw conclusions about? A poll can only assess what the people who participate believe. A good poll will select participants who closely resemble and represent their larger group, but you should be able to assess the methodology yourself rather than assume that ethical practices were used. For example, data collected from an emailed survey of freshmen at CU Denver cannot represent all public college freshmen, but data from a survey of freshmen at multiple schools across the country might.
4) Who conducted the poll? Formal, research-minded polls from respected polling organizations are held to a code of ethics and are expected to adhere to academically sound best practices. Informal polls are more casual; they may be distributed by an individual with a SurveyMonkey account or posted on a news website, for example. While they lack the rigor necessary to be cited in an academic journal, polls on news websites are sometimes erroneously used to draw conclusions about public sentiment.
5) How was the poll conducted? An ethical poll will readily present information about its methodology. Most modern polls are conducted by phone or online. Each method has merits and drawbacks that may affect how you regard the data collected. For example, telephone polls allow pollsters to target specific area codes, but are often limited to landlines, whose users tend to be considerably older than the general population.
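The sample-size point above (item 1) has a simple mathematical counterpart that the standard calculators rely on: for a simple random sample, the 95% margin of error shrinks roughly with the square root of the sample size. Here is a minimal Python sketch using the conventional conservative assumptions (p = 0.5, z = 1.96); the function name is ours, not from any polling library:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a simple random sample of size n.

    p = 0.5 is the most conservative assumption (it maximizes the margin);
    z = 1.96 corresponds to a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A Pew-style sample of 1,500 adults:
print(f"{margin_of_error(1500):.1%}")  # prints 2.5%
```

This is why 1,500 respondents can reasonably stand in for ~235 million adults: quadrupling the sample would only cut the margin of error in half, so larger samples quickly stop being worth the cost.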
If a poll or polling organization is reliable, you should be able to readily find all of the above information. If a glaring gap exists, do not put much stock in the poll’s findings.
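The sampling ideas in items 2 and 3 can be sketched in a few lines of Python. The roster below is hypothetical; the point is that `random.sample` draws a simple random sample, in which every member of the population has an equal chance of selection, unlike a convenience sample drawn from a single group:

```python
import random

# Hypothetical population roster (stand-in for, say, all public
# college freshmen nationwide).
population = [f"respondent_{i}" for i in range(10_000)]

# A convenience sample: only the first 500 names on the list
# (analogous to surveying a single campus).
convenience = population[:500]

random.seed(42)  # fixed seed so the draw is reproducible
# A simple random sample: 500 people drawn without replacement,
# each equally likely to be chosen.
simple_random = random.sample(population, k=500)

print(len(simple_random), len(set(simple_random)))  # prints 500 500
```

The convenience sample systematically excludes most of the population, which is exactly the kind of methodological detail an ethical poll should disclose.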
What makes a question good or bad?
Once you understand how the poll was conducted, it’s important to scrutinize how its questions are framed. It is critical that questions are clear (can only be interpreted one way), concise (are only asking about one thing), and unbiased (not implying a ‘correct’ answer). It’s also important that surveys use straightforward language that isn’t open to personal or biased interpretation. Questions should not use vague terms like ‘ok,’ ‘safe,’ or ‘teens’ without defining them to the respondents. Surveys that deviate from these principles will have results that are muddled and uncertain.
For example, here’s a poorly written question from Ascend’s recent survey:
“The CDC reports that condoms offer limited protection against sexually transmitted diseases (STDs), and other contraceptives offer no protection against STDs. How important is it that teens know that condoms offer limited protection and other contraceptives offer no protection against STDs?”
This question is full of problems. First, it asks respondents to consider disagreeing with a large, respected, government institution. Respondents may feel that agreeing with an implied belief of the CDC is socially desirable and feel pressured to have their answer match. The question’s second half asks two questions at once. The pollsters should have asked “How important is it that teens know that condoms offer limited protection?” and “How important is it that teens know that other contraceptives offer no protection against STDs?” as separate questions. It is unfair to ask participants to give one answer to two questions.
In contrast, here’s a well-written question from a 2013 Gallup poll:
“Do you think that abortions should be legal under any circumstances, legal only under certain circumstances, or illegal in all circumstances?”
The question is clear and straightforward. It uses uncomplicated language to ask only one question. Respondents are asked to put themselves into distinct categories that both cover the spectrum of possible answers and do not overlap with each other. The simplicity of the question means that there is little room for confusion about what respondents meant with their answers.
In a Nutshell
Polls are a helpful tool for getting a snapshot of how people currently feel about a particular issue. However, it is important to recognize the large spectrum of methodologies used, especially given the broader post-election skepticism about the reliability of polling. Rather than dismiss all polls, let’s scrutinize each poll on its own merits, especially when poll results conflict with a larger body of research. Hopefully this blog gave you some tools and tips to determine which polls are trustworthy, and which ones aren’t worth your time!