A great article on polling in the Tampa Tribune

William March has written an article that describes polling in plain language and explains why the practice is so widely accepted.

Here are some excerpts:

Understanding how polls work and how they’re done, experts say, can help people understand what poll results mean, even when they conflict.

For the most part, independent pollsters who do the typical media polls broadcast or published in major newspapers “do a pretty good job,” said Mark Abrahamson, head of the Roper Center for Public Opinion Research.

So why are there still conflicting results …?

Because science isn’t all that’s involved in scientific polling, [Mark Blumenthal, a former pollster who publishes the analytical Web site Pollster.com,] said.

“The whole notion that polls are ‘scientific’ comes solely from the theory of probability,” he said.

Probability is a branch of mathematics. Its laws say a sample chosen at random from a group has a predictable chance of reflecting the makeup of the group, depending on the size of the sample.

A poll with a sample of 1,000, for example, has a 95 percent chance of being accurate to within 3 percentage points, regardless of the size of the entire group.
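That 3-point figure falls out of the standard margin-of-error formula. Here is a minimal sketch of the arithmetic (1.96 is the z-score for 95 percent confidence, and p = 0.5 is the worst-case assumption that maximizes the margin):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n.

    p=0.5 maximizes p*(1-p), so this is the worst-case (widest) margin.
    Note that only n appears here -- the size of the whole population
    never enters the formula, which is the point the article makes.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(1000), 1))  # ~3.1 points for n=1,000
print(round(100 * margin_of_error(400), 1))   # ~4.9 points for n=400
```

Quadrupling the sample only halves the margin, which is why most media polls stop around 1,000 respondents.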

The math is as reliable as two plus two – so reliable that court decisions allow auditors to testify based on random sampling of financial records, said Scott Keeter, chief pollster at the Pew Research Center.

“If you don’t believe in sampling, then the next time your doctor gives you a blood test, you should tell him to take all your blood,” Keeter said.

Now, to be fair, the article also says this:

But the mathematical laws assume the sample is truly random and the questions accurately elicit people’s views.

Getting that part right “is more art than science,” Blumenthal said. “In the real world, you can’t sample voters perfectly, and you can’t ask perfect questions.”

People also put off making up their minds on things and change their views as news events happen.

That makes early polls less reliable and more changeable.

It goes on to say:

In any poll, decisions about how to find and interview respondents affect the outcome. Some are obvious: biased question wording or a “self-selected” sample in which people respond voluntarily to a broadcast or published solicitation.

Others are more subtle:

•Voter screens. Should the poll include any registered voter or seek likely voters, and how do you find them?

“The vast majority of people say they’re going to vote, but obviously the majority don’t,” [Doug Schwartz, Quinnipiac’s polling director,] said. “Part of the art of polling is trying to identify who’s likely to vote.”

Blumenthal, who has surveyed major polling companies on their techniques, said Gallup asks respondents such questions as whether they know the location of their polling place and whether they voted in the last election or two. It ranks each respondent and then applies a cutoff, using only the top-ranked respondents in a percentage matching the expected voter turnout.

Ah, but Mr. Schwartz, some states allow public access to voter files. In Florida, CNN and other media companies sued for access to the state’s file because of the controversy over the purging of convicted felons from the rolls. The voter file includes each voter’s history, and political consultants know that history is a strong gauge of whether that voter will turn out in a future election. So rather than crafting screen questions that may or may not identify “likely voters” accurately, a polling company can simply draw its sample from a universe of voters filtered by voting history.
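The two screening approaches contrasted above – Gallup-style rank-and-cutoff questions versus filtering on voter-file history – can be sketched roughly like this. The field names and scoring here are hypothetical illustrations, not either firm’s actual model:

```python
def gallup_style_screen(respondents, expected_turnout):
    """Rank respondents by a score built from screen questions
    (knows polling place, voted last time, etc.), then keep only
    the top slice matching expected turnout (e.g. 0.5 for 50%)."""
    ranked = sorted(respondents, key=lambda r: r["screen_score"], reverse=True)
    cutoff = round(len(ranked) * expected_turnout)
    return ranked[:cutoff]

def voter_file_screen(voters, min_past_votes=2):
    """Skip the screen questions entirely: keep only voters whose
    public file shows they actually voted in recent elections."""
    return [v for v in voters if sum(v["vote_history"]) >= min_past_votes]

# Toy sample: vote_history marks turnout in the last three elections.
sample = [
    {"name": "A", "screen_score": 9, "vote_history": [1, 1, 1]},
    {"name": "B", "screen_score": 7, "vote_history": [0, 1, 1]},
    {"name": "C", "screen_score": 4, "vote_history": [0, 0, 1]},
    {"name": "D", "screen_score": 2, "vote_history": [0, 0, 0]},
]

likely = gallup_style_screen(sample, expected_turnout=0.5)  # keeps A and B
history = voter_file_screen(sample)                         # also A and B
```

The first approach depends on respondents answering the screen questions honestly; the second replaces those answers with observed behavior, which is the argument made above.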

•Demographic balance. Certain categories of people are harder to contact – young people and minorities, for example. To make up for that, pollsters weight their results, counting those groups more heavily.

The technique, pollsters say, yields surprising accuracy.

It also raises questions: What categories should be weighted? What about political parties? Should the weighting match the population or just registered voters? There are no universally accepted answers.
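Mechanically, the weighting step itself is simple even though the choices it raises are contested. A minimal post-stratification sketch, with made-up group shares for illustration:

```python
def weighted_support(responses, population_shares):
    """responses: group -> (respondents reached, number supporting).
    Each group's support rate is scaled by its true population share,
    so a group that was hard to reach (young voters, say) still counts
    at its real weight in the final number."""
    total = 0.0
    for group, (reached, supporting) in responses.items():
        total += population_shares[group] * (supporting / reached)
    return total

responses = {"young": (100, 60), "older": (400, 160)}  # young undersampled
shares = {"young": 0.30, "older": 0.70}                # true population mix

# Unweighted, support looks like (60 + 160) / 500 = 44%.
print(round(weighted_support(responses, shares), 2))   # weighted: 0.46
```

The unresolved questions in the article live in the `shares` dictionary: whether those targets should come from census figures, registered voters, or party registration is exactly where pollsters disagree.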

•“Don’t know.” In the voting booth, people can’t choose “don’t know,” so a large “don’t know” category cuts a poll’s accuracy in predicting an election outcome.

Pollsters debate whether to push “don’t know” respondents to answer. Not pushing may overlook people who are leaning strongly, but pushing too hard can force inaccurate answers.

Blumenthal said Washington Post polls tend to have small “don’t know” categories, an indication of pushing for answers, and Fox News has large ones.

•Languages. Brad Coker of Mason-Dixon said for any Florida poll to be accurate, only bilingual interviewers should make calls in South Florida. Even accents matter, he said – Cuban-accented interviewers may get better response rates from the large Cuban-American population.

Click over to the Tampa Trib site before the article gets buried in their archive.

About Jim Johnson

Editor and publisher of The State of Sunshine.

One Response to A great article on polling in the Tampa Tribune

  1. voxpop says:

    I can’t believe, with all else going on in Tampa (y’know, dog bites, corruption, web legalese), that THIS took the front page of the Tampa Trib. Way out there.
    I guess they are about to release a bunch of UNbelievable polls to push people in one direction on mass transit and other items. It’s so transparent.
    Polls by some companies are like the 30-year city plan. Almost as good as the paper they are written on … (not quite)
    I go pretty far researching some of these companies and note this: even though Zogby has sailed along on the good side of my radar, a friend lately mentioned to me that they had “found” something or other UNbelievable. And I began to believe that perhaps Zogby built up all that good cred so that they could push the BIG LIE past.
    And it’s not my fault this happened to the world.
