May 22, 2008

Pollster Secrets Revealed!

By: Kristen Soltis

Aaron Sorkin’s screenplay for The American President tells the tale of the President of the United States, Andrew Shepherd, a widower who falls in love with a lobbyist. The fallout from his romantic choices causes President Shepherd’s approval rating to fall from 63% to 46% (ouch!), creating a whole mess of problems with Congress and with Shepherd’s re-election hopes. In the midst of this, to whom does President Shepherd turn for answers?

The pollster.

When the pollster chalks the drop in Shepherd’s job approval up to national mood swings, an aide fires back: “Mood swings? Nineteen post-graduate degrees in mathematics, and your best explanation for going from a 63 to a 46 percent approval rating in five weeks is that the country is having mood swings?”

The pollster’s response? Typical – and perfect: “Well, I could explain it better, but then I’d need charts, and graphs, and an easel.”

Truth be told, easels are great. PowerPoint, too (though I admit to preferring Keynote). But reliance on these instruments makes polling seem inscrutable, like nuclear physics. The craft has been shrouded in complexity and obscurity for too long. Even some of the brightest minds in Washington have commented to me: “Oh, I just don’t understand numbers. I’m not a pollster.” With a flash of bright bar charts and a dazzling array of numbers, even the smartest political mind can feel hopelessly lost when staring at a book of survey crosstabs. (For those who don’t speak pollster, crosstabs are the books of numbers pollsters get back as the raw “results” of a survey, broken out by demographic groups. They’re often hundreds of pages long and can display more than two hundred numbers per page; they almost seem intentionally designed to look scary and intimidating.)

But (obviously) the pollsters don’t have it perfect. Take, for example, the 2008 New Hampshire Democratic Primary. A quick trip to Real Clear Politics and a little digging will bring up enough information to construct a veritable monument to one of the polling world’s recent moments of disgrace.

Polling conducted in the three days before the election by even the most reputable of sources was dramatically, even embarrassingly, wrong. Rasmussen’s enormous sample of 1,774 likely voters had Obama up seven points over Clinton. The Reuters/C-SPAN/Zogby effort was even further off, showing Obama with a 13-point advantage. What was the final result? Clinton, winning by 2.6%.

So what went wrong? If polling is such an intricate science, conducted by spreadsheet-wielding brainiacs trusted blindly by the media, what can the average political operative, or even just a well-informed observer, do to properly evaluate the numbers he or she sees touted by the pundits?

Parsing a poll is, by and large, not rocket science. Certainly, the very best pollsters out there will go through mounds of data, using complex mathematical methods and demographic breaks to tell a story about the electorate. And there’s something to be said for experience and context, which can’t be gained without serious study of the electorate and a lot of exposure to the numbers.

But many people — media folks, I’m looking at you — view polling as something to be conducted only by wizards with special glasses that enable them to read invisible text and divine immutable truths from books of crosstabs.

Wrong.

Alright then, when they show poll results on TV, or when some numbers are the top headline on Drudge, what should I look for to know if the numbers are bogus?

To protect yourself from buying into some of the junk floating out there, you first need to know as much as you can about the poll. Who was surveyed? What, specifically, did the questions ask? When did the poll field? How was the survey conducted?

The answers to these questions can give you some sense of whether the poll is helpful or useless. For instance: is the survey sample “people who saw the poll on a news website and responded”? I’ll take a wild guess and say that’s not representative of America at large. Did the poll field last week? If so, it’s possible the polling team missed an important political event just now registering in the electorate.
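
To make that checklist concrete, here’s a minimal sketch in Python. Everything in it is illustrative: the field names and the red-flag rules are my own shorthand for the questions above, not any industry standard.

```python
from datetime import date

# A hypothetical record of one poll's methodology. The fields mirror the
# questions above: who was surveyed, what was asked, when, and how.
poll = {
    "population": "likely voters",
    "question_text": "If the election were held today, for whom would you vote?",
    "field_end": date(2008, 5, 20),
    "mode": "live telephone",
    "sample_size": 800,
}

def red_flags(poll, today):
    """Return reasons to distrust a poll (illustrative rules, not a standard)."""
    flags = []
    if poll.get("mode") == "web opt-in":
        flags.append("Self-selected online respondents are not representative.")
    if not poll.get("question_text"):
        flags.append("Question wording not released: no way to judge what was asked.")
    if (today - poll["field_end"]).days > 7:
        flags.append("Field dates are stale: a recent event may not be reflected.")
    return flags

print(red_flags(poll, today=date(2008, 5, 22)) or "No obvious red flags. Keep digging anyway.")
```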

Often — and cable news networks are particularly guilty of this — polling numbers will be flashed on the screen without showing how the actual question was worded. Taking these numbers at their word is like taking a marriage proposal from a blind date. Before buying into any numbers, you at least have to know what question was being asked.

Finally, there’s the tricky subject of data weighting. Most people don’t really know that almost none of the polling numbers they’re seeing are raw numbers – and that’s not a bad thing. Pollsters weight data to match things they know to be true about the population, like gender and racial breaks, or for many pollsters, party identification.

In essence, you’re taking your data set and saying, “Let’s make this sample look exactly like the real population in terms of gender/race/party.” You’re adjusting the results to match the stuff you KNOW to be true so that everything else is more likely to be accurate. There are ongoing debates among pollsters about how to weight some identifiers (party identification, for example), so when evaluating a poll, check to see which weights the pollster decided to use.
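
Here’s a bare-bones sketch of the mechanics in Python, weighting on a single variable (gender) with made-up numbers. Real pollsters weight on several variables at once, often with iterative techniques like raking, but the core idea is the same: each respondent gets a weight equal to their group’s target share of the population divided by its share of the sample.

```python
# Illustrative numbers only: a raw sample of 1,000 that came back
# 60% female / 40% male, when the population is roughly 52% / 48%.
sample_counts = {"female": 600, "male": 400}       # what the sample looks like
population_share = {"female": 0.52, "male": 0.48}  # what we KNOW to be true

n = sum(sample_counts.values())

# weight = (target share of population) / (group's share of the sample)
weights = {g: population_share[g] / (c / n) for g, c in sample_counts.items()}
print(weights)  # women get ~0.87, men get 1.2

# Suppose 50% of women and 40% of men in the sample back some candidate:
support = {"female": 0.50, "male": 0.40}
unweighted = sum(sample_counts[g] * support[g] for g in support) / n
weighted = sum(sample_counts[g] * weights[g] * support[g] for g in support) / n
print(f"unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")  # 46.0% vs 45.2%
```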

In short: Find out as much as you can about the poll and see if anything just feels funny. A little common sense can go a long way in evaluating whether or not a poll is worth your time.

So these polls I see have a “margin of error.” Does that mean that the results are actually just a range, that the poll could be wrong by that amount if they took it again?

What margin of error means is that, at a certain confidence level, the actual “true” answer (how many people in the whole population feel a certain way) falls within that range of the answer the survey got. Put simply, when you see that “plus or minus 3.1%,” it means that if you actually went and asked every single person in the population the question, there’s a really good chance (called “the confidence level”) that the answer you’d get would be within 3.1 points of the survey result.

The confidence level for most surveys is 95%: it’s the probability that the “true” answer falls within the margin of error of the result you got from the survey. Basically, this means that if you re-did the survey 100 times, about 95 of those 100 surveys would produce a result within that margin of error of the “true” answer.
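
You don’t have to take that on faith. Here’s a quick simulation sketch (my own illustration, using made-up numbers that match the example below) that re-runs the same survey a thousand times against a known “true” answer and counts how often the result lands inside the margin of error:

```python
import random

TRUE_SHARE = 0.45   # pretend we secretly know the "true" answer
N = 1067            # respondents per simulated survey
MOE = 0.03          # the stated 95% margin of error for a sample this size

random.seed(2008)
trials = 1000
hits = 0
for _ in range(trials):
    # One simulated survey: N random people, each agreeing with probability 0.45.
    result = sum(random.random() < TRUE_SHARE for _ in range(N)) / N
    if abs(result - TRUE_SHARE) <= MOE:
        hits += 1

print(f"{hits / trials:.1%} of simulated surveys fell within ±3% of the truth")
# You should see something very close to 95%.
```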

Confused yet? Here’s an example of what I mean.

Let’s say you survey 1,067 Americans, meaning your margin of error is 3%. You get the results and it turns out that 45% say their favorite color is red. This means that there is a 95% chance that if you asked every person in the United States what their favorite color was, between 42% and 48% of people would say red was their favorite color.
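
For the formula-curious, the arithmetic behind that ±3% is easy to check yourself. A minimal sketch, using the standard formula for the 95% margin of error at the worst-case 50/50 split (the assumption behind most published margins); notice that the margin shrinks only with the square root of the sample size, which is exactly the point of the next paragraph.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of n respondents.

    p=0.5 is the worst case and what most published toplines assume;
    z=1.96 is the multiplier for 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1067)
print(f"n=1067: ±{moe:.1%}")  # ±3.0%
print(f"a 45% result implies {0.45 - moe:.1%} to {0.45 + moe:.1%}")  # 42.0% to 48.0%

# Quadrupling the sample only halves the margin of error:
for n in (300, 600, 1067, 2400, 9600):
    print(f"n={n:>5}: ±{margin_of_error(n):.1%}")
```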

What you need to know from this is simple: The more people you’re asking, the closer to reality your answer probably is.

There are lots of polls and pollsters out there. Which ones should I listen to?

As far as public polls go, your best bets are what I’d call “major media polls”: the ones conducted by cable news channels, broadcast networks, and newspapers. Few surveys are perfect, but these outfits tend to release more information about how they operate, so at least you know what you’re getting.

Pew and Gallup also produce an enormous amount of data and are usually good about giving access to questionnaires and methodological info.

Some colleges and universities like Marist and Quinnipiac also conduct surveys, and these can vary in reputability but are generally accurate.

Bad poll numbers are easy to find, but a well-crafted poll is often the best window anyone has into the mind of the electorate. And with a little bit of common sense, you won’t even need graphs, charts, or easels to sort out which is which.

Kristen Soltis works at a Washington-based polling firm. She also edits Doublethink’s Washington Planner blog. The views expressed here are her own, and are not necessarily the views of her firm or employer.