Saturday, February 9, 2008
It’s Saturday, less than a week after Super Tuesday. There are Democratic caucuses this weekend which haven’t happened yet, three primaries next week, and two really major ones, in Texas and Ohio, in the first week of March. The race for the Democratic nominee for President is razor close and far from over. MSNBC, the cable news channel, is reporting a poll that shows Senator Obama several points ahead of Senator McCain, were they to run against each other, but Senator Clinton even with McCain. Is there anyone reading this who believes the news of this poll won’t influence some undecided Democratic voters to support Barack Obama, fearing loss of the Presidency to the Republican candidate? Well, suppose this poll is wrong? Suppose it doesn’t accurately represent the mindset of the electorate today, let alone ten months from now which in politics is forever. What we’ve done, to no small extent, is turn over the election of the Democratic nominee to the news media which report these polls.
It’s election season, and polls are everywhere – on regular television and cable news, in the papers and magazines. I’m used to it, but I’ve been wondering lately: What are the media doing in the business of reporting voter preferences? (Brace yourself. This is going to be more of an academic piece than I usually write.)
We know why the candidates take polls. However questionable, polls help the candidates plan and test the effectiveness of their campaigns. The candidates know the polls can be flawed and misleading, but they take them anyway because they are an important tool, not the only tool by any means, but an important source of information they are desperate to know. The better they can understand voter reaction to their candidacy, and to the competition, the more effectively they can craft their message and election strategy. It’s market research.
Journalists are supposed to bring us the news, to tell us what’s happening, without bias and as thoroughly as their particular medium will allow. As a free society, we believe that a better informed electorate will make more intelligent choices among the people who aspire to run our government, and the programs we will demand and support. Unfortunately, while polling is certainly a science – an amalgam of applied math (statistics) and the social sciences – it’s an imperfect one.
Think about it for a moment, about how hard it is to poll any electorate – even about something so apparently straightforward as “If the election were held today, for which candidate would you vote?” If only it were just that simple. Are the people you survey likely voters? How committed are they to whatever opinions they express? How volatile is their thinking, and what precisely could make them change their minds? Are they representative of all the people who will be voting, or have you inadvertently overlooked some key element of the electorate? What questions have you mis-phrased or omitted that might have suggested a different interpretation of your subjects’ intentions? Are they confused, or purposely telling you one thing while they’re thinking another? “Would I have a problem voting for a woman or African American for President? Of course not. Don’t be ridiculous.” In fact, survey researchers are trained not to believe everything they hear, to use control questions to test the accuracy and veracity of their subjects’ responses.
Electorate polling involves highly subjective questions, the answers to which are understandably open to interpretation. Even the most skilled pollsters can be mistaken – let alone the journalists whose knowledge of the survey process is limited and second hand, at best. (When was the last time you saw the actual pollster discussing the results of his surveys on television?) It’s no secret that polls are imprecise and sometimes outright wrong. The media know this, but use them anyway – because they attract viewers? No wait, maybe that’s unfair. Polls are, after all, data which purport to describe how a campaign is going, and are arguably newsworthy from that point of view. The problem is that journalists – not commentators, but the ones who claim to be reporting the news – are supposed to tell us what’s actually happening. They’re supposed to be giving us facts, the tested validity of which meets some minimum standards. Reporting the results of pre-election polling, by its very nature and the imperfect science on which it is based, goes too far by blurring the distinction between factual reporting and speculation. It’s basically fortune telling by men and women in suits.
Accurate reporting is a responsibility which reputable journalists take seriously, which is why they’re so careful to verify what they tell us. Admirable, to be sure, but would they present other important information based on no more certainty than the reliability of, let’s say, the polling that preceded the recent New Hampshire primary? Yes, the media were apologetic after the primary, not just for the polls they commissioned and reported, but also for all the interpretation, extrapolation and conclusions they offered ahead of the primary – with what impact on the votes their viewers would cast? The apology was fleeting, and apparently of no particular consequence.
Some newscasters seem to think it’s okay to show us polls as long as they point out that there is a margin of error, that is, that the findings they are showing us are correct within a handful of percentage points either way. Even if the people watching are paying attention, and adjust their conclusions accordingly, which I doubt, they’re missing the point. “Margin of error” refers only to how closely a sample is likely to mirror the population from which it was drawn. It’s a purely statistical concept. It doesn’t mean that the population from which the sample was selected is the right one, or that the questions asked were well designed, or interpreted properly, or that the findings of the poll today will be relevant tomorrow when the actual polls open. In many of these cases, the undecided voter bloc alone is large enough to change the outcome of the election, even if the pre-election polls are dead on, which they’re not.
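For readers who like to see the arithmetic: the margin of error the anchors cite comes from a simple textbook formula for sampling error alone. Here’s a minimal sketch in Python (the function name and the 1,000-respondent example are mine, for illustration) showing the standard 95% margin of error for a polled proportion:

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate 95% margin of error for a polled proportion.

    Note what this does NOT account for: whether the sample frame
    is right, whether the questions were well designed, whether
    respondents answered honestly, or whether opinions shift
    between the survey and election day.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A typical national poll of about 1,000 respondents:
print(round(margin_of_error(1000) * 100, 1))  # about 3.1 points
```

That familiar “plus or minus three points” is nothing more than this square-root calculation. Every other source of error the paragraph above lists sits entirely outside it.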
Warning us about the margin of error is just avoiding the question. Are the legitimate media applying the same rigorous journalistic standards to the polls they present, as they do when they vet the other information they report?
The polls may tell us what the electorate is currently thinking, but let’s be honest. The primary reason pollsters ask voters who they support before an election is to predict the results. Since when are journalists supposed to be in the business of predicting anything? Their job is to tell us what is happening, what they know for sure, and let us draw our own conclusions.
It wasn’t that long ago that the television networks were reporting exit polls before the precinct doors shut locally, and election returns from one state before the polls closed in other time zones. Maybe it’s time they reconsidered their policy of reporting polls for the same reasons.