Dayton, Emmer, Horner: Who’s on first in battle of the polls?


Dayton is ahead in the Minnesota Poll, Dayton and Emmer are virtually tied in the KSTP/Survey USA and Rasmussen Reports polls, and Horner is either far behind or moving up, depending on whom you ask. What do the poll numbers mean, why are they so different, and why are they important?

Yesterday’s Minnesota Poll results in the Star Tribune showed:

In the three-way race, Dayton leads Emmer 39 to 30 percent, nearly unchanged from a July Minnesota Poll. Horner is at 18 percent, up from 13 percent in July.

The poll was in sharp contrast to the KSTP/Survey USA poll from mid-September:

Today, Dayton gets 38%, Emmer 36%; Dayton’s nominal 2-point advantage may or may not be statistically significant. But: Independence Party candidate Tom Horner today gets 18% of the vote, stronger yet among older, more reliable voters, complicating the calculus either major-party campaign needs to capture the open seat.

The Rasmussen Reports poll shows two different results: Emmer 42 percent, Dayton 41 percent and Horner 9 percent, or Emmer 36 percent, Dayton 34 percent and Horner 18 percent.

What they mean

Why are the polls so far apart? And how can you evaluate their results? One of the key questions is methodology. Pollsters do not talk to every person in the state. They may focus on a sample of the general population or a sample of likely voters. They may use personal interviews, phone interviews with actual conversations, or phone interviews with automated callers.

Methodology: The Survey USA poll used automated dialing and focused on likely voters. Its methodology is detailed here. Their interviews were “conducted by telephone in the voice of a professional announcer,” which sounds like a non-human interaction to me. One of the big issues: this method does not reach cell-phone-only voters.

The Minnesota Poll methodology information page refers to “interviews,” which sounds like actual human beings did the interviewing. They interviewed both landline and cell phone users.

The Rasmussen Reports poll offers the most complete description of its methodology, which sounds a lot like Survey USA's:

All Rasmussen Reports’ survey questions are digitally recorded and fed to a calling program that determines question order, branching options, and other factors.

Polling sample: All of the polls compiled their results from “likely voters.” The Survey USA poll started with 1000 adults and identified 656 likely voters. The Minnesota Poll started with 1227 interviews and identified 949 likely voters. Rasmussen polled the fewest, only 500 likely voters.

Margin of error: Robert Niles has a relatively simple discussion of margin of error, a statistical concept that legitimate pollsters always disclose and readers and reporters tend to ignore. Niles explains that a four-point margin of error “means that if you asked a question from this poll 100 times, 95 of those times the percentage of people giving a particular answer would be within 4 points of the percentage who gave that same answer in this poll.”
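Niles's definition can be checked with a quick simulation. The sketch below uses made-up numbers (a hypothetical candidate with 39 percent true support and polls of 600 respondents, not figures from any of the polls above): it draws many simulated polls and counts how often the result lands within the margin of error. About 95 times out of 100, it does.

```python
# Monte Carlo illustration of the 95% margin of error.
# Numbers are hypothetical, not taken from any actual poll.
import math
import random

random.seed(1)

p = 0.39      # assumed "true" support in the population (illustrative)
n = 600       # respondents per simulated poll
moe = 1.96 * math.sqrt(p * (1 - p) / n)   # standard 95% margin of error

trials = 5000
hits = 0
for _ in range(trials):
    # Each simulated poll asks n random voters; count the share supporting p.
    sample = sum(random.random() < p for _ in range(n)) / n
    if abs(sample - p) <= moe:
        hits += 1

print(f"margin of error: {moe:.3f}")
print(f"share of polls within the margin: {hits / trials:.2%}")
```

Run it and roughly 95 percent of the simulated polls fall within the margin, which is exactly the claim Niles is unpacking.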

The Star Tribune poll cites a margin of error of plus or minus 4.1 percentage points. That means, for example, that Dayton's 39 percent could be as high as 43 percent or as low as 35 percent. Strictly speaking, the margin of error on the gap between two candidates is larger than the margin on either individual number (roughly double it), but Dayton's nine-point lead over Emmer clears even that higher bar.

The Survey USA poll had a margin of error of plus or minus 3.9 percent. The separation between Dayton and Emmer in the Survey USA poll is less than the margin of error. Rasmussen, though polling only 500 likely voters, still claimed a margin of error of four percent.
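The stated margins track the sample sizes fairly well. Here is a back-of-envelope check using the standard simple-random-sample formula at 50 percent support (the worst case); published figures can differ slightly because pollsters adjust for weighting and design effects, so this is a sketch, not a recomputation of any poll's actual methodology.

```python
# Rough 95% margins of error for the three sample sizes cited above,
# using the simple random sample formula at p = 0.5 (worst case).
import math

def margin_of_error(n: int) -> float:
    """95% margin of error, in percentage points, for a sample of n."""
    return 100 * 1.96 * math.sqrt(0.25 / n)

for name, n in [("Survey USA", 656), ("Minnesota Poll", 949), ("Rasmussen", 500)]:
    print(f"{name}: n={n}, about +/- {margin_of_error(n):.1f} points")
```

By this formula, 656 likely voters gives about 3.8 points and 949 gives about 3.2, close to the published figures, while 500 gives about 4.4, a bit wider than the 4 percent Rasmussen claims.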

Why are they so different?

It’s possible that voter opinion shifted dramatically in the ten days between the Survey USA poll and the Minnesota Poll, but that’s not likely. First, there were no major developments in the campaign. Second, both polls are taken repeatedly throughout the election season and neither shows a dramatic shift at this time.

Another possible explanation: the ground has shifted under the pollsters. Pollsters rely heavily on telephone interviews, and that may have worked better when most U.S. households had landlines. Now growing numbers of people are cell-phone-only users. It’s more time-consuming and expensive to locate and poll cell-phone-only users. Are they just like landline users, so that a poll of landline users will also reflect their views? Probably not. Young people and lower-income people tend to be overrepresented in the cell-phone-only group.

So which poll is more accurate? One way to tell is to look at their track records of accuracy in past elections. (Unfortunately, that information is not readily available.) Another way to tell is to wait and see which proves more accurate in this election.

Why are the polls important?

Ultimately, the poll that counts is the one on election day. Before that, the polls describe the “horse race.” The candidates and campaigns follow them for clues about whether their message is getting across, and sometimes adjust their message according to the poll results. (For an extended discussion of the problems with the Rasmussen poll in particular, and polling in general, see Eric Black’s MinnPost article.)

Polls are also important because of the bandwagon effect. Many voters want to vote for a winner, or think that a leader looks better. In the gubernatorial race, that's a particularly important factor for third-party candidate Tom Horner. If poll results show him trailing far behind the two major-party candidates, some voters will choose not to vote for him, on the theory that a vote for a distant third-place candidate is a wasted vote with no impact on deciding the winner. But if poll results show him closing the gap, that alone is likely to convince voters to take him seriously.

Of course, all of the poll results could be wrong—as they were 12 years ago, when Jesse Ventura came from behind to win on election day.