Tuesday, May 6, 2014

Rookie's Guide to Political Forecasting


You don’t need to be a whiz kid like Nate Silver to predict the next election.



FiveThirtyEight’s ambition may be to expand the scope of data journalism, but Silver’s famous prediction of every state’s winner in the 2012 election bolsters his reputation as a data hound.

Though he declares it an “overrated” accomplishment in the site’s opening manifesto, “What the Fox Knows,” it is why Washington eagerly awaits his contributions to the field of forecasting.

The site’s new model for the election will launch this summer, but Silver’s predecessors and contemporaries are not waiting on the sidelines. Their methods largely borrow from two traditions: handicappers and pollsters, who have already built predictive models to anticipate the political possibilities of the 2014 midterm elections.

Handicapping
In election predictions, Moneyball comparisons abound. Handicapping is more an art of analysis than a science of statistics.

“Cook is a scout,” says Sean Trende of RealClearPolitics. “Nate Silver is the guy sitting with the computer saying ‘No, you can’t make this selection on how his girlfriend looks, he doesn’t get on base and that’s what wins games.’ There’s room for both.”
Handicappers like The Cook Political Report, the Rothenberg Political Report and Sabato’s Crystal Ball assess candidates and polls to estimate which candidates have an advantage in a race.

Charlie Cook began handicapping congressional races in a newsletter in 1984. Today, he is an essential resource for political actors making decisions in elections.

“I don’t think of it as prediction as much as forecasting,” Cook says. “It’s almost like a meteorologist letting people know which way the winds are blowing. Then that can give you a sense of what’s coming.”

By interviewing candidates and evaluating constituencies, Cook filled a need for knowledge beyond opinion polls in elections.

While Cook wrote that not much will change in Congress in 2014, the human potential for an upset drives his fascination.

“You can’t predict a Richard Mourdock or Todd Akin with a model,” Cook says. “We are talking about human behavior here.”

History Sidebar
For 110 years, the straw poll, a survey taken of a gathered group or by mail, was the only gauge of public opinion. The Harrisburg Pennsylvanian conducted the first political poll in Delaware in July 1824.

However, this polling relied on the faulty assumption that the bigger the sample size, the better the poll.

From 1916 to 1932, Literary Digest correctly predicted five presidential victories in a row with its popular survey. But the magazine fell victim to its own success.


The survey—skewed by a wealthier and more likely Republican audience—wrongly predicted Alf Landon would beat Franklin Roosevelt in the 1936 election and the publication went out of business soon after.
At the same time, George Gallup conducted a more scientific survey with a demographic-based sample and predicted Roosevelt’s victory. Gallup soon developed a random-sample model that was more easily duplicated and spread to numerous newspapers, though not without its own occasional mishaps.


Polling Averages and Analysis
In 2002, RealClearPolitics launched a poll of polls, aggregating results from numerous pollsters, which became an indispensable means for political observers gauging public opinion.

“We run a simple average,” Trende says. “A fifth grader could do our calculations and everyone knows how we’re doing it, so there’s a basis for understanding.”
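That average really is fifth-grade math, and it can be sketched in a few lines of Python. The snippet below takes each candidate's share in every recent survey and computes the plain arithmetic mean; the poll figures and candidate labels are invented for illustration, not actual RCP data.

    # Poll-of-polls sketch: an RCP-style average is just the arithmetic mean
    # of a candidate's share across recent surveys. Figures are hypothetical.
    polls = [
        {"pollster": "Poll A", "gop": 46, "dem": 42},
        {"pollster": "Poll B", "gop": 45, "dem": 44},
        {"pollster": "Poll C", "gop": 47, "dem": 42},
    ]

    def simple_average(polls, candidate):
        """Average one candidate's share over every poll in the list."""
        return sum(p[candidate] for p in polls) / len(polls)

    gop_avg = simple_average(polls, "gop")
    dem_avg = simple_average(polls, "dem")
    print(f"GOP {gop_avg:.1f} - Dem {dem_avg:.1f} "
          f"(spread: {gop_avg - dem_avg:+.1f})")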

However, analysts like Trende dig deeper than that basic average today. His model recently pegged the Democrats’ likelihood of losing the Senate to Obama’s approval rating. He says Silver would be the first to admit his models are not magic.

“He’s taking a polling average and running a Monte Carlo average based off the odds.”
Trende explains that the model uses probability to assess the certainty of poll numbers.

“Let’s take the Arkansas Senate race,” says Trende, looking at the RCP average. “Tom Cotton vs. Mark Pryor is one of the hottest races in the Senate. The average has Cotton up by plus three and the margin of error is plus or minus two percent.”

Statisticians simulate thousands of elections with random numbers to see how often a poll result will translate to election returns.

“We can tell it to generate random numbers where Cotton shows at 45.7 percent in simulation and where Pryor is at 42.7 percent but I want to make sure that 95 percent of the numbers are within range,” Trende says.

Statistics classes call that range a “confidence interval,” meaning that in a normal distribution of random numbers, analysts can be 95 percent confident the real result falls within it.

“So we’ll run 5155 simulations and ask, ‘How many of these simulations does Cotton lead in?’ Of these 5155 simulations, Cotton leads in 4299. So we can say we’re 84 percent confident that Cotton is ahead.”
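A rough Python sketch shows what that exercise looks like in practice. It simulates the Cotton-Pryor margin directly: draw thousands of random outcomes around the three-point lead in the RCP average and count how often Cotton finishes ahead. The error model here is an assumption, not RCP's published method; a standard deviation of about three points on the margin is what makes the math land near Trende's quoted 84 percent, since real-world polling error tends to run wider than the stated sampling margin of error.

    import random

    # Hedged sketch of the Monte Carlo run Trende describes, not RCP's code.
    LEAD = 3.0          # Cotton's lead in the average (45.7 - 42.7)
    SIGMA = 3.0         # assumed std. deviation of the margin; chosen so the
                        # result lands near the quoted 84 percent
    SIMULATIONS = 5155  # same number of runs Trende cites

    random.seed(2014)   # fixed seed so the sketch is repeatable

    cotton_wins = sum(
        1 for _ in range(SIMULATIONS)
        if random.gauss(LEAD, SIGMA) > 0  # one simulated election margin
    )

    confidence = cotton_wins / SIMULATIONS
    print(f"Cotton leads in {cotton_wins} of {SIMULATIONS} simulations, "
          f"about {confidence:.0%} confidence that Cotton is ahead")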

Trende admits this can make political campaign results feel inevitable.

“Lay journalists hate sites like FiveThirtyEight because it doesn’t make for very interesting copy. There’s not a lot of potential for upsets.”

FiveThirtyEight’s Rebirth
“It takes a bit of the fun out of the fact that most races are not going to be that close,” says FiveThirtyEight’s Harry Enten. “But this information allows someone to donate to campaigns wisely. It determines where campaign committees focus their resources.”

Enten says it’s important to make these predictive models transparent to voters.

“We’re giving the public an idea of what’s going on,” Enten says. “What we are really trying to do is bring methods out in the public sphere.”

Enten wants to make data accessible. He says FiveThirtyEight wants to show how to parse methods to distinguish good polls from bad ones. He wants the audience to become familiar with the things that affect polls: what they measure (electability or approval?), how they collect data (cell phones or landlines?) and whom they ask (likely voters or registered voters?).
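That checklist can be made concrete with another small sketch. The Python snippet below tags each survey with what it measures, how it was conducted and whom it sampled, then keeps only the most comparable polls before averaging. The entries are invented, and this is a reader's exercise, not FiveThirtyEight's actual rating system.

    # Hypothetical poll-vetting sketch: filter surveys by what they measure,
    # how they were conducted and whom they sampled before averaging.
    polls = [
        {"pollster": "Poll A", "measure": "head-to-head", "mode": "live phone",
         "population": "likely voters", "margin": +4},
        {"pollster": "Poll B", "measure": "approval", "mode": "landline only",
         "population": "registered voters", "margin": 0},
        {"pollster": "Poll C", "measure": "head-to-head", "mode": "live phone",
         "population": "likely voters", "margin": +3},
    ]

    # Keep only head-to-head surveys of likely voters, the cut closest to
    # what actually happens on Election Day.
    comparable = [p for p in polls
                  if p["measure"] == "head-to-head"
                  and p["population"] == "likely voters"]

    avg_margin = sum(p["margin"] for p in comparable) / len(comparable)
    print(f"{len(comparable)} comparable polls, average margin: {avg_margin:+.1f}")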

As Silver’s team strives to account for each possible variable, Cook reminds us that the unknown is what keeps elections exciting and unpredictable.

“There’s a Buddhist temple in Kyoto that has a rock garden,” Cook says. “The interesting thing about it is you can see all 360 degrees around but you can never see every single rock. There’s always one material fact that you can’t possibly know. That keeps you humble in this business.”