Betting and Elections

The winner of yesterday’s election was … statistics.

While bloviating pundits went on and on about how close the election was going to be, some people actually used statistics to forecast the outcome. Perhaps the most famous election quant is Nate Silver. (I'll be writing more about Nate Silver in a few weeks, in a post about his book, The Signal and the Noise. Despite my admiration for Silver, I think he is a bit confused about the difference between Bayesian and frequentist inference. He is a raving frequentist, parading as a Bayesian, as I'll explain in a few weeks.)

Silver uses all the available polls together with other background information, and then applies statistical methods to combine the information and make predictions. Over at Simply Statistics, there is a nice plot, which I reproduce here:

This plot is from “Simply Statistics”

The plot shows the voting percentage versus Silver’s prediction. Pretty impressive. Of course, Silver wasn’t the only one using statistical methods to make election predictions. See the Washington Post for some more.

Silver caused some controversy when he responded to criticisms of his predictions by offering to bet. Margaret Sullivan, the New York Times ombudsman (or ombudswoman, I guess), criticized Silver for offering the bet. But as Alex Tabarrok argued, offering to bet is a good idea. As Tabarrok puts it:

“A Bet is a Tax on Bullshit”

(This is one of my favorite quotes of the year.) Tabarrok goes on to say: “In fact, the NYTimes should require that Silver, and other pundits, bet their beliefs.”

I agree with this. Imagine if every pundit had to bet part of their salary every time they made a prediction.

I would go a step further and say that every politician should have to put their money where their mouth is. After all, most public policy consists of bets made with other people’s money. If the president thinks that investing in Solyndra is a good bet, then he should have to put up some of his own money.

Betting is a great test of one’s beliefs. I applaud Nate Silver for standing behind his predictions with the offer of a bet. We need more of that.

Edit: My colleague Andrew Thomas has a nice post about this: see here.


## 15 Comments

On reading the post, I was wondering if the “A Bet is a Tax on Bullshit” idea can be used in some way to obtain some quality control over conference submissions.

Great idea. A good replacement for refereeing.

(Or at least, a different kind of refereeing.)

Technically I think that already exists in the form of rejected grants and poor career prospects, but perhaps a tighter feedback loop might have some nicer properties.

Nate Silver & Princeton Election Consortium founder Sam Wang were interviewed on NPR’s Science Friday two weeks ago. At that time, Nate’s forecast was about 70%, while Sam’s was about 90%. To demonstrate non-partisan faith in his model, Nate should have asked Sam for 9-1 or even 8-1 odds on Romney and taken them.

The value function for humans isn’t linear in dollars…
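To see why such a bet would have been attractive under Nate's own model, here is a quick expected-value calculation (a hypothetical sketch using the 70%/90% figures from the comment above, and assuming risk-neutral dollars, which the reply about value functions rightly questions):

```python
# Expected profit of a $1 bet on Romney at `odds_against`-to-1,
# evaluated under a forecaster's own probability for Romney winning.
def expected_profit(p_romney, odds_against):
    """Win odds_against dollars with prob p_romney; lose $1 otherwise."""
    return p_romney * odds_against - (1 - p_romney) * 1

nate = expected_profit(0.30, 9)   # Nate: P(Obama) = 0.70
sam = expected_profit(0.10, 9)    # Sam: P(Obama) = 0.90, so 9-1 is his fair line

print(nate)  # 2.0: a $1 bet is worth +$2 in expectation under Nate's model
print(sam)   # 0.0: the same bet is exactly fair under Sam's model
```

So at Sam's fair odds, the bet costs Sam nothing in expectation (by his own lights) while paying Nate handsomely (by his), which is exactly why disagreeing forecasters should be willing to bet.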

I wonder what you think of Robin Hanson’s futarchy idea.

As you might guess, I am indeed a fan of the futarchy idea.

I’m currently teaching a Bayes class to undergrad stat majors. For their final project, I had them do a Bayesian forecast of the election with any data of their choosing. All of the students who used state-wise polling data predicted a >95% probability of an Obama victory (with data up to a week ago). My own model yielded a 99.0% posterior probability of an Obama victory (http://jwrteaching.blogspot.com/2012/11/predicting-2012-presidential-election.html). A few of the students were stumped as to why they couldn’t get their models to produce anything near the supposed closeness of the race spewed by the mainstream media. It’s clear now why!
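A minimal sketch of the kind of simulation such a class project might use (not the commenter's actual model): given each state's posterior probability of an Obama win and its electoral votes, estimate the probability of reaching 270 electoral votes by Monte Carlo, assuming states are independent.

```python
import random

def win_probability(states, n_sims=100_000, needed=270, seed=0):
    """Estimate P(total electoral votes >= needed) by simulation.

    states: list of (p_win, electoral_votes) pairs, treated as independent.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        ev = sum(votes for p, votes in states if rng.random() < p)
        wins += ev >= needed
    return wins / n_sims

# Toy inputs (illustrative only, not real 2012 numbers):
# (P(Obama wins the state), electoral votes)
toy_states = [(0.99, 237), (0.75, 29), (0.70, 18), (0.50, 18), (0.01, 236)]
print(win_probability(toy_states))
```

With real state-level posteriors plugged in, this is the step that turns fifty per-state probabilities into a single headline win probability.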

Also, there is an easy way to place monetary bets on election predictions, it’s called etrade. Rumor is, somebody made $20,000 on election day by shorting Romney on etrade.

ahem, Intrade, not etrade. http://www.intrade.com/v4/home/

Great idea for a class project

I’m pretty sure you and your class were overconfident. Here’s a great evaluation comparing (so far) 538 and votamatic performance:

http://www.acthomas.ca/comment/2012/11/538s-uncertainty-estimates-are-as-good-as-they-get.html

Can you send your class’s project to Prof. Thomas to add to his study?

I gave my results to Andrew a few days ago and he said that they were pretty similar to votamatic’s. I think a lot of those differences are being driven by the non-swing states, for which there is not a lot of polling data (often zero polling data). I just did a blog post comparing my predictions to 538’s on 13 swing states: http://jwrteaching.blogspot.com/2012/11/election-results-fivethirtyeight-and.html

Upshot: both of our predictions are biased toward the Republicans. 538’s standard errors are way too large (all 13 states fall within +/- 1 standard deviation), and I’m killed by the bias.
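A quick back-of-the-envelope check of the “standard errors are way too large” claim: if the intervals were calibrated, each state's result would fall within one standard deviation roughly 68% of the time, so all 13 doing so at once would be a rare event (treating the states as independent, which the next comment questions):

```python
# Probability that 13 independent, well-calibrated predictions all land
# within +/- 1 standard deviation (each has probability ~0.6827):
p_all_13 = 0.6827 ** 13
print(round(p_all_13, 3))  # -> 0.007
```

Under calibration you would expect roughly 9 of 13 states inside one standard deviation, not all 13, which is consistent with the commenter's reading that the stated errors are too wide.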

I like your blog post!

I think there’s something missing in your analysis: you’re treating the states as independent outcomes. I suspect that Nate includes an uncertainty for nation-wide bias that he could only make explicit by publishing his covariance matrices. Since Nate only posted his diagonal entries, it makes his model look more conservative than it really is.

Thanks! That’s a good point about covariance between the states. Indeed, I treated them as independent in computing the Bayes Factor. I do have covariance info from my model because I treat the state-level parameter as the sum of a national parameter and a state-wise deviation, so the swing states tend to have mildly strong positive correlations. I assume that Silver’s model also induces positive correlations, which he doesn’t publish.
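The effect of a shared national term can be sketched directly (an assumed error structure for illustration, not Silver's or the commenter's actual model): each state's margin is a mean plus a shared national swing plus independent state noise, and the shared term makes state outcomes positively correlated, which changes joint probabilities such as sweeping a set of states.

```python
import random

def sweep_probability(margins, sigma_nat=0.02, sigma_state=0.03,
                      shared=True, n=50_000, seed=1):
    """Estimate P(win every state) under a national-plus-state error model.

    shared=True: one national swing per simulation, common to all states.
    shared=False: each state gets its own draw, so states are independent.
    """
    rng = random.Random(seed)
    sweeps = 0
    for _ in range(n):
        nat = rng.gauss(0, sigma_nat)
        won_all = True
        for m in margins:
            swing = nat if shared else rng.gauss(0, sigma_nat)
            if m + swing + rng.gauss(0, sigma_state) <= 0:
                won_all = False
                break
        sweeps += won_all
    return sweeps / n

# Three lean-Obama states, each with a +2% mean margin:
corr = sweep_probability([0.02, 0.02, 0.02], shared=True)
indep = sweep_probability([0.02, 0.02, 0.02], shared=False)
print(corr > indep)  # a sweep is likelier when the errors are correlated
```

This is why publishing only the diagonal of the covariance matrix understates how aggressive a model really is: the off-diagonal terms move the joint probabilities even when the per-state numbers look identical.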

This and your post below are very thoughtful for an undergraduate course in statistics, especially the separation of variation and bias. At some point it would be nice to hear how the students dealt with it.

## 2 Trackbacks

[…] to know which ones he really defends. In any case, this isn’t the place to discuss that now (Larry Wasserman went so far as to say he will show Nate himself that he is not a Bayesian, but rather a raving frequentist parading as a Bayesian. Let’s see what comes of it […]

[…] but here is the Normal Deviate’s post about statistics and Nate […]