Posted by: Paul Hewitt | March 8, 2010

Oscars Prediction Markets Get it Right

My wife follows movies much more closely than I do.  She thinks she can pick the Academy Award winners more accurately than I can.  I took up the challenge, knowing that I could visit a few prediction market sites, like Hubdub and HSX.  I’m writing this portion of the blog before the Oscars take place.  I also wrote the title before the results were in.

My picks were the front-runners in each award category, based on the market predictions on Saturday morning.  I should note that this took me all of about five minutes (compared with my wife’s hours and hours of reading and actually watching all of the movies). 

Now for the results…

I’m happy to report that the prediction markets for this year’s Oscars were 100% accurate!  I wasn’t very surprised, really, but my wife is still very skeptical about prediction markets.  How can this be?

The HSX prediction markets were not very good at picking the winners of the screenplay awards.  Inglourious Basterds was “supposed” to win Best Original Screenplay (52.24%), but The Hurt Locker (25.72%) won.  Another oops: Up in the Air was supposed to win Best Adapted Screenplay (63.16%), but Precious won (and it was only expected to win 7.5% of the time)!  At this point, my wife is gloating about how crappy these prediction markets are at picking winners.  While they were handing out a bunch of lesser awards, I tried to explain to her that the prediction markets were still perfectly accurate, even though a couple of long shots actually won.

I explained to her the concept of calibration, and how the markets were still accurate, precisely because they were not picking winners with 100% certainty.  If anything, the markets’ failures were validation that they were well calibrated.  She thinks I’m an idiot (about prediction markets).

Up won for Best Animated Feature.  It was expected to win 98 out of 100 times (if it were to be nominated 100 times, that is).  Christoph Waltz won Best Supporting Actor with an 87% expectation.  In these cases, the markets were both “accurate” and making useful predictions.  My wife’s not impressed.  Everyone picked those categories, apparently.  There were no other surprises in the Oscar Awards.

Essentially, when a prediction market picks Mo’Nique to win the Best Supporting Actress Award with an 86% likelihood, she would be expected to win the award 86 times out of 100 Oscar ceremonies.  Of course, it isn’t possible to nominate her for the same role (along with the other nominees) every year for 100 years to test the calibration of the market.  However, if the market were well-calibrated, Mo’Nique would lose the Oscar 14 times out of 100.  The market would still be considered “accurate”, even though it fails to predict the winner whenever she loses.  Expressed another way, when Mo’Nique loses, it helps validate the accuracy of the market (so long as she loses only 14 times in 100).

Unfortunately, we don’t know which 14 of the 100 trials will be losses.  Consequently, we are going to be disappointed when the losses occur.  This is why my wife is skeptical about prediction markets.  In a horse race, like the Oscars, coming close to winning means nothing.  Apparently, coming close means you’re an idiot. 
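To make the calibration idea concrete, here is a rough sketch (in Python, using made-up prediction data rather than the actual HSX figures) of the check you would run if you did have lots of repeated trials: bucket the predictions by probability and compare each bucket’s predicted chance of winning with how often the favourites in that bucket actually won.

```python
from collections import defaultdict

# Hypothetical (market probability, outcome) pairs: the probability the market
# gave a nominee and whether that nominee actually won (1) or lost (0).
results = [
    (0.86, 1), (0.98, 1), (0.87, 1), (0.52, 0), (0.63, 0),
    (0.26, 1), (0.08, 1), (0.40, 0), (0.45, 1), (0.91, 1),
]

# Bucket predictions into 10%-wide bins and tally outcomes per bin.
bins = defaultdict(lambda: [0, 0])  # bin index -> [wins, total]
for prob, won in results:
    b = min(int(prob * 10), 9)      # e.g. 0.86 falls in the 80-90% bucket
    bins[b][0] += won
    bins[b][1] += 1

# In a well-calibrated market, the observed win rate in each bucket should be
# close to the bucket's predicted probability.
for b in sorted(bins):
    wins, total = bins[b]
    print(f"{b * 10:2d}-{(b + 1) * 10}% bucket: {wins}/{total} won "
          f"({wins / total:.0%} observed)")
```

With only a couple of dozen Oscar categories each year, those buckets never fill up, which is exactly why we can’t really test the calibration of these markets.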

We tied in our correct picks.  However, I “won”, because I made my picks in five minutes and used the time I saved to work on my golf game.

As a side note, the predictions on HSX and Hubdub were consistent with each other.  Virtually all comparable markets on the two exchanges generated probabilities within 5% of one another.  Not bad, I suppose.
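Checking that kind of cross-exchange agreement is straightforward; here is a toy sketch with hypothetical probabilities (the category names and figures below are illustrative, not the actual HSX or Hubdub numbers):

```python
# Hypothetical favourite probabilities from two exchanges, keyed by category.
# (Illustrative numbers only -- not the actual 2010 HSX or Hubdub figures.)
hsx = {"Best Picture": 0.72, "Best Director": 0.81, "Best Actor": 0.88}
hubdub = {"Best Picture": 0.70, "Best Director": 0.84, "Best Actor": 0.86}

# Flag any category where the two markets disagree by more than 5 points.
for category in hsx:
    gap = abs(hsx[category] - hubdub[category])
    status = "consistent" if gap <= 0.05 else "disagrees by more than 5%"
    print(f"{category}: gap of {gap:.0%} ({status})")
```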

Though we can’t prove it, I’ll stand by my title and state that the prediction markets were 100% accurate.  But I’ll qualify this by saying they were not very useful.  If I can’t convince my wife that prediction markets are useful (she’s a corporate executive), I don’t see much of a future for enterprise prediction markets – at least not for the “horse race” types of markets.


Responses

  1. Few surprises indeed. I used the predictive analysis provided by the powerful search tool TipTop (http://FeelTipTop.com). My accuracy was close to 70% and I got most of the major categories right. Crowdsourcing does indeed work!

  2. 70% seems to be quite low, given that most categories had overwhelming favourites. Sorry to burst your bubble, but that’s not good enough under the circumstances.

    The problem with this type of prediction (i.e. the “horse race” type) is that if you’re almost right, you’re completely wrong. There are no prizes for second place in these events.

    Click “calibration” on the tag list and read.

  3. Thanks, Paul. I did not give myself credit for anything that was not exactly right. Also, by what appears to be your definition of accuracy, nearly any prediction is 100% correct if there is only 1 trial. Or, am I missing something?

  4. Shyam, that’s not quite correct. Think of Oscar prediction markets as being similar to horse races. There is a substantial body of evidence showing that, if the betting indicates a horse has a 40% chance of winning a race, those 40% horses tend to win about 40% of their races. When this holds true, the market is considered well-calibrated with the distribution of actual outcomes.

    The same concept applies to the Oscar prediction markets. The only problem is that we don’t really know whether these markets are well-calibrated. My saying they were 100% accurate was really a bit of a joke. There are thousands of horse races that can be used to test calibration, but there are very few Oscar Awards with which to test the calibration of these prediction markets. I was simply assuming (to make a point) that the markets were calibrated. If that is true, the markets are “accurate”.

    I hope I was clear that, in these types of “horse race” markets, calibration is not enough for the markets to be useful for decision-making.

    If you are interested, keep reading some of my older articles on accuracy and calibration.

  5. Thanks for the clarification, Paul. I think I now understand better what you meant. Check out our new blog post at http://ftt.nu/Dm1j8 to see what we make so far of our success at making these predictions.

  6. I checked out your blog post. It would be interesting to find out why your prediction markets provided results very different from those on Hubdub and HSX (not for all categories, but for more than I would have expected). On the other exchanges, many of the categories had favourites with 67%+ expectations, and almost all of them won. Some of your market predictions were not consistent with these. The question is why?

  7. Useful post,
    I bookmarked your blog, Sir. 🙂

    Best regards,
    Alfin

    http://freewallpaper3d.blogspot.com

  8. […] using prediction markets to forecast who will win what, as determined by a panel, is pointless.  Remember last year’s markets?  The Olympic site markets?  Britain’s Got Talent?  It really is a fool’s pursuit to […]

  9. […] Public Prediction Markets Fail, Oscars Prediction Markets Get it Right, and The Oscars 2011 – The Good, The Bad & The Ugly all speak to major problems with […]

