My wife follows movies a lot more closely than I do. She thinks she can pick the Academy Award winners more accurately than I can. I took up the challenge, knowing that I could visit a few prediction market sites, like HubDub and HSX. I'm writing this portion of the blog before the Oscars take place. I also wrote the title before the results were in.
My picks were the front-runners in each award category, based on the market predictions on Saturday morning. I should note that this took me all of about five minutes (compared with my wife’s hours and hours of reading and actually watching all of the movies).
Now for the results…
I’m happy to report that the prediction markets for this year’s Oscars were 100% accurate! I wasn’t very surprised, really, but my wife is still very skeptical about prediction markets. How can this be?
The HSX prediction markets were not very good at picking the winners of the screenplay awards. Inglourious Basterds was "supposed" to win Best Original Screenplay (52.24%), but The Hurt Locker (25.72%) did win. Another oops: Up in the Air was supposed to win Best Adapted Screenplay (63.16%), but Precious did win (and it was only expected to win 7.5% of the time)! At this point, my wife is gloating about how crappy these prediction markets are at picking winners. While they were handing out a bunch of lesser awards, I tried to explain to her that the prediction markets were still perfectly accurate, even though a long shot actually won.
I explained to her the concept of calibration, and how the markets were really accurate, because they were not picking winners with a 100% certainty. In fact, the markets’ failures were validation that they were, in fact, accurate. She thinks I’m an idiot (about prediction markets).
Up won for Best Animated Feature. It was expected to win 98 out of 100 times (if it could somehow be nominated 100 times, that is). Christoph Waltz won Best Supporting Actor with an 87% expectation. In these cases, the markets were both "accurate" and useful. My wife's not impressed. Everyone picked those categories, apparently. There were no other surprises in the Oscar Awards.
Essentially, when a prediction market picks Mo’Nique to win the Best Supporting Actress Award with an 86% likelihood, she would be expected to win the award 86 times out of 100 Oscar ceremonies. Of course, it isn’t possible to nominate her for the same role (along with the other nominees) every year for 100 years, to test the calibration of the market. However, if the market were well-calibrated, Mo’Nique would lose the Oscar 14 times out of 100. The market will still be considered “accurate” but fail to predict the winner when she loses. Expressed another way, when Mo’Nique loses, it helps validate the accuracy of the market (so long as she loses only 14 times in 100).
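For readers who like to see the idea in code, here is a minimal sketch of the calibration argument above. The simulation is hypothetical (we obviously can't rerun the ceremony), but it shows how a market that assigns an 86% probability can be perfectly calibrated while still "failing" to pick the winner about 14 times in 100:

```python
import random

random.seed(42)

def simulate_calibration(p, trials=100_000):
    """Simulate a perfectly calibrated market: an outcome given
    probability p actually occurs a fraction p of the time."""
    wins = sum(random.random() < p for _ in range(trials))
    return wins / trials

# The market gives Mo'Nique an 86% chance. Replaying the "ceremony"
# many times, a well-calibrated market should show a win rate close
# to 0.86 -- which means a "wrong pick" rate of roughly 0.14.
observed = simulate_calibration(0.86)
print(f"observed win rate:  {observed:.3f}")
print(f"'wrong pick' rate:  {1 - observed:.3f}")
```

The point of the sketch is exactly my argument to my wife: the 14% of runs where the favorite loses are not evidence that the market is broken; they are what a well-calibrated 86% forecast is supposed to produce.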
Unfortunately, we don’t know which 14 of the 100 trials will be losses. Consequently, we are going to be disappointed when the losses occur. This is why my wife is skeptical about prediction markets. In a horse race, like the Oscars, coming close to winning means nothing. Apparently, coming close means you’re an idiot.
We tied in our correct picks. However, I “won”, because I made my picks in five minutes and used the time I saved to work on my golf game.
As a side note, the predictions from HSX and HubDub were consistent: for virtually every category, the two markets' expected probabilities were within 5% of each other. Not bad, I suppose.
Though we can’t prove it, I’ll stand by my title and state that the prediction markets were 100% accurate. But I’ll qualify this by saying they were not very useful. If I can’t convince my wife that prediction markets are useful (she’s a corporate executive), I don’t see much of a future for enterprise prediction markets – at least not for the “horse race” types of markets.