We already know, or should know, that using prediction markets to forecast who will win what, as determined by a panel, is pointless. Remember last year’s markets? The Olympic site markets? Britain’s Got Talent? It really is a fool’s pursuit to try to out-guess the people who actually make the choice!
So, knowing that, at best, the Oscar prediction markets are mildly amusing diversions, I present a few interesting observations.
When we use prediction markets to make decisions, we usually decide based on the outcome the market deems most likely. Consequently, when we rely on Oscar prediction markets, we select the actor or movie that the market gives the highest likelihood of winning. As I have written before, in discrete markets you will often be disappointed if you rely on prediction markets this way.
Prediction markets at Inkling and HSX had a few amazing successes! Yes, once again, prediction markets have proven to be remarkably accurate predictors of slam-dunk outcomes. We can now say, at least anecdotally, that if an Oscar prediction market gives an outcome at least a 70% chance of occurring, we can rely on the market to pick the correct outcome.
Here are the markets that predicted an outcome with a 70%+ probability of occurring:
- The King’s Speech wins Best Picture (71.28% on HSX)
- Colin Firth wins Best Leading Actor (89.36% on HSX)
- Christian Bale wins Best Supporting Actor (77.92% on HSX)
- Natalie Portman wins Best Leading Actress (81.04% on HSX)
- Toy Story 3 wins Best Animated Feature Film (94.82% on Inkling)
- The Social Network wins Best Film Editing (76.29% on Inkling)
- The Wolfman wins Best Makeup (70.74% on Inkling)
- Inception wins Best Sound Editing (76.83% on Inkling)
- Inception wins Best Sound Mixing (77.53% on Inkling)
- Inception wins Best Visual Effects (93.51% on Inkling)
- The Social Network wins Best Adapted Screenplay (74.16% on Inkling)
- The King’s Speech wins Best Original Screenplay (71.52% on Inkling)
- The King’s Speech wins the Most Oscars (70.1% on Inkling)
There were a few “upsets”:
- Alice in Wonderland won for Best Art Direction (18.04% on Inkling), even though The King’s Speech (favourite at 38.25%) and Inception (26.68%) were more likely to win.
- True Grit was favoured to win for Best Cinematography (65.19%), but Inception (11.53%) won.
- Alice in Wonderland won for Best Costume Design (31.27%), but The King’s Speech was favoured at 46.67%.
- Inside Job won for Best Documentary Feature (30.78%), but Exit Through The Gift Shop was favoured (51.34%).
- Biutiful (34.94%) was beaten out by In a Better World (24.98%) for Best Foreign Language Film.
- The Lost Thing (6.95%) pulled off a major upset against The Gruffalo (42.09%) and Day & Night (36.89%) to win Best Animated Short Film.
- God of Love (12.08%) won Best Live Action Short Film, beating out front runners Wish 143 (39.34%) and Na Wewe (27.13%).
There was another possible upset. The King’s Speech won the Oscar for Best Directing. Was it an upset? On the HSX, it was a bit of an upset. The Social Network was favoured at 54.44%, but The King’s Speech won with 33.48%. On Inkling, however, the two films each had an identical likelihood of winning, at 43.68%.
Getting Better All The Time?
In most prediction markets, we expect the forecast to become more accurate the closer we get to the outcome being revealed. In the Best Directing Oscar markets (HSX), we saw the exact opposite! Basically, it was a two-horse race between The Social Network and The King’s Speech, and The King’s Speech had been steadily becoming less likely to win over the last three weeks of trading. In normal markets, this type of trend would require a steady diet of negative information. Logically, we would instead expect sudden jumps in likelihoods when (if) significant information comes to light about which way Academy voters are likely to vote. While a gradual revelation of information is possible (say, one voter per day discloses his vote), it isn’t likely: the Academy likes to keep these things secret until the show.
At any rate, the market was right, but trending wrong. Maybe there was some information that came to light, resulting in more uncertainty about the outcome. Then again, maybe the predictors were really just guessers, and the markets are simply aggregating “garbage information”. Garbage in, garbage out.
While this may not have been an upset, it does bring up another important issue: two prediction markets were trying to predict the same thing, yet, unfortunately, they predicted significantly different likelihoods. There were many examples; here are but a few.
For Best Original Screenplay, The King’s Speech had a likelihood of winning of 71.52% on Inkling but only 53.99% on HSX. That’s a difference of almost 18 percentage points, which seems quite high to me. The same thing happened with Best Adapted Screenplay, which The Social Network won. This time HSX predicted it with a likelihood of 88.93%, while Inkling gave it a likelihood of only 74.16% (a gap of about 15 percentage points).
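A difference like "almost 18%" could be read as percentage points or as relative error, so it is worth being explicit: these gaps are absolute differences in implied probability. A quick illustrative sketch using the two screenplay figures quoted above (the market labels are generic placeholders, not an assertion about which exchange quoted which price):

```python
# Illustrative only: two markets' implied probabilities for the same
# contracts, using the screenplay figures quoted above.
market_a = {"Best Original Screenplay": 0.7152, "Best Adapted Screenplay": 0.8893}
market_b = {"Best Original Screenplay": 0.5399, "Best Adapted Screenplay": 0.7416}

# Disagreement as an absolute gap, expressed in percentage points.
gaps = {award: round(abs(market_a[award] - market_b[award]) * 100, 2)
        for award in market_a}
```

Measured this way, the two disagreements come out to roughly 17.5 and 14.8 percentage points, which is the sense in which "almost 18" and "about 15" are used above.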
Suffice it to say, the prediction market “industry” must find out why this happens and how it can be corrected. Otherwise, these types of markets should be abandoned for serious prediction purposes. What am I saying? These aren’t serious prediction markets! Okay, the industry needs to get to the bottom of this issue, so these types of markets can be used as fair betting markets.
There are several possible reasons for the different likelihoods, and none of them help the case for prediction market accuracy or usefulness (for these types of markets). I’ve discussed these issues in previous posts (too many to link to), so I won’t do so here. If you took the time to read The Wisdom of Crowds, surely you can spend a couple of hours reading this blog to learn the reasons.
Something Doesn’t Add Up
Inkling’s prediction markets treat each award as a single market, with each nominee a separate “share” within that market. Accordingly, the sum of the likelihoods across all shares always adds up to one (1.0, or 100%). On HSX, however, each nominee is a separate market. All of the markets (nominees) for a particular category are aggregated to show the results the same way Inkling does, but the sum of the likelihoods did not always add up to one. In fact, the totals were often significantly different from 100%.
For example (award, sum of nominee likelihoods):
- Best Picture, 93%
- Leading Actor, 109%
- Supporting Actor, 110%
- Leading Actress, 111%
- Best Directing, 106%
Even though this phenomenon is created by the structure of the markets, it still raises the question: why? Shouldn’t the markets have been arbitraged back to a total likelihood of around 100%? Not only did these discrepancies occur, they persisted! While I didn’t continuously monitor these markets, I did take snapshots at various times, and the sums of the nominee markets rarely added up to 100%. If I started getting into all of the reasons why this might have happened, this would turn into a book.
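One way to quantify the discrepancy is to treat each nominee’s price as an implied probability, measure how far the book’s total strays from 100% (bookmakers call the excess the overround), and rescale so the category sums to one, the way a single multi-share market would. A minimal sketch, using the two Best Directing likelihoods quoted earlier; the “others” bucket is a made-up stand-in for the remaining nominees, chosen so the example totals 106% like the Best Directing figure in the list above:

```python
def normalize(implied):
    """Rescale implied probabilities so the category sums to 1.0,
    as it would in a single multi-share market."""
    total = sum(implied.values())
    return {name: p / total for name, p in implied.items()}

# Separate per-nominee binary markets whose implied probabilities
# exceed 100% in total. The 'others' value is hypothetical.
best_directing = {
    "The Social Network": 0.5444,
    "The King's Speech": 0.3348,
    "others": 0.1808,
}

overround = sum(best_directing.values()) - 1.0  # ~0.06, i.e. 6 points over
coherent = normalize(best_directing)            # now sums to exactly 1.0
```

Note that rescaling preserves the ranking of the nominees, so the market’s “pick” doesn’t change; it only restores the coherence that a single category-wide market would have enforced automatically.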
No one told us the writers had gone on strike, again! A mere eight minutes in and we had barely cracked a smile. When we did, it wasn’t for anything either of the hosts said, it was for the wink that Anne Hathaway directed at Colin Firth (as the King) in the opening film vignette. Other than that, there was a lot of odd (not funny) banter between presenters and little to keep us occupied until the next Anne Hathaway appearance. Their writers were pathetic, but her makeup person seemed to be on his or her game. Note to the Academy: hire Randy Newman to write next year’s script. Either that or put Ricky Gervais on speed dial.
For the second year in a row, my picks (from the prediction markets) were better than my wife’s. All that’s left to be determined is my prize for this feat.