Among other things, Robin Hanson is famous for advocating the use of prediction markets, on the grounds that their predictions are “more accurate” than those of other forecasting methods. I won’t argue with that, as long as the benefits of the added accuracy exceed the marginal costs. However, if you’ve been keeping up with my blog, you’ll have gathered that I’m not quite as high on the prediction market fumes as some of the other adherents. I find prediction markets wanting in several significant areas.
The Search for Something Better
A few years ago, this got me thinking: if prediction markets are better than alternative prediction methods, could there be an even better model still? So I scoured the literature in search of one. I thought I had found it a couple of years ago, and set out to make the case that it should replace prediction markets.
In assessing what counts as “better”, I considered two characteristics: how well the predictions were calibrated against actual outcomes, and how far in advance that calibration held up. I included the latter because prediction markets are notoriously poor at predicting anything but very short-term outcomes.
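For readers unfamiliar with the term, calibration can be checked by binning forecasts and comparing each bin’s average predicted probability to the observed frequency of the event. This is a minimal sketch of my own (the function name and binning scheme are illustrative, not from any prediction-market toolkit):

```python
def calibration_table(forecasts, outcomes, bins=10):
    """Bin (forecast, outcome) pairs by predicted probability and
    compare the mean forecast in each bin to the realized frequency.
    A well-calibrated forecaster has mean_forecast ~= frequency in
    every bin."""
    buckets = {}
    for p, y in zip(forecasts, outcomes):
        b = min(int(p * bins), bins - 1)  # clamp p == 1.0 into top bin
        buckets.setdefault(b, []).append((p, y))
    table = []
    for b in sorted(buckets):
        pairs = buckets[b]
        mean_forecast = sum(p for p, _ in pairs) / len(pairs)
        frequency = sum(y for _, y in pairs) / len(pairs)
        table.append((mean_forecast, frequency, len(pairs)))
    return table
```

For example, a forecaster that says 50% for every event, where half the events occur, produces a single bin with mean forecast 0.5 and frequency 0.5, i.e. perfect calibration despite carrying no information — a point that will matter shortly.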
I am pleased to report that my alternative prediction model appears to be better than prediction markets in most respects! My model was able to match the calibration of prediction markets in every case, but the real benefit was how far in advance my model was able to predict the outcome, with equal or better calibration than prediction markets! In all cases, my model was very well-calibrated with the outcomes a full two years prior to the outcome being revealed! To my knowledge, no prediction market has ever been well-calibrated two years prior to the outcome.
Not only that, but my model was able to achieve this level of accuracy for the most difficult-to-predict outcomes. Unfortunately, my model was not able to forecast so-called “easier-to-predict” outcomes with the same level of accuracy.
A Model Prediction Model
I’m sure I have kept you in suspense long enough. My model involves a hand, a wrist, and a coin. Who knew that a simple coin toss might be as good a predictor of future events as a prediction market, or better? Very difficult-to-predict binary events have a likelihood near 50%. If a prediction market for such an event indicates a 50.1% likelihood of occurrence, the decision-maker would predict that the event was going to occur, and he’d be right about 50% of the time. Same thing with the coin toss, but we can toss the coin two years before the event and get an equally well-calibrated prediction. For these really-hard-to-predict events, prediction markets typically fluctuate all over the map before settling on the safer 50% likelihood.
Earlier, I noted that the model does not work as well with easier-to-predict events, such as an event with a likelihood of 75%. Rest assured, I’m experimenting with a new version of the model which involves bending the coin with a hammer before the toss. I’ll let you know how that turns out.
One problem with the new model is that it only works on binary events. However, I’m working on an even better one that will work on a group of mutually exclusive and exhaustive events (winner-take-all). It involves darts and a dartboard.
Back to the Drawing Board
Obviously, this was intended to be a humorous post, poking a bit of fun at prediction markets and calibration. It is the lead-in to a series of upcoming posts, in which I hope to tie together the concepts of uncertainty, price distributions, calibration, accuracy, prediction market design, and market mechanisms. None of these issues has been adequately researched by the major players in the prediction market arena, and that is one of the major reasons why prediction markets continue to flounder. I hate to think that what holds the researchers back is a fear of uncovering evidence that does not support the use of prediction markets.