Posted by: Paul Hewitt | June 11, 2009

Why Public Prediction Markets Fail

Public prediction markets have the potential to be more accurate than enterprise markets: they can attract a larger “crowd” of traders, aggregate a more complete information set, and operate more efficiently.  However, they often fail to predict actual outcomes, and their use in decision-making is dubious at best.  They have some value as betting venues, if well-calibrated, and some nominal entertainment value (if you enjoy such trivial pursuits).  In contrast, the focus of enterprise prediction markets is on their value for decision-making.

While my focus is on enterprise prediction markets, when public prediction markets fail to work properly, we need to understand why.  My attention has been drawn to a couple of recent cases (among many) where public prediction markets failed, miserably, to predict the actual outcomes.  They concerned the winners of American Idol (Betfair) and Britain’s Got Talent (Hubdub, Intrade).  Admittedly, these are very frivolous markets, but if prediction markets do work, shouldn’t they have been better at predicting these outcomes?  If public markets fail, why would we expect enterprise ones to work?  Both rest on the same theory.

Background – The Failed Markets

Betfair predicted Adam Lambert as the winner of season 8 of American Idol.  On May 19, he garnered 76% of the bettors’ money; Kris Allen, the eventual winner, had 24%.  A few days later, Chris F. Masse blogged about the failure of prediction markets to select the eventual winner of Britain’s Got Talent, where the overwhelming favourite, Susan Boyle, failed to win.  On Hubdub, she closed with 78% of the trade money (none of the other nine competitors had more than 6%), while Intrade sent her off at about 49%.  Both markets failed to select the correct outcome.

Prediction Market Theory

Before trying to answer the question as to why these markets failed, we need to review the theory that supports markets having the ability to predict outcomes.  I’ll make this very brief, as I have covered this in my other posts.  I have also put together a companion post “A Lesson in Prediction Markets from the Game of Craps”.

The Efficient Markets Hypothesis holds that market prices accurately reflect all available information.  Since prediction market shares (or “states”) have binary payoffs ($1 if right, $0 if wrong), the market price should represent the likelihood of that state coming true when the outcome is revealed.  If the market is not efficient, prices will not accurately reflect the information available to the market.  Therefore, market efficiency is an essential condition for prediction markets to “do their thing.”
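The price-as-probability reading can be sketched in a few lines of Python.  The prices are the Betfair figures quoted below; the zero-expected-profit check assumes the market is efficient and ignores fees:

```python
# Sketch: in an efficient winner-take-all market with $1 binary payoffs,
# a share's price is read directly as the probability of that outcome.
# Prices are the Betfair figures discussed in this post.

prices = {"Adam Lambert": 0.76, "Kris Allen": 0.24}

# Implied probabilities are the prices normalized to sum to 1.
total = sum(prices.values())
implied = {name: p / total for name, p in prices.items()}

# If the price equals the true probability, buying one share at the
# market price has zero expected profit (ignoring fees):
# E[profit] = prob * $1 - price
for name, p in implied.items():
    ev = p * 1.0 - prices[name]
    print(f"{name}: implied prob {p:.2f}, expected profit ${ev:.2f}")
```

Here the two prices already sum to $1, so the implied probabilities equal the quoted prices; in real markets an overround forces the normalization step.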

Let’s proceed under the assumption that prediction markets are efficient.  In a typical winner-take-all market, there are several shares (states) that may be traded.  Each share has a binary payoff.  Therefore, each share price represents the likelihood of that state capturing the true outcome.  Putting all of the states together provides the entire probability distribution of the market predictions, and with perfect, accurate, complete information, this distribution would be an exact match with that of the actual outcomes.  That is, it would be perfectly calibrated.  The dispersion of the predictions reflects the underlying uncertainty of the outcome.  This uncertainty is caused by future random events that might affect the outcome.
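What “perfectly calibrated” means can be checked with a toy simulation.  Everything here is synthetic: it assumes the quoted price is the true probability, then verifies that outcomes priced at p occur about p of the time:

```python
import random

random.seed(0)

# Sketch of calibration: simulate many markets where the quoted price
# equals the true outcome probability, then check that outcomes priced
# at p actually occur about p of the time. Data are synthetic.

def simulate(price, n=100_000):
    """Fraction of n simulated markets in which the priced outcome occurs."""
    wins = sum(random.random() < price for _ in range(n))
    return wins / n

for price in (0.25, 0.50, 0.76):
    print(f"priced at {price:.2f} -> observed frequency {simulate(price):.3f}")
```

A well-calibrated market passes this check by construction; the failures discussed below are about what calibration does, and does not, buy a decision-maker.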

What if the information available to the market participants is incomplete?  By definition, there will be some piece of information that the market participants are unable to consider in making their investment/betting decisions.  Since the market prices are only able to incorporate known information, the market prices will be inaccurate.  Consequently, the market distribution will not match that of the actual outcomes.  As a result, the market must have sufficient information completeness.

If the essential conditions are satisfied, the market distribution will be well-calibrated.  This is the best case scenario for any prediction market.  But can it predict?

Professor Panos Ipeirotis provided an excellent explanation as to why prediction markets must fail to predict actual outcomes some of the time.  He points out that “such failed predictions are absolutely necessary if we want to take the concept of prediction markets seriously. If the frontrunner in a prediction market was always the winner, then the markets would have been a seriously flawed mechanism.” He is entirely correct. A prediction market provides a distribution of predictions that is a proxy for the distribution of actual outcomes.

What he means is that, if the frontrunner in a prediction market always wins, rational traders would always buy the frontrunner shares prior to the market closing, bidding up the price to $1, or just below that.  Any price significantly below $1 would indicate an inefficient market.  Let’s look at the first failed prediction.

American Idol (Betfair)

If Betfair’s market was efficient, Adam Lambert’s winning could not have been a sure thing.  There must have been uncertainty, and so, it is more than reasonable to say there was a 24% chance that he would lose.  If Betfair’s prediction market participants held accurate, complete information (collectively) and the market was efficient, we could say there was an unknowable uncertainty that prevented the market from pushing the Lambert price to $1.

From a decision-making viewpoint, we would be compelled to predict Adam Lambert as the winner (76% likely).  When we rely on a prediction of a discrete outcome, we need it to be correct almost all of the time, which means that the probability of the prediction must approach 100%.  By selecting Lambert, we will be either 100% right or 100% wrong when the contest is over.  There is no “almost right” with discrete outcomes.

We’re presented with a problem.  In order for prediction markets to generate accurate predictions, they must be efficient.  Such markets provide a market price that represents the probability of winning (in this case 76%).  If we needed to make a decision based on the outcome (yeah, right), we would like that probability to be closer to 100%.  However, if that were to occur, we’d hardly need a prediction market to point out the “sure bet”.  We could still make the decision, knowing that about one in every four seasons the “Adam Lambert” prediction will fail to win.  The problem is, we don’t know which season (or trial) this will happen.  This is the problem with discrete outcomes.
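To see why a 76% frontrunner is a shaky basis for discrete predictions, here is a rough sketch.  The eight independent seasons, each with a 0.76 frontrunner, are purely illustrative assumptions:

```python
from math import comb

# Sketch: if the frontrunner truly wins with probability 0.76, how often
# does "always pick the frontrunner" get a season completely wrong?
# Eight independent seasons is an illustrative assumption.
p = 0.76
seasons = 8

# Probability that every call is right, and that at least one misses.
p_all_right = p ** seasons
print(f"P(all {seasons} calls right) = {p_all_right:.3f}")
print(f"P(at least one total miss) = {1 - p_all_right:.3f}")

# Binomial distribution of the number of misses across the seasons.
for k in range(seasons + 1):
    pk = comb(seasons, k) * (1 - p) ** k * p ** (seasons - k)
    print(f"{k} misses: {pk:.3f}")
```

Under these assumptions there is roughly an 89% chance of being completely wrong at least once, and no way of knowing in advance which season it will be.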

In some markets, future random events would introduce uncertainty as to the outcome.  As we all know, random events are unpredictable.  However, in this particular case, the prediction market closed shortly before the outcome was revealed.  The potential for random events to significantly affect the outcome would have been minimized.  Accordingly, I am left to conclude that the market participants did not possess sufficiently complete information about the outcome to make an accurate prediction or the market was not efficient.  In the latter case, the market would not have been good for a betting market, let alone a prediction market.

Public vs. Enterprise Prediction Market Design

Public prediction markets are often designed as winner-take-all markets, with the shares (or states) corresponding to discrete outcomes.  Think of horses in a race or contestants on Britain’s Got Talent.  Many enterprise prediction markets also use the winner-take-all format, but with the shares corresponding to ranges of a continuous variable (outcome), such as quarterly sales.  This difference in market design is one of the main reasons why public prediction markets fail to predict outcomes accurately.

Prediction markets may provide accurate distributions of possible outcomes.  Even so, the most likely prediction may not be useful for predicting the next actual outcome.  Where the outcome being predicted is a continuous variable (e.g. quarterly sales), a market that misses but comes close may still be useful, whereas a market over discrete outcomes is only useful if it is virtually 100% accurate.

Let’s turn to the public markets for Britain’s Got Talent…

Contrast of Public vs. Enterprise Prediction Markets

The prediction market for Britain’s Got Talent had 10 “horses”, the frontrunner being Susan Boyle, with 78% of the trades on Hubdub (49% on Intrade).  On Hubdub, none of the other contestants had more than 6% of the trades and two had 0%.  It was a reasonably tight distribution, yet again, the market failed to predict the actual winner.  Here’s a graph of the distribution of trades on Hubdub.

[Figure: Hubdub Prediction Market, distribution of trades]

Chris F. Masse (Midas Oracle) gleefully points out these and similar prediction market failures, as he questions the accuracy and usefulness of prediction markets.  He is correct in his reasoning that, if someone wants to rely on a prediction market to forecast an outcome, he needs a high level of confidence that the prediction will come true.  With discrete outcomes, even the slightest miss is 100% wrong.  When the decision-maker selects the most likely option and it fails to occur, it is of little consolation to be told that the distribution of predictions was accurate.

Contrast this with a similar, winner-take-all market to forecast quarterly sales.  Let’s take the identical distribution of trades from Britain’s Got Talent and match them to a series of states corresponding to quarterly sales ranges.  Then, sort them to create a somewhat normal distribution, as shown.  In this example, quarterly sales is a continuous variable.  Accordingly, the prediction market provides us with a best forecast of $15.220M.  Without having to do the calculations, we can see that 90% of the time, actual quarterly sales will fall between $14.5M and $15.999M.  Given the mean prediction of $15.220M, the maximum error (90% of the time) will be 5.1% or $0.779M.  This is the kind of prediction market accuracy that “can be taken to the bank.”  Two markets with identical distributions: one predicts very accurately (continuous), the other is a bust (discrete).
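The mechanics of reading a point forecast and an interval off a range-based market can be sketched as follows.  The bucket midpoints and weights below are hypothetical, chosen only to mimic the shape of the distribution described above, so the resulting numbers differ slightly from the post’s:

```python
# Sketch: a winner-take-all market over quarterly-sales ranges yields a
# full probability distribution, from which a point forecast and an
# interval fall out directly. Buckets and weights are hypothetical.

buckets = [  # (range midpoint in $M, market-implied probability)
    (14.25, 0.02), (14.75, 0.06), (15.25, 0.78),
    (15.75, 0.10), (16.25, 0.04),
]

# Point forecast: probability-weighted mean of the bucket midpoints.
mean = sum(mid * p for mid, p in buckets)
print(f"point forecast: ${mean:.3f}M")

# Interval: add buckets in order of probability until ~90% is covered.
cum = 0.0
covered = []
for mid, p in sorted(buckets, key=lambda b: -b[1]):
    cum += p
    covered.append(mid)
    if cum >= 0.90:
        break
print(f"~{cum:.0%} of the mass lies in ${min(covered)}M to ${max(covered)}M")
```

The same arithmetic applied to a discrete-outcome market yields nothing comparable: there is no “close miss” to average over.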

[Figure: Quarterly Sales Prediction Market, distribution of trades]

Based on this, we can conclude that for most prediction markets involving discrete outcomes, the predictions will be questionable.

It appears that the uncertainty of the outcome depends on future events that may occur between the prediction time and the actual outcome.  When the outcome is revealed, there is no more uncertainty.  It makes sense, then, to say that uncertainty will be correlated with the time remaining until the outcome is revealed.  Therefore, as long as the future random events are reasonably unlikely to occur, or their effects will not be too significant, the prediction market may still provide a useful distribution of outcome predictions.

We see this in a wide variety of prediction markets: as the market nears the actual outcome, there are fewer random (unknown) events that could significantly affect it, so uncertainty shrinks as the outcome revelation approaches.

In the Britain’s Got Talent markets, we can see another problem with prediction markets: consistency.  Hubdub had Susan Boyle at 78% and Intrade had her at only 49%.  How could the frontrunner’s likelihood differ by 29 percentage points?  Which one is more correct (less wrong)?  Is either of them “accurate”?  These are questions for another paper.

Implications

Prof. Ipeirotis is correct to require prediction markets to be efficient (he’s not convinced they are).  I may be correct to require information accuracy and completeness (at least a sufficient amount) to be contained among the participants.  These are the essential pre-conditions for possibly using prediction markets to accurately predict future outcomes.  Finally, Chris F. Masse is correct to require the prediction markets to have a high degree of confidence in predicting the actual outcome.  This pretty much precludes discrete outcomes from the public prediction market arena (except for “entertainment” or gambling purposes).

Prediction markets should only be used where they are efficient, participants (collectively) hold reasonably complete, accurate information, and the degree of unknown randomness is within an acceptable level at the time the decision is made.  The market must accomplish this feat sufficiently far in advance that the decision-maker can formulate an appropriate response to the predicted outcome.  Finally, the prediction market forecast must be more accurate than the alternatives (subject to cost-benefit analysis).

Responses

  1. […] Why Public Prediction Markets Fail – by Paul […]

  2. […] See a strange post:  Why Public Prediction Markets Fail « Toronto Prediction Market Blog […]

  3. I’m confused… What law requires that one act based only on the market’s modal prediction, rather than feeding the market’s whole predicted distribution to an expected utility calculation? A discrete market that’s wrong 50% of the time should still be a win, if your unaided prediction is even worse. In other words, what real life uses of prediction (market-based or otherwise) aren’t equivalent to gambling?

    The closest explanation to yours that I can come up with is that it’s the discreteness of the space of available actions (not predictions) that matters. There’s significant overlap between problems that involve estimating a continuous variable and those that involve making continuous decisions; but in some cases the optimal strategy varies continuously with the probabilities assigned to discrete outcomes.

  4. I’m a bit confused as to why you would be confused! There is no law that requires using the mode or mean of a distribution of predictions, but it is the best choice when you are trying to forecast a future event. In your comment, the “expected utility calculation” is really the decision model of the decision-maker. The prediction market provides input to that model. The decision-maker would always want to use the most likely forecast, regardless of whether it maximized future utility.

    As to whether prediction (or other markets) are equivalent to gambling, I would agree with respect to these markets (and any similar ones). They only become good gambling options when they are “well-calibrated”. However, this does not make them good for decision-making, as we saw.

    If a discrete market is wrong 50% of the time, but it’s still better than another method, you should reject both methods. We should always try to keep in mind that the objective is to improve decision-making by finding ways to get better forecasts (if possible).

    Also, keep in mind I was looking at public markets. In EPMs, we would prefer to forecast a continuous variable (quarterly sales) rather than a set of discrete outcomes.

    Continuous forecasting is an entirely separate issue, which I will address in the future.

    Thank you for your comments.

  5. The reason for the failure of these two markets was simple: market participants did not have complete information. Specifically, no participants saw the actual voting numbers throughout the contest. Therefore, the market should not have been expected to succeed. You proved your point on why you need information completeness to have a successful prediction market.

  6. I was wondering if anyone has yet worked on global prediction market models?

  7. […] thing that Susan Boyle lost Britain’s Got Talent, even though she had a 78% chance of winning.  Once we’re done explaining that, we can take a stab at explaining why there was such a wide variance between Hubdub’s (78%) and […]

  8. This is not how you evaluate the accuracy of prediction markets. It is only reasonable to compare the accuracy of a market against equivalent prediction methods like polls. I’d use the following steps:
    Step 1: Select various points in time before the event for which the accuracies should be compared for the polls and markets. For example, 1 week, 1 month, 6 months etc. before finals.
    Step 2: For each such time point, calculate the absolute error in prediction for both the polls and the markets
    Step 3: Calculate Mean Absolute Errors for each time point and compare markets vs polls at each time point.

    This follows directly from what Professor Panos Ipeirotis says. Absolute certainty approaching 100% is a pipe dream. The only thing you can say is if you consistently used prediction markets over polls, then in the long run you’ll come out ahead.
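The steps above can be sketched as follows.  All of the numbers are invented purely to illustrate the procedure, not taken from any real poll or market:

```python
# Sketch of the market-vs-poll comparison described above: compute the
# mean absolute error (MAE) at several lead times. All numbers are
# invented purely to illustrate the procedure.

# predictions[method][lead_time] = list of (predicted prob, outcome 0/1)
predictions = {
    "market": {"1w": [(0.76, 1), (0.49, 0)], "1m": [(0.60, 1), (0.55, 0)]},
    "poll":   {"1w": [(0.70, 1), (0.60, 0)], "1m": [(0.50, 1), (0.65, 0)]},
}

def mae(pairs):
    """Mean absolute error between predicted probabilities and outcomes."""
    return sum(abs(p - y) for p, y in pairs) / len(pairs)

for lead in ("1w", "1m"):
    for method in ("market", "poll"):
        print(f"{method} @ {lead}: MAE = {mae(predictions[method][lead]):.3f}")
```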

  9. Petro, I think you may have missed the point of the post. I agree with your comments as they apply to continuous variable outcomes, but not for discrete outcomes.

    When you’re wrong about a discrete outcome, you’re completely wrong. The best you can do in comparing accuracy with other methods of prediction is to compare the calibration of each method. The method with the better calibration will be the “more accurate” prediction method. Still, it may not be good enough for decision-making purposes.

    100% certainty is not only a pipe dream, but also a useless ideal. If an outcome is that certain, we don’t need a prediction market, or any other method, to predict the future.

    As for prediction markets’ superiority over polls, it hasn’t been proven yet. Even if you can prove it in one case, it does not follow that it is proven for all cases.

  10. […] Why Public Prediction Markets Fail, Oscars Prediction Markets Get it Right, and The Oscars 2011 – The Good, The Bad & The Ugly all speak to major problems with relying on prediction markets.  All of the prediction markets noted in these posts displayed significant errors in predicting events, which is especially troublesome when dealing with discrete outcomes.  They also showed that these markets lacked the essential ingredients for success, that can be found in The Essential Prerequisite for Adopting Prediction Markets and The Forgotten Principle Behind Prediction Markets. […]

