Posted by: Paul Hewitt | April 25, 2009

Judging Accuracy in Prediction Markets

I’ve had a chance to review Emile Servan-Schreiber’s paper, Prediction Markets: Trading Uncertainty for Collective Wisdom. The paper indicates that it will be included in a forthcoming book on Collective Wisdom. It summarizes some of the evidence in support of the accuracy of prediction markets. I wholeheartedly agree with the author’s contention that diversity is a key determinant of prediction market accuracy. I also agree that characterizing prediction markets as being more like “betting exchanges” is appropriate. However, I disagree that the established research has proven the case for accuracy, as I hope to explain below.

While the paper is a good summary of many of the key aspects of prediction markets, in arguing that prediction markets are accurate forecasting tools, the author cites the HP prediction market results (6 out of 8 performed better than the “official” forecasts) as one of the proofs. It has been a decade since these prediction markets were run, yet they remain one of the most frequently cited proofs of prediction market accuracy. However, the author undercuts this finding somewhat by noting that “beating official company forecasts isn’t always as hard as it sounds, because the goal of an official forecast is often more to motivate employees towards a goal than to predict outcomes.”

It appears that the author is saying that it shouldn’t be too difficult to beat an “official” forecast, because it is biased (in order to motivate). Depending on the definition of “official forecast”, I might agree with this assessment to some extent. However, the fact that HP’s prediction markets did not beat the official forecast in every instance speaks to the contrary. Furthermore, if it isn’t that difficult to beat an “official” forecast, why didn’t the HP markets do so by a significant margin? Prediction markets are supposed to reduce the bias inherent in other forecasting methods. If the official forecasts are biased, we should not be comparing them with prediction market forecasts at all. The true accuracy of prediction markets depends on their ability to accurately and consistently forecast actual outcomes. If alternative methods are not trying to predict the same thing, we shouldn’t be comparing them. Here are my comments…

What, exactly, was the “official” forecast that was used in comparison with the prediction market forecast?

Was it an internal sales budget? Such budgets (forecasts) are routinely used to set target quotas for sales teams. The bar is usually set a bit higher than it should be, to motivate the team to “try harder” to meet the objective and earn a bonus. The budget cannot be too high (optimistic), otherwise it will have a de-motivating effect. If we look at the eight HP prediction markets that had official forecasts, we find one that was almost bang-on, four that were significantly below the actual outcome and three that were above. Of the three that might be considered “motivationally-inflated” official forecasts, two appear to be reasonable, with errors of 13% and 4%, but the third was overstated by a whopping 59%! Three of the four understated official forecasts were significantly below the actual outcome (28%–32%). None of the understated official forecasts could be described as “motivational”. After all, you don’t lower the bar to motivate higher jumping. We might have expected all of the prediction market forecasts to be 5%–10% lower than the official forecasts (if they were sales budgets), but they were not. Bottom line: even if the official forecast was really a sales budget, it would never have been lower than the expected (most likely) sales outcome, nor should it have been too much higher.
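
To make the arithmetic behind these error figures explicit, here is a minimal sketch of the signed percent-error calculation. The numbers in the example are hypothetical placeholders, not HP’s actual data.

```python
# Signed percent error of a forecast against the actual outcome:
#   error = (forecast - actual) / actual
# Positive = overstated forecast, negative = understated forecast.
# The figures below are hypothetical placeholders, NOT HP's data.

def percent_error(forecast: float, actual: float) -> float:
    """Return the signed percent error of `forecast` relative to `actual`."""
    return (forecast - actual) / actual * 100

# Hypothetical example: an official forecast of 1,590 units against an
# actual outcome of 1,000 units is overstated by 59%.
print(f"{percent_error(1590, 1000):+.0f}%")  # prints +59%
```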

Was it an “official” forecast provided to market analysts? Obviously, not all product sales forecasts are provided to analysts (though some are), but certainly, sales projections by product line or division would be commonly disclosed. These figures would be derived by aggregating the sales projections of individual products or lines. The management of a public company is required to disclose all significant, relevant information. If management were to issue inflated “official” forecasts to the market, the analysts would clobber the share price when the true sales (outcome) became known. If management is consistently optimistic in its forecasts, analysts will discount those forecasts and take it out on the share price. Management is unlikely to be consistently pessimistic, as this would serve only to put downward pressure on the share value. Analysts are able to spot a company consistently “jumping” over a “low bar.” Bottom line: if the official forecast is the one that is publicly disclosed, it is likely to be close to management’s best estimate of the sales outcome.

Management needs to make a variety of decisions (production, distribution, marketing, sales, HR, finance, etc.) that depend upon the best estimate of future sales. To make such decisions using a biased forecast would be foolish and potentially very costly. The important (useful) forecast is the one that will help management make better decisions. This is the forecast that management needs to predict more accurately, not a “tool” such as a sales budget.

Given that HP used the term “official” to describe the forecasts that were being compared with the prediction market forecasts, it is likely that the official forecasts were true best estimates of the future outcomes. If they were, in fact, merely sales budgets, we would expect the prediction market forecasts to always be lower than the budget, and this was not the case. Consequently, if a prediction market is able to consistently beat the “official” forecast, it should be considered a better forecasting tool than the one used to generate the official forecast.

I have already written about my objections to the HP study, in which I noted that most of the prediction market forecasts appeared to be better predictions of the official forecasts than of the actual outcomes.

Since I’m discussing the “official” forecasts here, I would add that the HP prediction markets were run before the “official” forecasts were set, and some of the participants were also involved in setting the “official” forecasts. No wonder the two sets of forecasts were correlated. The slight improvement of the prediction market forecasts over the official ones may reflect the small amount of additional diversity in the prediction market group relative to the “official” forecasting group. It could also be explained by the internal “political” climate that influenced the official forecast but not the prediction market forecast. Either way, it is not a sound comparison for proving prediction market accuracy.
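
One simple way to check this correlation claim is to compare how strongly the prediction market forecasts track the official forecasts versus the actual outcomes. The sketch below illustrates the idea with hypothetical numbers; they are not the HP figures.

```python
# If the prediction market forecasts correlate more strongly with the
# official forecasts than with the actual outcomes, the two methods
# likely share information sources and biases, so beating the official
# forecast says little about true forecasting accuracy.
# All values below are hypothetical placeholders, NOT the HP data.
from statistics import correlation  # Pearson correlation, Python 3.10+

pm_forecasts = [105, 98, 120, 87, 140, 112, 95, 130]
official     = [108, 96, 123, 85, 142, 115, 93, 128]
actual       = [ 92, 110, 101, 99, 160, 104, 88, 145]

print("PM vs. official:", round(correlation(pm_forecasts, official), 2))
print("PM vs. actual:  ", round(correlation(pm_forecasts, actual), 2))
```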

We still have a long way to go in proving the case for enterprise prediction market accuracy.  I believe the academics have given sufficient theoretical support, but the real proof is in the field.
