In his book “The Wisdom of Crowds”, James Surowiecki gave an interesting and insightful account of the conditions for, and methodology of, predicting future outcomes. To summarize: markets are able to predict outcomes where there is sufficient diversity, independence and decentralization among the market participants. To the extent these conditions hold, the market will provide accurate predictions. It works because the law of large numbers ensures that uncorrelated errors “cancel out”, leaving behind “pure information”, as reflected in market prices. Not only does this make intuitive sense; it also has ample support from economic theory.
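The error-cancellation claim is easy to demonstrate. The sketch below (a toy simulation, with illustrative noise levels rather than real market data) gives each trader an unbiased but noisy estimate of a true value and compares the average error of a small pool against a large one:

```python
import random
import statistics

def crowd_estimate(true_value, n_traders, noise_sd, seed):
    """Average of independent, unbiased guesses of true_value.

    Each trader's guess is the truth plus uncorrelated Gaussian noise;
    averaging lets the errors cancel (law of large numbers).
    """
    rng = random.Random(seed)
    return statistics.mean(
        true_value + rng.gauss(0, noise_sd) for _ in range(n_traders)
    )

# Mean absolute error over repeated trials: the large, independent
# pool lands far closer to the truth than the small one.
small_err = statistics.mean(
    abs(crowd_estimate(100.0, 5, 20.0, seed=s) - 100.0) for s in range(200)
)
large_err = statistics.mean(
    abs(crowd_estimate(100.0, 2000, 20.0, seed=s) - 100.0) for s in range(200)
)
print(f"5 traders: {small_err:.2f}  2000 traders: {large_err:.2f}")
```

Note that the cancellation depends on the errors being uncorrelated; if the traders' noise shared a common bias, no amount of averaging would remove it.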
Economic support for the efficacy of prediction markets ultimately derives from Adam Smith’s “invisible hand”, Hayek’s “The Use of Knowledge in Society”, and Eugene Fama’s Efficient Market Hypothesis. Taken as a whole, they support the position that market prices fully reflect all available information about the product or asset under consideration. A prediction market applies this concept to a future event, condition or action, producing a “best estimate” of the uncertain outcome.
The Key to Market Efficiency
Kenneth J. Arrow and Gerard Debreu proved that free markets are able to allocate resources optimally, under certain circumstances. One of the key assumptions behind their general equilibrium theory is that all market participants possess complete information: every trader knows the price that each participant is willing to pay or receive for each good. Surowiecki described Vernon L. Smith’s classroom laboratory experiments, which were designed to test the economic efficiency of markets. While these “markets” were highly simplified, they showed that markets can allocate resources efficiently even when no participant individually has “complete” information. However, Surowiecki neglected to mention why the experiments continued to work when the information-completeness assumption was not met.
They worked because the market participants, collectively, possessed “complete” information, even though none did individually. The market mechanism induced each participant to reveal his or her information in the marketplace, ultimately revealing the “complete” set of information through the supply and demand functions and the market clearing price. While these markets were very simple, with a single product and a relatively small number of participants, they revealed the power of markets to assemble the information necessary to perform complex, efficient resource allocations. They operated as if all participants were privy to all of the information, because all of the information was available within the group.
Out of the Classroom… and Into the Real World
In the real world, with significantly more complex markets, products and human relationships, the ability of markets to perform similar feats of information revelation is heavily dependent on the collective information held by the market participants. To the extent that the participants’ information (taken as a whole) is incomplete, this will be reflected in uncertainty, or price dispersion, in the market. Where there are significant pieces of information unknown to any of the market participants, the markets are highly unlikely to provide accurate prices or tight dispersions. In short, the market will not be able to create information that is not already known to the participants. Which leads us to…
The Forgotten Principle Behind Prediction Markets
Prediction markets must have sufficient information completeness to accurately predict outcomes with a reasonable degree of certainty.
Each of Surowiecki’s prediction market conditions (diversity, independence and decentralization) relates to this overarching principle and serves to improve the pool of information held by the market participants, but it is not enough simply to have them present; they must operate to amass a reasonable level of information “completeness”, too. Of course, it is difficult to know in advance whether a pool of market participants holds enough information to be considered “complete”. Hence, we tend to rely on Surowiecki’s three conditions, forgetting that they are really a collective proxy for information completeness.
Very simple markets, with few variables influencing the future outcome, are likely to provide accurate predictions from small groups of traders, because most of the information necessary to make the prediction is known within the pool. The more complex the factors affecting an outcome become, the greater the information required to be known by the pool of traders. One can easily imagine exponential growth in the information requirement, as the outcome becomes subject to additional causal factors.
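The exponential growth is easy to make concrete with a toy count (the factor numbers here are purely illustrative): if each causal factor is a yes/no question, the joint states the trader pool must collectively cover double with every factor added.

```python
# Illustrative counts only: k independent binary causal factors
# produce 2**k joint states about which the pool of traders must
# collectively hold information.
for k in (2, 5, 10, 20):
    print(f"{k} causal factors -> {2 ** k:>9} joint states")
```

Even a modest twenty binary factors yields over a million joint states, which suggests why small trader pools suffice only for the simplest outcomes.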
Most (if not all) researchers and academics seem to have lost sight of this information-completeness principle. We have seen a significant volume of work directed at solving the problem of market liquidity through automated market maker mechanisms. These solutions became necessary because it was difficult to gather a sufficient number of participants and provide adequate incentives for them to keep trading and revealing their private information. Some markets were simply too thinly traded to be “efficient” without an automated market maker. Of course, it was also cheaper to operate a market with fewer participants.
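For concreteness, one widely used automated market maker design is Hanson’s logarithmic market scoring rule (LMSR); the sketch below shows its cost function and instantaneous prices, with an illustrative liquidity parameter b (the larger b is, the less each trade moves the price).

```python
import math

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b)).
    A trade moving outstanding shares from q to q' costs C(q') - C(q)."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Instantaneous prices p_i = exp(q_i / b) / sum_j exp(q_j / b);
    they are positive and sum to one, like probabilities."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

# Two-outcome market with no trades yet: prices start at 50/50.
print(lmsr_prices([0.0, 0.0]))
# Buying 50 shares of outcome 0 pushes its price above 0.5; the
# market maker quotes the trade's cost from the cost function.
cost = lmsr_cost([50.0, 0.0]) - lmsr_cost([0.0, 0.0])
print(lmsr_prices([50.0, 0.0]), round(cost, 2))
```

The mechanism guarantees a quote for every trade, which is exactly why it can mask thin participation: the prices are always well formed, whether or not enough private information stands behind them.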
While this solved the liquidity problem of some prediction markets, it created other problems. With fewer traders, there was often less diversity, independence and decentralization. All of these factors, combined with the smaller number of traders, meant that information completeness was bound to suffer in all but the simplest of markets. Also, with too few traders, the process of cancelling out the traders’ uncorrelated errors breaks down. While such markets may appear to operate efficiently, this may not be so, and, worse, we will not know whether there was sufficient information completeness within the market. Consequently, it may not be appropriate to rely on the predictions of such markets. Their predictions will be unreliable, inconsistent and subject to too much uncertainty.
My point here is that you can’t “fake” an efficient market and hope to achieve the level of accuracy and certainty that a truly efficient market might provide. An automated market maker may be acceptable for bridging periods when there are too few active traders, but not as a substitute for a sufficient number of diverse, independent, decentralized traders. There is no way to replace or create information that is not brought to the market by the traders themselves.
Where to Now?
If a reasonable degree of information completeness is a necessary precondition for prediction market accuracy, how will we know whether it has been satisfied? As stated above, we don’t really know whether the condition is satisfied for most prediction markets. We have to rely on optimizing the number of traders, while maximizing their diversity, independence and decentralization, under a cost constraint. To a large extent, this requires trial and error in the field. Specifications that work for one market may not work as well for others. It will be necessary to increase the number of real markets and learn what works, what doesn’t, and why.
Over time, it may be possible to identify trader pools that are particularly strong at predicting certain types of outcomes, because of their combined knowledge, diversity and so on. We may also be able to identify certain types of outcomes that can be predicted with a reasonable degree of certainty (and others that are not so predictable – e.g. earthquakes).
With a greater number of real world prediction markets, we will learn more about the factors that enhance their calibration to actual outcomes. Right now, there are too few examples to say anything about individual market calibration levels. More trials will provide valuable insight into the factors that generate consistency in specific market predictions. So far, the published trials have not shown any reasonable level of consistency.
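Calibration itself is straightforward to measure once enough markets have resolved. The sketch below, using entirely hypothetical closing prices and outcomes, bins forecasts and compares each bin’s average price with the observed outcome frequency, and summarizes overall accuracy with a Brier score:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes.
    Lower is better; always answering 0.5 scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

def calibration_table(forecasts, outcomes, n_bins=4):
    """For each probability bin, compare the average forecast with the
    observed outcome frequency; a well-calibrated market matches the two."""
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(forecasts, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, o))
    return [
        (sum(p for p, _ in b) / len(b), sum(o for _, o in b) / len(b), len(b))
        for b in bins if b
    ]

# Hypothetical closing prices and resolved outcomes for eight markets.
prices = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
results = [1, 1, 1, 0, 1, 0, 0, 0]
print(f"Brier score: {brier_score(prices, results):.3f}")
for avg_p, freq, n in calibration_table(prices, results):
    print(f"avg forecast {avg_p:.2f}  observed {freq:.2f}  n={n}")
```

With real trial data in place of the made-up numbers, the same comparison would show whether a class of markets is consistently over- or under-confident.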
Is there an effective method of pre-screening traders that will help ensure that the total pool of information is maximized for a particular market (or class of markets)? This might be quite costly for an individual market, but if the costs can be spread over a class of markets run multiple times, it may be worth the effort.
If a greater degree of information completeness helps reduce uncertainty, and it should, the resulting distribution of predictions will tell us whether the outcome is predictable with a reasonable degree of uncertainty. If a particular class of markets is unable to reduce uncertainty to an acceptable level, we can stop using it for predictive purposes. We may still be able to use it as a measure of uncertainty for risk management, however.