Adam Smith (1776) described the “invisible hand” that seemed able to efficiently allocate resources in free markets, without the intervention of a central “planner”. Against the backdrop of the USSR and its planned economy, the neoclassical model provided a substantial improvement in explaining the allocation of scarce resources among competing uses. And so, until very recently, most developed-country governments (and their constituents) embraced this model of economic theory as if it were, in fact, the way economies worked. Many policies were designed to “free up” markets by eliminating constraints on economic activity. The objective was always to allow Adam Smith’s “invisible hand” to work its magic for the benefit of all.
Most people who try to understand “economics” do so within the traditional, neoclassical framework that has dominated economic thinking for most of the last century. This is the theory taught in introductory and intermediate economics courses. Even those who have never studied economics have come to embrace this model of economic thought. They have been swayed by political and cultural institutions, the media, and the like to believe that the “machine” just needs to be oiled, gassed up, and the speed limits removed for economic prosperity to come to all.
Until very recently, if you needed any proof of this, all you had to do was review media reports and political commentary. You would get the impression that Adam Smith was the greatest, smartest economist of all time, and that all we needed to do was follow his principles more closely and the economy would flourish. Commentators advocated an almost religious belief in the “invisible hand”. Scary. Now, with all of the economic problems that have come to light, we are beginning to see a change. More people are coming to see that the “economy” doesn’t work like the model, and that no amount of tinkering will bring back the ideal model world. Why?
Essentially, the neoclassical framework is a model of the economy that is simple enough to be understood, yet sufficiently robust to describe and explain basic economic phenomena. The real world is highly complex and likely impossible to model without making generalizing assumptions about markets and their participants. Among others, the neoclassical model assumes perfectly competitive markets, firms that always attempt to maximize profits, homogeneous households that always attempt to maximize their utility, and perfect information. In addition, there are no externalities and all markets always clear. The introduction of a “shock” to a market results in an immediate jump to a new equilibrium.
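To make the “market clearing” and “shock” ideas concrete, here is a minimal sketch of my own, assuming hypothetical linear demand and supply curves (the numbers are invented for illustration): the clearing price is simply where quantity demanded equals quantity supplied, and a shock just moves the model instantly to a new equilibrium.

```python
# Hypothetical linear market: Qd = a - b*P (demand), Qs = c + d*P (supply).
# The market clears at the price where Qd equals Qs: P* = (a - c) / (b + d).
def clearing_price(a: float, b: float, c: float, d: float) -> float:
    return (a - c) / (b + d)

a, b, c, d = 100.0, 2.0, 10.0, 1.0
p0 = clearing_price(a, b, c, d)      # 30.0

# A demand "shock": the demand intercept falls from 100 to 85.
# In the neoclassical story, the market jumps immediately to a new equilibrium.
p1 = clearing_price(85.0, b, c, d)   # 25.0

print(p0, p1)  # prints 30.0 25.0
```

Note how frictionless this is: there is exactly one price, everyone trades at it, and adjustment is instantaneous. That is precisely the picture information economics calls into question.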
Introducing Information Economics
Born out of disillusionment with the ability of neoclassical models to explain real-world economic phenomena, a new paradigm emerged: the role of information imperfections in understanding economic conditions. Joseph E. Stiglitz, Nobel laureate in 2001, identified information imperfections as one of the main reasons why the neoclassical model failed to explain many real economic conditions. Where neoclassical models were characterized by a single market-clearing price at equilibrium (quantity supplied equals quantity demanded), Stiglitz showed that with imperfect information, not only would markets exhibit a distribution of prices, but an equilibrium might not even exist and markets might not clear. In extreme cases, information problems can leave markets thin or cause them to fail to function at all. I could go on about the achievements in information economics, but for now, these findings have particular relevance to our study of prediction markets.
Implications for Prediction Markets
Much of the theory behind prediction markets rests on the standard neoclassical model of markets. Buyers and sellers interact, resulting in a market-clearing price that incorporates all of the available information about the market. But information economics shows that even small information imperfections can have profound effects on market functioning. If the neoclassical model often fails to explain real-world economic conditions, how can we expect prediction markets, based on the same theory, to explain, or describe, the underlying reality of their markets?
In the real world, information imperfections lead to distributions of market prices rather than a single market-clearing price. If prediction markets function like other product or asset markets, the inevitable result will likewise be a distribution of prices.
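A toy simulation can illustrate the intuition. This is my own sketch, far simpler than any formal model from information economics, and every parameter value is invented: each trader observes the “true” outcome probability only through a noisy private signal and trades near their own estimate, so transactions occur across a spread of prices rather than at one clearing price.

```python
import random

random.seed(0)

TRUE_VALUE = 0.60   # hypothetical "true" probability the market is trying to price
NOISE = 0.10        # assumed standard deviation of each trader's private signal

# Each trader's transaction price is their noisy estimate, clipped to [0, 1]
# since prediction-market prices are bounded like probabilities.
prices = [min(1.0, max(0.0, random.gauss(TRUE_VALUE, NOISE))) for _ in range(10_000)]

mean = sum(prices) / len(prices)
spread = max(prices) - min(prices)
print(round(mean, 3), round(spread, 3))
```

In this sketch the average price hovers near the true value, but individual trades are scattered widely around it, which is exactly why “the market price” becomes ambiguous once information is imperfect.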
If prediction markets do not reach an equilibrium, and the market is characterized by a distribution of prices, can we still rely on the market “price” to convey all of the information available to the market? If so, which price should we use? How will we know when the market has incorporated enough information for the current price (or distribution) to be “accurate”?
I’m afraid I don’t have the answers to these questions. Perhaps a group of information economists will join the discussion.