Posted by: Paul Hewitt | March 14, 2009

Measuring Market Entropy in Prediction Markets

I came across this blog entry on Inkling Markets’ new support site:

Measuring Market Entropy by James Hilden-Minton, Ph.D.

He proposes using an entropy metric to measure market uncertainty in prediction markets.  Coming up with a measure of uncertainty in these markets is an intriguing idea, but I’m not sure this particular metric will do the trick.

Here is my response.

I, too, would like to see a metric to track the uncertainty surrounding market predictions.  The concept of entropy has some appeal, but I’m not sure how it might be applied in a prediction marketplace.


Theoretically, market entropy will start out high and approach zero as information is incorporated into the market, but very few markets actually reach a 100% certain outcome before trading is suspended.  So, there will nearly always be a positive entropy metric for a market.  Even where the market is all but certain of an outcome, the entropy metric will remain fairly high relative to the full range between its minimum and maximum values.  How do we determine how much entropy is too much?
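
To make this concrete, here is a minimal Python sketch (the 95/5 split is purely a hypothetical illustration, not a figure from the original post) showing how much entropy remains in a binary market that is nearly decided:

```python
import math

# Hypothetical near-certain binary market: 95% vs. 5%
probs = [0.95, 0.05]
entropy = -sum(p * math.log2(p) for p in probs)
print(round(entropy, 3))  # ~0.286 -- still well above zero
```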


If you want to compare entropy across a variety of markets, you would need to standardize the metric.  I imagine this might require using logarithms with the base equal to the number of possible outcomes (binary market = base 2, American Idol = base 36?!). This will ensure that the maximum entropy possible in every market is 1.000.  Then, we might compare entropies between markets.  But this, too, has problems. 
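
As a rough sketch of that standardization (my own illustration in Python, not code from the original proposal), the entropy can be computed with the log base set to the number of outcomes:

```python
import math

def normalized_entropy(probs):
    """Shannon entropy with the log base equal to the number of outcomes,
    so the maximum possible entropy of any market is 1.0."""
    n = len(probs)
    if n < 2:
        return 0.0
    return -sum(p * math.log(p, n) for p in probs if p > 0)
```

With this, a binary market at even odds and a ten-outcome market at even odds both score 1.000, so their entropies can at least be placed on the same scale.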


Consider two markets: one binary, the other with three outcomes.  If the binary market uses log base 2 and the odds are even, entropy is 1.000.  If the three-outcome market uses log base 3, with even odds, the entropy is also 1.000, so we can compare the relative entropies of the two markets.  Now suppose that, after some trading, traders sell one of the three outcomes down to zero, leaving only two live outcomes; the market is now, essentially, a binary one.  If the remaining odds are 50% for each outcome in both markets, the measures of uncertainty should be the same, but they aren’t: the three-outcome market now has an entropy of 0.631 vs. 1.000 in the binary market.  Part of the problem is that the odds are not equal across all of the possible outcomes (and they rarely will be).  You can carry the analysis further by giving the two live outcomes the same odds in each market and comparing the resulting entropy metrics.  For example, at 10% / 90% the entropies are 0.469 (binary market) and 0.296 (three-outcome market).  The level of uncertainty is virtually identical, but the entropies are far apart.
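
The figures above can be reproduced with the normalized_entropy sketch from the previous block (values rounded to three decimals):

```python
for probs in ([0.5, 0.5],          # binary market, even odds            -> 1.000
              [1/3, 1/3, 1/3],     # three-outcome market, even odds     -> 1.000
              [0.5, 0.5, 0.0],     # one of three outcomes sold to zero  -> 0.631
              [0.1, 0.9],          # binary market at 10% / 90%          -> 0.469
              [0.1, 0.9, 0.0]):    # three-outcome market at 10% / 90%   -> 0.296
    print(round(normalized_entropy(probs), 3))
```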


Perhaps the answer is to track entropy relative to its value when the market opens, seeded with management’s best estimate of the initial probability distribution.  As the market moves the distribution, the entropy would (hopefully) decrease.  This could provide a measure of decreasing uncertainty, and it may show that the market is reducing uncertainty relative to management’s best estimates.
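
A minimal sketch of that idea, assuming purely hypothetical numbers for management’s opening estimates and a later market snapshot (and reusing the normalized_entropy function above):

```python
# Hypothetical opening estimates from management and a later market snapshot
initial = [0.40, 0.35, 0.25]
current = [0.70, 0.20, 0.10]

h0 = normalized_entropy(initial)
h1 = normalized_entropy(current)
print(f"opening entropy {h0:.3f}, current entropy {h1:.3f}, reduction {h0 - h1:.3f}")
```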


There is also a problem with using entropy to determine when the market has incorporated all available information (an equilibrium?).  For example, in a binary market, entropy is at its maximum when the odds are 50:50, and a slight change in the odds produces only a very slight reduction in entropy.  The problem is that this is almost the most uncertain condition, yet the entropy is essentially “flat”.  Entropy changes fastest as one of the outcomes becomes heavily favored by the market (when there is already less uncertainty surrounding the outcome).  Think of the US election, where there was only a small difference between the Democratic and Republican vote shares, yet it was a landslide.  The entropy would have been quite high, yet there was a significant “certainty” to the prediction.
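
A quick illustration of that flatness, again using the normalized_entropy sketch: a five-point swing near even odds barely moves the metric, while the same swing near a heavy favourite moves it roughly twenty times as much.

```python
# Binary-market entropy at various odds
for p in (0.50, 0.55, 0.85, 0.90):
    print(f"{p:.2f} / {1 - p:.2f}: {normalized_entropy([p, 1 - p]):.3f}")

# 0.50 / 0.50: 1.000
# 0.55 / 0.45: 0.993   (a 5-point swing reduces entropy by only 0.007)
# 0.85 / 0.15: 0.610
# 0.90 / 0.10: 0.469   (the same 5-point swing reduces it by 0.141)
```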


It is, however, an intriguing concept that should be explored further.  My preference would be for a statistic that measures the dispersion of the probability distribution of outcomes, which could be tracked and compared between markets.
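
Purely as one illustration of what such a statistic might look like (my own sketch, not anything proposed in the original post), a simple concentration index such as the sum of squared probabilities runs from 1/n at even odds to 1.0 at certainty, and it happens to give the same reading to the two 10% / 90% markets from the example above:

```python
def concentration(probs):
    """Herfindahl-style concentration: 1/n at even odds, 1.0 at certainty."""
    return sum(p * p for p in probs)

print(concentration([0.5, 0.5]))       # 0.50
print(concentration([0.1, 0.9]))       # 0.82
print(concentration([0.1, 0.9, 0.0]))  # 0.82 -- matches the binary market
```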


Just a few thoughts from a non-mathematician.

