Posted by: Paul Hewitt | April 10, 2009

Testing Prediction Markets?

Chris Masse (Midas Oracle) commented on this article (Putting Predictive Market Research to the Test), calling it “truly bizarre research.” He’s right. It’s not a test of prediction markets at all.

I’m hard pressed to figure out where to start in critiquing this “research”, so let’s begin with the fact that there was no prediction market involved. Instead, the researchers asked the participants what they thought their peers would do and compared the result with what the participants said they would do themselves. Without a prediction market to aggregate the responses, we really have two polls running in parallel. Given the low cost of operating a real prediction market, why was one not used?
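
For contrast, here is a minimal sketch (in Python, with made-up numbers, nothing from the study) of how a real prediction market could aggregate beliefs, using a logarithmic market scoring rule (LMSR) market maker rather than simply averaging two sets of poll answers:

    import math

    b = 100.0              # liquidity parameter
    q = [0.0, 0.0]         # shares outstanding for the [yes, no] outcomes

    def cost(q):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return b * math.log(sum(math.exp(x / b) for x in q))

    def price(q, i):
        # Current market probability of outcome i
        return math.exp(q[i] / b) / sum(math.exp(x / b) for x in q)

    def buy(q, i, shares):
        # Buy `shares` of outcome i; returns the amount charged to the trader
        before = cost(q)
        q[i] += shares
        return cost(q) - before

    print("price of 'yes' before any trades:", round(price(q, 0), 3))  # 0.5
    buy(q, 0, 25)   # a trader who believes the outcome is likely
    buy(q, 1, 10)   # a trader who disagrees
    print("price of 'yes' after two trades: ", round(price(q, 0), 3))

The point of the sketch is simply that a market price moves as participants put something at stake, which is not the same thing as tabulating two questionnaires.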


Next, we have the fact that all of the participants are oncologists. It is safe to say that this is a fairly homogeneous “crowd”, likely to be deficient in diversity, which is a pre-condition for prediction markets to operate effectively. The problem with such a homogeneous group is that many (most?) participants will hold the same “pieces” of the puzzle, whereas a diverse group would contribute many more pieces (however small) to be aggregated into the outcome prediction.


There was no actual outcome in the study; the treatment was hypothetical. The study’s authors draw conclusions about participant behaviour that are largely irrelevant, since the responses seem to depend on the oncologists’ personal treatment approaches and opinions and on the order in which the questions were posed. All the more reason for using a larger, more diverse “crowd”. They argue that in some cases the “predictive market” result was “more optimistic” than that from the individual responses; in other cases, it wasn’t. One result may have been more optimistic than the other, but which was more right? With no actual outcome, we will never know from this study. The authors note that traditional, survey-type responses about what someone says they will do (as opposed to what they actually do) are usually heavily discounted (by as much as 50%). In short, such responses are unreliable. To compare the “predictive market” responses with these traditional responses, as they did in this study, is kind of ridiculous.


The study indicates that the “predictive market” results had “tighter” distributions and concludes that fewer participants could be used to generate predictions (thus saving money in the future). False. Just because the distribution is tighter does not mean you can use fewer participants. The more homogeneous the group, the tighter the distribution; a very small group may have a very tight distribution (or it may not). Furthermore, you really do need a “crowd” to run a prediction market. Optimally, we don’t want a “manufactured”, tight distribution; we want a good estimate of the true distribution.
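
To illustrate the distinction, here is a hypothetical simulation (Python, invented numbers, not data from the study): a homogeneous group can produce estimates that cluster tightly around the wrong value, while a more diverse group produces a wider but better-centred spread.

    import numpy as np

    rng = np.random.default_rng(0)
    true_rate = 0.40   # hypothetical "true" value being estimated

    # Homogeneous crowd: shared background, correlated errors -- estimates
    # cluster tightly, but around the wrong value.
    homogeneous = np.clip(rng.normal(loc=0.55, scale=0.03, size=30), 0, 1)

    # Diverse crowd: noisier individual estimates, but the errors are less
    # correlated, so the average lands closer to the true value.
    diverse = np.clip(rng.normal(loc=0.42, scale=0.12, size=30), 0, 1)

    for name, est in [("homogeneous", homogeneous), ("diverse", diverse)]:
        print(f"{name:11s}  mean={est.mean():.3f}  spread={est.std():.3f}  "
              f"error={abs(est.mean() - true_rate):.3f}")

The tight spread of the first group says nothing about how close its consensus is to the truth.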


Next time, they should run a real prediction market on a potential new treatment and compare the prediction with that obtained using a “traditional” forecasting method. Both predictions would be compared with the actual outcome (once known), to determine which provided the better predictive accuracy. That would be a true test of prediction markets.
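
A rough sketch of what such a comparison could look like, assuming a binary outcome and using the Brier score as the accuracy measure (all numbers below are hypothetical):

    # Brier score: mean squared error of probability forecasts (lower is better).
    def brier_score(forecasts, outcomes):
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # Hypothetical probability forecasts for a few treatment-related questions.
    market_forecasts = [0.72, 0.35, 0.60, 0.10]   # from a real prediction market
    survey_forecasts = [0.90, 0.55, 0.80, 0.30]   # from a traditional survey

    actual_outcomes = [1, 0, 1, 0]                # recorded once the outcomes are known

    print("prediction market Brier score:", brier_score(market_forecasts, actual_outcomes))
    print("traditional survey Brier score:", brier_score(survey_forecasts, actual_outcomes))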


Responses

  1. Good blog. See PM Summit – NYC.

    http://www.pmcluster.com/

    -j

  2. […] review of the literature and case studies (that have been published) indicates that prediction markets have improved the accuracy of forecasts, but the improvements […]

