I suppose I should be flattered when another author makes reference to, and adopts, a concept that I developed. But surely, half the fun comes from the formal citation showing where the brilliant idea was found! Alas, such was not the case when I read the recent Forrester Research Inc. report: How Prediction Markets Help Forecast Consumers’ Behaviors, by Roxana Strohmenger.
In discussing the principles that help ensure prediction markets provide accurate predictions, the author makes reference to “information completeness” in the following passage:
At the end of the day, a prediction market ***must have sufficient*** “***information completeness***” even if the individuals interacting in the market do not, ***to accurately predict outcomes with a reasonable degree of certainty***.
Here is the passage where I introduced the concept of “information completeness”:
Prediction markets ***must have sufficient information completeness to accurately predict outcomes with a reasonable degree of certainty***.
I added the bold italic parts to show the exact same words in each paper. I’m still flattered, just a bit miffed.
Galton’s Ox Revisited
One other interesting point in the paper concerned a reference to a recent test in the Netherlands that tried to replicate Galton’s ox experiment (James Surowiecki, The Wisdom of Crowds). Using 1,400 guessers (oops again, I mean participants), the average estimate of a cow’s weight was 552 kg, but the actual weight was 740 kg. The guessers were off by a full 25%! How could this happen?
The average guess of Francis Galton’s townspeople was remarkably accurate (1,197 lbs vs. 1,198 lbs). Clearly, the townspeople were a bit more knowledgeable about the likely weight range of a butchered ox than the Netherlands guessers were about the weight range of a cow. The author of the Forrester paper calls this “perspective”, which is a good word for it.
I called it having a minimal level of information about the subject in order to make a prediction. If you think about the problem logically, when the townsfolk made their estimates, there was a fairly narrow range of possible weights from which to choose. We would expect a normal distribution of guesses centred around the true weight, given reasonably small estimation errors (which cancel).
The cow guessers didn’t have a narrow range of possible weights (they actually guessed between 108 and 4,500 kg)! Their errors would have been much more significant, on average, and much less likely to cancel out when aggregated.
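The cancellation argument can be illustrated with a small simulation. The distribution parameters below are assumptions chosen for illustration (the real distribution of the Dutch guesses is unknown): one crowd makes independent, symmetric errors around the true weight, while the other shares a low “anchor” with a huge spread, loosely echoing the 108–4,500 kg range. Averaging cancels the first kind of error but not the second, no matter how many guessers you add.

```python
import random
import statistics

random.seed(7)
TRUE_KG = 740  # the cow's actual weight in the Dutch experiment

# Informed crowd: each error is individual noise around the true weight,
# so averaging 1,400 guesses cancels the errors almost completely.
informed = [random.gauss(TRUE_KG, 75) for _ in range(1400)]

# Uninformed crowd: everyone shares a low anchor (an assumed value for
# illustration) plus large individual noise. The shared bias survives
# averaging, no matter how large the crowd gets.
uninformed = [random.gauss(550, 400) for _ in range(1400)]

print(round(statistics.mean(informed)))    # within a few kg of 740
print(round(statistics.mean(uninformed)))  # near 550: bias does not cancel
```

The design point: aggregation only removes the *independent* component of the error. A crowd without a minimal level of information shares a systematic bias relative to the truth, and that component passes straight through the average.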
Interestingly, there must have been a few knowledgeable cow weight estimators among the 1,400. Would a prediction market have provided a more accurate number than the simple aggregation of estimates? That would have been an interesting follow-up experiment.
On a humorous note, this research paper is the first I’ve read on prediction markets that does NOT mention Robin Hanson. How can this be?