By Steve Kelman


'Wisdom of crowds' vs. group discussion

Shutterstock image (by Makkuro GL): crowdsourcing innovation.


Many blog readers have doubtless heard of the "wisdom of crowds," an idea popularized by a 2005 book of that title by New Yorker writer James Surowiecki. The book argues something stronger than "two heads are better than one," or that groups often make better judgments than individuals, including individual experts. It makes the more dramatic claim that simply averaging the individual judgments of many people about a question will produce better results than having those people discuss their initial views and reach a common judgment. The idea of the wisdom of crowds lies behind the popularity of so-called "prediction markets," where many dispersed individuals bet on outcomes ranging from an election result to the probability that Russia will take over Ukraine to the price of oil in five years, with the relative "price" of each prediction reflecting a collective wisdom that, it is argued, may beat individual expert predictions.

Some of the discussion of the wisdom of crowds has pitted the averaged judgments of large groups against those of individual experts. But the wisdom of crowds also confronts another common view, which is that group discussion will typically produce better decisions than simply averaging dispersed judgments of lots of individuals without discussion, as a prediction market does.

Recently at the Kennedy School, a young faculty colleague of mine, Julia Minson, a social psychologist now only in her second year at Harvard, presented her research on the latter question -- do you get better judgments about the correct answer to questions where there is a lot of uncertainty about what the correct answer is (e.g. Russian takeover of Ukraine or future oil price) from averaging individual judgments or by discussing those judgments in a group? The answer to this question has important practical value for a lot of decision making in government and other organizations.

Minson's research, which involves lab experiments, has two key findings. The first is that simple averaging performs dramatically worse than discussion at producing accurate estimates when some members of the group hold estimates that turn out to be dramatically wrong; when there are no such egregious errors among group members, averaging does not do badly. The accuracy advantage of discussion comes mostly from discussants giving greater weight to better information, not simply from the fact that averaging folds the terribly wrong estimates into the result.
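As a back-of-the-envelope illustration of that first finding (the numbers below are hypothetical, not from Minson's data), a simple average is highly sensitive to a single egregiously wrong estimate, while down-weighting it, which is roughly what a discussion does when better information gets more weight, largely repairs the group's answer:

```python
# Illustrative sketch only: all numbers are hypothetical, not from
# Minson's experiments. One badly wrong estimate drags a simple
# average far from the truth; discounting it limits the damage.

true_value = 100.0

# Five hypothetical individual estimates; the last is badly wrong.
estimates = [95.0, 105.0, 110.0, 90.0, 400.0]

simple_average = sum(estimates) / len(estimates)

# A crude stand-in for discussion: the group gives the outlier
# near-zero weight after comparing reasoning.
weights = [1.0, 1.0, 1.0, 1.0, 0.1]
discussion_style = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

print(f"simple average:   {simple_average:.1f}")    # 160.0
print(f"discussion-style: {discussion_style:.1f}")  # 107.3
```

The weighting scheme here is purely illustrative; the point is only that the averaged result is pulled 60 percent above the true value by one wild estimate, while the discussion-style weighted result lands within a few percent of it.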

The second finding is that the accuracy improvements from discussion are larger when discussants do not reveal their estimates in advance of the discussion. The reason is that revealing estimates in advance tends to limit the range of options considered, which has a negative effect on accuracy. (Minson notes that this is different from most people's intuition. A separate survey she conducted showed that most participants preferred having people reveal their estimates before the discussion.)

Can we apply this research to the development of estimates about uncertain facts in government and other large organizations? Minson's research involved situations where participants were often very uncertain of the facts, but where correct facts did exist even if participants didn't know them (one question, for example, asked respondents to estimate the annual salaries of nine Fortune 500 CEOs). Does this apply to uncertainty about estimates of the future, such as Ukraine or the oil price?

Clearly, Minson could not have done research on the latter, because her calculations required comparing averaging and discussion estimates against some standard of a correct answer, and no such standard yet exists for estimates about the future. Nonetheless, the principle is the same in both situations: there is an estimate that will turn out to be correct about an uncertain future, even if we don't know it now. So my view is that we can make the crossover from CEO salary estimates to Ukraine ones. (It should also be noted that Minson's experiments involve pairs rather than larger groups, but she presents some evidence that her findings would be expected to apply to teams as well.)

One of the great virtues of being at a university is getting structured opportunities to be exposed to the ideas of young colleagues like Minson. At the Kennedy School, and at most other research-oriented universities, there are regular faculty seminars where faculty present their research to colleagues, and a large proportion of the presentations are by young colleagues. Similarly, every time we search for new faculty, we hear a raft of presentations by young scholars. This is tremendously stimulating to me as an older faculty member, and it encourages me to develop new ideas and new ways of thinking, which is good for our organization as a whole. Although this is not the theme of this blog, I would note as a sidebar that government (or other) organizations would do well to find ways to structure opportunities for newer organization members to present their thoughts to those who have been around longer.

Posted by Steve Kelman on Apr 06, 2015 at 12:38 PM

