Revisiting Decision Making: The Heuristics Debate
Paragraph 1
“In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction,” Daniel Kahneman, of the Hebrew University of Jerusalem, and his colleague Amos Tversky wrote in 1973. “They rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic errors.” Heuristics are rules of thumb: decision-making shortcuts. Kahneman and Tversky didn’t think relying on them was always a bad idea, but they focused their work on heuristics that led people astray. Over the years they and their adherents assembled a long list of these decision-making flaws (the availability heuristic, the endowment effect, and so on). Kahneman won an economics Nobel in 2002; Tversky had died in 1996 and thus couldn’t share the prize. The heuristics-and-biases insights relating to money became known as behavioral economics.
Paragraph 2
The implications for how to make better decisions, though, are less clear. First-generation decision analysts such as Howard Raiffa and Ward Edwards recognized the flaws described by Kahneman and Tversky as real but thought the focus on them was misplaced and led to a fatalistic view of man as a “cognitive cripple.” Even some heuristics-and-biases researchers agreed. And so, a new set of decision scholars began to examine whether those shortcuts our brains take are actually all that irrational.
Paragraph 3
That notion wasn’t entirely new. Herbert Simon had begun using the term “heuristic” in a positive sense in the 1950s. Decision makers seldom had the time or mental processing power to follow the optimization process outlined by the decision analysts, he argued, so they “satisficed” by taking shortcuts and going with the first satisfactory course of action rather than continuing to search for the best.
Paragraph 4
Simon’s “bounded rationality” is often depicted as a precursor to the work of Kahneman and Tversky, but it differed in intent. Whereas they showed how people departed from the rational model of decision making, Simon disputed that the “rational” model was actually best. In the 1980s others began to join the argument. The most combative among them was Gerd Gigerenzer, a German psychology professor who had also done doctoral work in statistics. He was dubious, first, of some of the results: by tweaking the framing of a question, it is sometimes possible to make apparent cognitive illusions go away. Gigerenzer and several coauthors found, for example, that doctors and patients are far more likely to assess disease risks correctly when statistics are presented as natural frequencies (10 out of every 1,000) rather than as percentages…
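Gigerenzer’s natural-frequency point can be made concrete with a small worked example. The sketch below uses invented numbers (a 1% base rate and illustrative test accuracies, not figures from his studies) to show that the percentage format and the natural-frequency format yield the same answer, while the frequency version lets you simply count cases:

```python
# Hypothetical screening example illustrating natural frequencies vs. percentages.
# All numbers are invented for illustration only.

prevalence = 0.01        # 1% of patients have the condition
sensitivity = 0.80       # the test detects 80% of true cases
false_positive = 0.096   # 9.6% of healthy patients still test positive

# Percentage (probability) format: Bayes' rule.
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_sick_given_positive = prevalence * sensitivity / p_positive

# Natural-frequency format: imagine 1,000 patients and count cases.
patients = 1000
sick = round(patients * prevalence)                       # 10 sick patients
sick_positive = round(sick * sensitivity)                 # 8 of them test positive
healthy_positive = round((patients - sick) * false_positive)  # 95 false alarms
freq_answer = sick_positive / (sick_positive + healthy_positive)  # 8 out of 103

print(round(p_sick_given_positive, 3))  # ~0.078
print(round(freq_answer, 3))            # ~0.078
```

Both routes give roughly an 8% chance of illness after a positive test, but in the frequency version the answer is visible at a glance: 8 true positives among 103 positive results.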
Paragraph 5
Gigerenzer is not alone, though, in arguing that we shouldn’t be too quick to dismiss the heuristics, gut feelings, snap judgments, and other methods humans use to make decisions as necessarily inferior to the probability-based verdicts of the decision analysts. Even Kahneman shares this belief to some extent... One of the stars of Malcolm Gladwell’s book Blink, Gary Klein studies how people (firefighters, soldiers, pilots) develop expertise, and he generally sees the process as far more naturalistic and impressionistic than the models of the decision analysts. He and Kahneman have together studied when going with the gut works and concluded that, in Klein’s words, “reliable intuitions need predictable situations with opportunities for learning.”
CAT Verbal Online Course