The Undoing Project: A Friendship that Changed the World



Amos and Danny left unaddressed the question of how exactly people formed mental models in the first place, and how they made judgments of similarity. Instead, they said, let’s focus on cases where the mental model that people have in their heads is fairly obvious. The more similar the specific case is to the notion in your head, the more likely you are to believe that the case belongs to the larger group. “Our thesis,” they wrote, “is that, in many situations, an event A is judged to be more probable than an event B whenever A appears more representative than B.” The more the basketball player resembles your mental model of an NBA player, the more likely you will think him to be an NBA player.

They had a hunch that people, when they formed judgments, weren’t just making random mistakes—that they were doing something systematically wrong. The weird questions they put to Israeli and American students were designed to tease out the pattern in human error. The problem was subtle. The rule of thumb they had called representativeness wasn’t always wrong. If the mind’s approach to uncertainty was occasionally misleading, it was because it was often so useful. Much of the time, the person who can become a good NBA player matches up pretty well with the mental model of “good NBA player.” But sometimes a person does not—and in the systematic errors they led people to make, you could glimpse the nature of these rules of thumb.

For instance, in families with six children, the birth order B G B B B B was about as likely as G B G B B G. But Israeli kids—like pretty much everyone else on the planet, it would emerge—naturally seemed to believe that G B G B B G was a more likely birth sequence. Why? “The sequence with five boys and one girl fails to reflect the proportion of boys and girls in the population,” they explained. It was less representative. What is more, if you asked the same Israeli kids to choose the more likely birth order in families with six children—B B B G G G or G B B G B G—they overwhelmingly opted for the latter. But the two birth orders are equally likely. So why did people almost universally believe that one was far more likely than the other? Because, said Danny and Amos, people thought of birth order as a random process, and the second sequence looks more “random” than the first.
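The arithmetic behind that claim is easy to check directly. Here is a minimal sketch, assuming births are independent and a boy or a girl is equally likely at each birth (the idealization the question relies on); under that assumption every specific six-child sequence has the same probability, 1/64.

```python
from fractions import Fraction

def sequence_probability(order, p_boy=Fraction(1, 2)):
    """Probability of one specific birth order, assuming independent births
    and (by default) equal chances of a boy or a girl at each birth."""
    p = Fraction(1)
    for child in order:
        p *= p_boy if child == "B" else 1 - p_boy
    return p

# All four sequences from the question come out to exactly 1/64.
for order in ["BGBBBB", "GBGBBG", "BBBGGG", "GBBGBG"]:
    print(order, sequence_probability(order))
```

(In reality boys are born slightly more often than girls, which is why the five-boy sequence is "about as likely" rather than exactly as likely; nudging p_boy up a little in the sketch shows that sequence edging slightly ahead.)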

The natural next question: When does our rule-of-thumb approach to calculating the odds lead to serious miscalculation? One answer was: Whenever people are asked to evaluate anything with a random component to it. It wasn’t enough that the uncertain event being judged resembled the parent population, wrote Danny and Amos. “The event should also reflect the properties of the uncertain process by which it is generated.” That is, if a process is random, its outcome should appear random. They didn’t explain how people’s mental model of “randomness” was formed in the first place. Instead they said, Let’s look at judgments that involve randomness, because we psychologists can all pretty much agree on people’s mental model of it.

Londoners in the Second World War thought that German bombs were targeted, because some parts of the city were hit repeatedly while others were not hit at all. (Statisticians later showed that the distribution was exactly what you would expect from random bombing.) People find it a remarkable coincidence when two students in the same classroom share a birthday, when in fact there is a better than even chance, in any group of twenty-three people, that two of its members will have been born on the same day. We have a kind of stereotype of “randomness” that differs from true randomness. Our stereotype of randomness lacks the clusters and patterns that occur in true random sequences. If you pass out twenty marbles randomly to five boys, they are actually more likely to each receive four marbles (column II) than they are to receive the uneven combination in column I, and yet American college students insisted that the unequal distribution in column I was more likely than the equal one in column II. Why? Because column II “appears too lawful to be the result of a random process. . . . ”
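Both of those claims yield to a few lines of arithmetic. The sketch below is an illustration rather than a reconstruction of the original materials: it assumes 365 equally likely birthdays, and because the columns the passage refers to are not reproduced in this excerpt, it uses 4, 4, 5, 4, 3 as a stand-in for the kind of uneven split shown in column I.

```python
from math import factorial

# Birthday coincidence: the chance that at least two of 23 people share a
# birthday, assuming 365 equally likely birthdays and independence.
p_no_match = 1.0
for i in range(23):
    p_no_match *= (365 - i) / 365
print(f"shared birthday among 23 people: {1 - p_no_match:.3f}")  # about 0.507

# Marbles: the probability of one specific split of 20 marbles among 5 boys,
# each marble independently going to each boy with probability 1/5.
def split_probability(counts):
    n = sum(counts)
    denom = 1
    for c in counts:
        denom *= factorial(c)
    return (factorial(n) // denom) / 5 ** n  # multinomial coefficient over 5^20

print(split_probability([4, 4, 4, 4, 4]))  # the "too lawful" even split
print(split_probability([4, 4, 5, 4, 3]))  # a stand-in for column I's uneven split
```

The even split really is the more probable of the two specific outcomes; it only looks less random.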

A suggestion arose from Danny and Amos’s paper: If our minds can be misled by our false stereotype of something as measurable as randomness, how much might they be misled by other, vaguer stereotypes?

The average heights of adult males and females in the U.S. are, respectively, 5 ft. 10 in. and 5 ft. 4 in. Both distributions are approximately normal with a standard deviation of about 2.5 in.

An investigator has selected one population by chance and has drawn from it a random sample.

What do you think the odds are that he has selected the male population if

1. The sample consists of a single person whose height is 5 ft. 10 in.?

2. The sample consists of 6 persons whose average height is 5 ft. 8 in.?

The odds most commonly assigned by their subjects were, in the first case, 8:1 in favor of the male population and, in the second case, 2.5:1 in favor. The correct odds were 16:1 in favor in the first case and 29:1 in favor in the second. The sample of six people carried far more information than the sample of one. And yet people believed, incorrectly, that a single person who stood five foot ten was stronger evidence of having drawn from the male population than six people whose average height was five foot eight. People didn’t just miscalculate the true odds of a situation: They treated the less likely proposition as if it were the more likely one. And they did this, Amos and Danny surmised, because they saw “5 ft. 10 in.” and thought: That’s the typical guy! The stereotype of the man blinded them to the likelihood that they were in the presence of a tall woman.
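The machinery behind those “correct odds” is a Bayesian update: when the population is chosen by a coin flip, the posterior odds that it was the male population equal the ratio of how likely each population is to produce the observed sample. Here is a back-of-the-envelope sketch, assuming normal populations with means of 70 and 64 inches, a standard deviation of 2.5 inches, equal prior odds, and the reported heights taken as exact rather than rounded to the nearest inch. Under those particular idealizations the arithmetic comes out even more lopsided than the figures quoted above, which the original paper computed under details not reproduced in this excerpt; the lesson is the same either way: the six-person sample is far stronger evidence than the single measurement.

```python
from math import exp

MALE_MEAN, FEMALE_MEAN, SIGMA = 70.0, 64.0, 2.5  # heights in inches

def odds_of_male_population(sample_mean, n, prior_odds=1.0):
    """Posterior odds that the sample was drawn from the male population.

    The mean of n draws from a Normal(mu, SIGMA^2) population is itself
    Normal(mu, SIGMA^2 / n), so with equal priors the posterior odds are
    the ratio of the two normal densities at the observed sample mean.
    """
    variance_of_mean = SIGMA ** 2 / n
    def log_density(mu):
        # log of the normal density at sample_mean, up to a constant
        # that cancels in the ratio below
        return -((sample_mean - mu) ** 2) / (2 * variance_of_mean)
    likelihood_ratio = exp(log_density(MALE_MEAN) - log_density(FEMALE_MEAN))
    return prior_odds * likelihood_ratio

# One person at 5'10": roughly 18 to 1 in favor of the male population.
print(odds_of_male_population(sample_mean=70, n=1))
# Six people averaging 5'8": far stronger evidence still, despite the lower
# average, because six measurements pin the mean down much more tightly.
print(odds_of_male_population(sample_mean=68, n=6))
```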
