


Danny’s students left every class with a sense that there was really no end to the problems in this world. Danny found problems where none seemed to exist; it was as if he structured the world around him so that it might be understood chiefly as a problem. To each new class the students arrived wondering what problem he might bring for them to solve. Then one day he brought them Amos Tversky.





5



THE COLLISION

Danny and Amos had been at the University of Michigan at the same time for six months, but their paths seldom crossed; their minds, never. Danny had been in one building, studying people’s pupils, and Amos had been in another, devising mathematical approaches to similarity, measurement, and decision making. “We had not had much to do with each other,” said Danny. The dozen or so graduate students in Danny’s seminar at Hebrew University were all surprised when, in the spring of 1969, Amos turned up. Danny never had guests: The seminar was his show. Amos was about as far removed from the real-world problems in Applications of Psychology as a psychologist could be. Plus, the two men didn’t seem to mix. “It was the graduate students’ perception that Danny and Amos had some sort of rivalry,” said one of the students in the seminar. “They were clearly the stars of the department who somehow or other hadn’t gotten in sync.”

Before he left for North Carolina, Amnon Rapoport had felt that he and Amos disturbed Danny in some way that was hard to pin down. “We thought he was afraid of us or something,” said Amnon. “Suspicious of us.” For his part, Danny said he’d simply been curious about Amos Tversky. “I think I wanted a chance to know him better,” he said.

Danny invited Amos to come to his seminar to talk about whatever he wanted to talk about. He was a little surprised that Amos didn’t talk about his own work—but then Amos’s work was so abstract and theoretical that he probably decided it had no place in the seminar. Those who stopped to think about it found it odd that Amos’s work betrayed so little interest in the real world, when Amos was so intimately and endlessly engaged with that world, and that, conversely, Danny’s work was consumed by real-world problems, even as he kept other people at a distance.

Amos was now what people referred to, a bit confusingly, as a “mathematical psychologist.” Nonmathematical psychologists, like Danny, quietly viewed much of mathematical psychology as a series of pointless exercises conducted by people who were using their ability to do math as camouflage for how little of psychological interest they had to say. Mathematical psychologists, for their part, tended to view nonmathematical psychologists as simply too stupid to understand the importance of what they were saying. Amos was then at work with a team of mathematically gifted American academics on what would become a three-volume, molasses-dense, axiom-filled textbook called Foundations of Measurement—more than a thousand pages of arguments and proofs of how to measure stuff. On the one hand, it was a wildly impressive display of pure thought; on the other, the whole enterprise had a tree-fell-in-the-woods quality to it. How important could the sound it made be, if no one was able to hear it?

Instead of his own work, Amos talked to Danny’s students about the cutting-edge research being done in Ward Edwards’s lab at the University of Michigan. Edwards and his students were still engaged in what they considered to be an original line of inquiry. The specific study Amos described was about how people, in their decision making, responded to new information. As Amos told it, the psychologists had brought people in and presented them with two book bags filled with poker chips. Each bag contained both red poker chips and white poker chips. In one of the bags, 75 percent of the chips were white and 25 percent were red; in the other bag, 75 percent of the chips were red and 25 percent were white. The subject picked one of the bags at random and, without glancing inside the bag, began to pull chips out of it, one at a time. After extracting each chip, he’d give the psychologists his best guess of the odds that the bag he was holding was filled with mostly red, or mostly white, chips.

The beauty of the experiment was that there was a correct answer to the question: What is the probability that I am holding the bag of mostly red chips? It was provided by a statistical formula called Bayes’s theorem (after Thomas Bayes, who, strangely, left the formula for others to discover in his papers after his death, in 1761). Bayes’s rule allowed you to calculate the true odds, after each new chip was pulled from it, that the book bag in question was the one with majority white, or majority red, chips. Before any chips had been withdrawn, those odds were 50:50—the bag in your hands was equally likely to be either majority red or majority white. But how did the odds shift after each new chip was revealed?
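
In its odds form, which suits this experiment, the rule can be written as follows (a standard textbook statement of the theorem, not language from the book):

\[
\underbrace{\frac{P(\text{mostly red} \mid \text{chips drawn})}{P(\text{mostly white} \mid \text{chips drawn})}}_{\text{posterior odds}}
=
\underbrace{\frac{P(\text{mostly red})}{P(\text{mostly white})}}_{\text{prior odds}}
\times
\underbrace{\frac{P(\text{chips drawn} \mid \text{mostly red})}{P(\text{chips drawn} \mid \text{mostly white})}}_{\text{likelihood ratio}}
\]

Before any chip is drawn, the prior odds are 1 (the 50:50 start), so everything that follows turns on the likelihood ratio of each successive draw.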

That depended, in a big way, on the so-called base rate: the percentage of red versus white chips in the bag. (These percentages were presumed to be known.) If you know that one bag contains 99 percent red chips and the other, 99 percent white chips, the color of the first chip drawn from the bag tells you a lot more than if you know that each bag contains only 51 percent red or white. But how much more does it tell you? Plug the base rate into Bayes’s formula and you get an answer. In the case of two bags known to be 75 percent–25 percent majority red or white, the odds that you are holding the bag containing mostly red chips rise by three times every time you draw a red chip, and are divided by three every time you draw a white chip. If the first chip you draw is red, there is a 3:1 (or 75 percent) chance that the bag you are holding is majority red. If the second chip you draw is also red, the odds rise to 9:1, or 90 percent. If the third chip you draw is white, they fall back to 3:1. And so on.
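
As a check on that arithmetic, here is a minimal sketch (not from the book; the function and variable names are invented for illustration, not the experimenters' notation) of the chip-by-chip updating with the 75/25 bags:

```python
# A minimal sketch of the Bayesian updating described above, assuming
# two bags that are 75/25 majority red or majority white.

def update_odds(odds_red, chip, p_red_given_red_bag=0.75):
    """Multiply the odds that the bag is mostly red by the likelihood ratio
    of the chip just drawn (3 for a red chip, 1/3 for a white one)."""
    likelihood_ratio = p_red_given_red_bag / (1 - p_red_given_red_bag)  # 0.75 / 0.25 = 3
    return odds_red * likelihood_ratio if chip == "red" else odds_red / likelihood_ratio

odds = 1.0  # 50:50 before any chip has been drawn
for chip in ["red", "red", "white"]:
    odds = update_odds(odds, chip)
    probability = odds / (1 + odds)
    print(f"drew {chip}: odds {odds:.0f}:1, probability {probability:.0%}")

# Prints 3:1 (75%), then 9:1 (90%), then back to 3:1 (75%) -- the sequence
# the text walks through.
```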
