The Undoing Project: A Friendship that Changed the World



What was a person’s “utility”? (That odd, off-putting word here meant something like “the value a person assigns to money.”) Well, that depended on how much money the person had to begin with. But a pauper holding a lottery ticket with an expected value of 10,000 ducats would certainly experience greater utility from 9,000 ducats in cash than from the gamble itself.

“People will choose whatever they most want” is not all that helpful as a theory to predict human behavior. What saved “expected utility theory,” as it came to be called, from being so general as to be meaningless were its assumptions about human nature. To his assumption that people making decisions sought to maximize utility, Bernoulli added an assumption that people were “risk averse.” Amos’s textbook defined risk aversion this way: “The more money one has, the less he values each additional increment, or, equivalently, that the utility of any additional dollar diminishes with an increase in capital.” You value the second thousand dollars you get your hands on a bit less than you do the first thousand, just as you value the third thousand a bit less than the second thousand. The marginal value of the dollars you give up to buy fire insurance on your house is less than the marginal value of the dollars you lose if your house burns down—which is why even though the insurance is, strictly speaking, a stupid bet, you buy it. You place less value on the $1,000 you stand to win flipping a coin than you do on the $1,000 already in your bank account that you stand to lose—and so you reject the bet. A pauper places so much value on the first 9,000 ducats he gets his hands on that the risk of not having them overwhelms the temptation to gamble, at favorable odds, for more.

This was not to say that real people in the real world behaved as they did because they had the traits Bernoulli ascribed to them. Only that the theory seemed to describe some of what people did in the real world, with real money. It explained the desire to buy insurance. It distinctly did not explain the human desire to buy a lottery ticket, however. It effectively turned a blind eye to gambling. Odd this, as the search for a theory about how people made risky decisions had started as an attempt to make Frenchmen shrewder gamblers.

Amos’s text skipped over the long, tortured history of utility theory after Bernoulli all the way to 1944. A Hungarian Jew named John von Neumann and an Austrian anti-Semite named Oskar Morgenstern, both of whom fled Europe for America, somehow came together that year to publish what might be called the rules of rationality. A rational person making a decision between risky propositions, for instance, shouldn’t violate the von Neumann and Morgenstern transitivity axiom: If he preferred A to B and B to C, then he should prefer A to C. Anyone who preferred A to B and B to C but then turned around and preferred C to A violated expected utility theory. Among the remaining rules, maybe the most critical—given what would come—was what von Neumann and Morgenstern called the “independence axiom.” This rule said that a choice between two gambles shouldn’t be changed by the introduction of some irrelevant alternative. For example: You walk into a deli to get a sandwich and the man behind the counter says he has only roast beef and turkey. You choose turkey. As he makes your sandwich he looks up and says, “Oh, yeah, I forgot I have ham.” And you say, “Oh, then I’ll take the roast beef.” Von Neumann and Morgenstern’s axiom said, in effect, that you can’t be considered rational if you switch from turkey to roast beef just because they found some ham in the back.

And, really, who would switch? Like the other rules of rationality, the independence axiom seemed reasonable, and not obviously contradicted by the way human beings generally behaved.

Expected utility theory was just a theory. It didn’t pretend to be able to explain or predict everything people did when they faced some risky decision. Danny gleaned its importance not from reading Amos’s description of it in the undergraduate textbook but only from the way Amos spoke of it. “This was a sacred thing for Amos,” said Danny. Although the theory made no great claim to psychological truth, the textbook Amos had coauthored made it clear that it had been accepted as psychologically true. Pretty much everyone interested in such things, a group that included the entire economics profession, seemed to take it as a fair description of how ordinary people faced with risky alternatives actually went about making choices. That leap of faith had at least one obvious implication for the sort of advice economists gave to political leaders: It tilted everything in the direction of giving people the freedom to choose and leaving markets alone. After all, if people could be counted on to be basically rational, markets could, too.

Amos had clearly wondered about that, even as a Michigan graduate student. Amos had always had an almost jungle instinct for the vulnerability of other people’s ideas. He of course knew that people made decisions that the theory would not have predicted. Amos himself had explored how people could be—as the theory assumed they were not—reliably “intransitive.” As a graduate student in Michigan, he had induced both Harvard undergraduates and convicted murderers in Michigan prisons, over and over again, to choose gamble A over gamble B, then choose gamble B over gamble C—and then turn around and choose C instead of A. That violated a rule of expected utility theory. And yet Amos had never followed his doubts very far. He saw that people sometimes made mistakes; he did not see anything systematically irrational in the way they made decisions. He hadn’t figured out how to bring deep insights about human nature into the mathematical study of human decision making.
