21 Lessons for the 21st Century







The philosophical car


People might object that algorithms could never make important decisions for us, because important decisions usually involve an ethical dimension, and algorithms don’t understand ethics. Yet there is no reason to assume that algorithms won’t be able to outperform the average human even in ethics. Already today, as devices like smartphones and autonomous vehicles undertake decisions that used to be a human monopoly, they start to grapple with the same kind of ethical problems that have bedevilled humans for millennia.

For example, suppose two kids chasing a ball jump right in front of a self-driving car. Based on its lightning calculations, the algorithm driving the car concludes that the only way to avoid hitting the two kids is to swerve into the opposite lane, and risk colliding with an oncoming truck. The algorithm calculates that in such a case there is a 70 per cent chance that the owner of the car – who is fast asleep in the back seat – would be killed. What should the algorithm do?16

Philosophers have been arguing about such ‘trolley problems’ for millennia (they are called ‘trolley problems’ because the textbook examples in modern philosophical debates refer to a runaway trolley car racing down a railway track, rather than to a self-driving car).17 Up till now, these arguments have had embarrassingly little impact on actual behaviour, because in times of crisis humans all too often forget about their philosophical views and follow their emotions and gut instincts instead.

One of the nastiest experiments in the history of the social sciences was conducted in December 1970 on a group of students at the Princeton Theological Seminary, who were training to become ministers in the Presbyterian Church. Each student was asked to hurry to a distant lecture hall, and there give a talk on the Good Samaritan parable, which tells how a Jew travelling from Jerusalem to Jericho was robbed and beaten by criminals, who then left him to die by the side of the road. After some time a priest and a Levite passed nearby, but both ignored the man. In contrast, a Samaritan – a member of a sect much despised by the Jews – stopped when he saw the victim, took care of him, and saved his life. The moral of the parable is that people’s merit should be judged by their actual behaviour, rather than by their religious affiliation.

The eager young seminarians rushed to the lecture hall, contemplating on the way how best to explain the moral of the Good Samaritan parable. But the experimenters planted in their path a shabbily dressed person, who was sitting slumped in a doorway with his head down and his eyes closed. As each unsuspecting seminarian was hurrying past, the ‘victim’ coughed and groaned pitifully. Most seminarians did not even stop to enquire what was wrong with the man, let alone offer any help. The emotional stress created by the need to hurry to the lecture hall trumped their moral obligation to help strangers in distress.18

Human emotions trump philosophical theories in countless other situations. This makes the ethical and philosophical history of the world a rather depressing tale of wonderful ideals and less than ideal behaviour. How many Christians actually turn the other cheek, how many Buddhists actually rise above egoistic obsessions, and how many Jews actually love their neighbours as themselves? That’s just the way natural selection has shaped Homo sapiens. Like all mammals, Homo sapiens uses emotions to quickly make life and death decisions. We have inherited our anger, our fear and our lust from millions of ancestors, all of whom passed the most rigorous quality control tests of natural selection.

Unfortunately, what was good for survival and reproduction in the African savannah a million years ago does not necessarily make for responsible behaviour on twenty-first-century motorways. Distracted, angry and anxious human drivers kill more than a million people in traffic accidents every year. We can send all our philosophers, prophets and priests to preach ethics to these drivers – but on the road, mammalian emotions and savannah instincts will still take over. Consequently, seminarians in a rush will ignore people in distress, and drivers in a crisis will run over hapless pedestrians.

This disjunction between the seminary and the road is one of the biggest practical problems in ethics. Immanuel Kant, John Stuart Mill and John Rawls can sit in some cosy university hall and discuss theoretical problems in ethics for days – but would their conclusions actually be implemented by stressed-out drivers caught in a split-second emergency? Perhaps Michael Schumacher – the Formula One champion who is sometimes hailed as the best driver in history – had the ability to think about philosophy while racing a car; but most of us aren’t Schumacher.

Computer algorithms, however, have not been shaped by natural selection, and they have neither emotions nor gut instincts. Hence in moments of crisis they could follow ethical guidelines much better than humans – provided we find a way to code ethics in precise numbers and statistics. If we teach Kant, Mill and Rawls to write code, they can carefully program the self-driving car in their cosy laboratory, and be certain that the car will follow their commandments on the highway. In effect, every car will be driven by Michael Schumacher and Immanuel Kant rolled into one.

Thus if you program a self-driving car to stop and help strangers in distress, it will do so come hell or high water (unless, of course, you insert an exception clause for infernal or high-water scenarios). Similarly, if your self-driving car is programmed to swerve to the opposite lane in order to save the two kids in its path, you can bet your life this is exactly what it will do. Which means that when designing their self-driving car, Toyota or Tesla will be transforming a theoretical problem in the philosophy of ethics into a practical problem of engineering.
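The 'practical problem of engineering' described above can be illustrated with a toy sketch. Everything here is hypothetical – the function name, the candidate manoeuvres, and the probabilities (the 70 per cent figure is taken from the scenario in the text; the rest are invented) – and it encodes only one crude utilitarian rule, minimising expected deaths. Real autonomous-vehicle software is vastly more complex, but the point stands: once a rule is written down, the car follows it every time.

```python
# Hypothetical sketch: encoding a crude utilitarian rule as a decision
# procedure. All names and numbers are invented for illustration.

def choose_manoeuvre(options):
    """Return the option with the lowest expected number of deaths.

    Each option maps "outcomes" to a list of (probability, deaths)
    pairs estimated for that manoeuvre.
    """
    def expected_deaths(option):
        return sum(p * deaths for p, deaths in option["outcomes"])
    return min(options, key=expected_deaths)

# The scenario from the text, with made-up numbers for the first option:
stay_in_lane = {
    "name": "stay in lane",
    "outcomes": [(0.9, 2)],   # assume a 90% chance both kids are killed
}
swerve = {
    "name": "swerve into the opposite lane",
    "outcomes": [(0.7, 1)],   # 70% chance the sleeping owner is killed
}

decision = choose_manoeuvre([stay_in_lane, swerve])
print(decision["name"])  # → swerve into the opposite lane
```

Under this particular rule the expected deaths are 1.8 for staying in lane versus 0.7 for swerving, so the car swerves – consistently, unemotionally, exactly as programmed. Swap in a different ethical theory (say, a deontological prohibition on actively endangering the owner) and the same machinery yields a different, equally consistent answer; the philosophical dispute becomes a dispute over which function to ship.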
