


Nevertheless, though liberalism is wrong to think that our feelings reflect a free will, up until today relying on feelings still made good practical sense. For although there was nothing magical or free about our feelings, they were the best method in the universe for deciding what to study, whom to marry, and which party to vote for. And no outside system could hope to understand my feelings better than I do. Even if the Spanish Inquisition or the Soviet KGB spied on me every minute of every day, they lacked the biological knowledge and the computing power necessary to hack the biochemical processes shaping my desires and choices. For all practical purposes, it was reasonable to argue that I had free will, because my will was shaped mainly by the interplay of inner forces, which nobody outside could see. I could enjoy the illusion that I control my secret inner arena, while outsiders could never really understand what is happening inside me and how I make decisions.

Accordingly, liberalism was correct in counselling people to follow their hearts rather than the dictates of some priest or party apparatchik. Soon, however, computer algorithms could give you better counsel than human feelings. As the Spanish Inquisition and the KGB give way to Google and Baidu, ‘free will’ will likely be exposed as a myth, and liberalism might lose its practical advantages.

For we are now at the confluence of two immense revolutions. On the one hand, biologists are deciphering the mysteries of the human body, in particular of the brain and of human feelings. At the same time, computer scientists are giving us unprecedented data-processing power. When the biotech revolution merges with the infotech revolution, it will produce Big Data algorithms that can monitor and understand my feelings much better than I can, and then authority will probably shift from humans to computers. My illusion of free will is likely to disintegrate as I daily encounter institutions, corporations and government agencies that understand and manipulate what was hitherto my inaccessible inner realm.

This is already happening in the field of medicine. The most important medical decisions in our lives rely not on our feelings of illness or wellness, or even on the informed predictions of our doctor – but on the calculations of computers that understand our bodies much better than we do. Within a few decades, Big Data algorithms informed by a constant stream of biometric data could monitor our health 24/7. They could detect the very beginning of influenza, cancer or Alzheimer’s disease, long before we feel anything is wrong with us. They could then recommend appropriate treatments, diets and daily regimens, custom-built for our unique physique, DNA and personality.

People will enjoy the best healthcare in history, but for precisely this reason they will probably be sick all the time. There is always something wrong somewhere in the body. There is always something that can be improved. In the past, you felt perfectly healthy as long as you didn’t feel pain or suffer from an apparent disability such as limping. But by 2050, thanks to biometric sensors and Big Data algorithms, diseases may be diagnosed and treated long before they lead to pain or disability. As a result, you will always find yourself suffering from some ‘medical condition’ and following this or that algorithmic recommendation. If you refuse, your medical insurance might become invalid, or your boss might fire you – why should they pay the price of your obstinacy?

It is one thing to continue smoking despite general statistics that connect smoking with lung cancer. It is a very different thing to continue smoking despite a concrete warning from a biometric sensor that has just detected seventeen cancerous cells in your upper left lung. And if you are willing to defy the sensor, what will you do when the sensor forwards the warning to your insurance agency, your manager, and your mother?

Who will have the time and energy to deal with all these illnesses? In all likelihood, we will simply instruct our health algorithm to deal with most of these problems as it sees fit. At most, it will send periodic updates to our smartphones, telling us that ‘seventeen cancerous cells were detected and destroyed’. Hypochondriacs might dutifully read these updates, but most of us will ignore them just as we ignore those annoying anti-virus notices on our computers.





The drama of decision-making


What is already beginning to happen in medicine is likely to occur in more and more fields. The key invention is the biometric sensor, which people can wear on or inside their bodies, and which converts biological processes into electronic information that computers can store and analyse. Given enough biometric data and enough computing power, external data-processing systems can hack all your desires, decisions and opinions. They can know exactly who you are.

Most people don’t know themselves very well. When I was twenty-one, I finally realised that I was gay, after several years of living in denial. That’s hardly exceptional. Many gay men spend their entire teenage years unsure about their sexuality. Now imagine the situation in 2050, when an algorithm can tell any teenager exactly where he is on the gay/straight spectrum (and even how malleable that position is). Perhaps the algorithm shows you pictures or videos of attractive men and women, tracks your eye movements, blood pressure and brain activity, and within five minutes spits out a number on the Kinsey scale.6 It could have saved me years of frustration. Perhaps you personally wouldn’t want to take such a test, but then maybe you find yourself with a group of friends at Michelle’s boring birthday party, and somebody suggests you all take turns checking yourself on this cool new algorithm (with everybody standing around to watch the results – and comment on them). Would you just walk away?
