How I Learned to Stop Worrying and Love A.I.



Garry Kasparov on a television monitor at the start of the final match against IBM's Deep Blue computer in May 1997 in New York. Credit: Stan Honda/Agence France-Presse — Getty Images

The distinction between man and machine is under siege. The technology wizard Ray Kurzweil speaks with casual confidence of achieving electromagnetic immortality, with our once-human selves eternally etched onto universal servers. For me, the possibility that machines will acquire the equivalent of human feelings and emotions is pure fantasy. And yet, as a neurologist, I cannot ignore advancing machine intelligence's implications for the human mind.

To begin, think back to the 1997 defeat of Garry Kasparov by IBM's Deep Blue. One pivotal move that shifted the match in Deep Blue's favor prompted Kasparov to accuse IBM of human intervention. In so doing, he highlighted the essential cognitive dissonance that we will face as machines get smarter.

Kasparov couldn’t believe that he had been beaten by a computer because he felt the play was a sign of superior intelligence. But he was wrong — years later it was revealed by Deep Blue’s co-creator that the triumphant move had been a result of a software bug. When presented with several options, Deep Blue could not make a definitive decision, so made a random move that rattled Kasparov.

Uncovering the so-called biology of creativity is big business. fMRI aficionados tell us which brain areas light up when someone has a novel idea. Brain-wave experts propose electrical patterns specific to originality. Even if these observations pan out, they cannot tell us how to interpret a brilliant chess move arising out of a software glitch. If we are forced to expand our notion of creativity to include random electrical firings, what does that tell us about our highly touted imaginative superiority over a mindless machine?

For Kasparov, Deep Blue was an enigmatic black box, his opinions shaped by his biases as to what constitutes human versus machine intelligence. He’s not alone. We have a strong personal sense of how humans think because we experience thoughts. We know what it means to understand something because we experience the sensation of understanding. This sense of understanding requires both consciousness and awareness of one’s thoughts. We cannot conceive of understanding without consciousness.

What is overlooked in this equation is the quality or accuracy of the actual decision. A standard move by a chess player is taken as evidence of understanding, but a superior move by an inanimate collection of wires and transistors is considered rote machine learning, not understanding. To put this in perspective, imagine a self-proclaimed chess novice making the same pivotal move as Deep Blue. I doubt that any of us would believe her claim to know nothing about chess.

Yet neuroscience is revealing that understanding isn’t a result of conscious deliberation. From the hunch to the “aha,” various degrees of the feeling of knowing are involuntary mental sensations that arise from subliminal brain mechanisms. They are the brain’s way of telling us the likelihood that a subliminal thought is correct. More akin to bodily sensations than emotions, they can occur spontaneously, with certain psychoactive drugs and with direct brain stimulation in the absence of any conscious thought. We can’t think up an aha — the ultimate sense of understanding. It just happens to us in the same way that we experience love and surprise.

Conversely, we can know things without any sense of knowing (as in the classic example of blindsight, where patients with cortical blindness can point out in which visual field a light is flashing even when they consciously see nothing and are entirely unaware of this knowledge).

If we accept that the feeling of understanding is an involuntary sensation that machines don't experience, one would think we could stop worrying about what machines "understand." Not so. In 1980 the philosopher John Searle introduced the Chinese Room argument to show that it is impossible for digital computers to understand language or think. His 1999 summary of the argument goes as follows:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a database) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.
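Stripped to code, the room is little more than a lookup table. A toy rendering (the entries below are invented placeholders, not a real database of Chinese):

```python
# The "program": rules mapping input symbols to output symbols.
# The operator executing it need not understand either side.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "天气好吗？": "今天天气很好。",  # "Is the weather nice?" -> "Lovely today."
}

def chinese_room(question: str) -> str:
    # Match the shapes, copy out the prescribed answer.
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # fluent output, zero comprehension
```

The answers can be perfectly fluent while the operator, and the program, understand nothing; that is the argument's whole point.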

Back in 1980, when we knew little about the brain and artificial intelligence hadn’t yet flexed much practical muscle, the argument felt reasonable; absent the ability to understand, you wouldn’t expect A.I. to make sophisticated decisions that are the equivalent of smart human thought. Thirty-five years later, though, the argument seems outdated. At bottom, it is a convoluted way of saying that machines don’t have consciousness and feelings. Denying machine understanding tells us nothing about the potential limits of machine intelligence. Even so, according to the Stanford Encyclopedia of Philosophy, the Chinese Room argument remains the most widely discussed philosophical argument in cognitive science since the Turing Test.

It’s as though our self-proclaimed position of superiority and uniqueness is constantly threatened, and we seem constitutionally compelled to compare ourselves to all other potentially thinking entities. A few hundred years ago, Descartes assumed that animals were automatons. Now we know that crows use tools and chimpanzees wage territorial war. Still, we aren’t worried about crow and chimpanzee takeover of our planet, or that they are going to replace us as the highest life form on earth. But machines, well that’s a different story.

To his credit, Kasparov saw the future. As the story goes, his close friends tried to console him for his loss by arguing that the computer had enormously greater computational power than the human brain but did not understand chess. Kasparov's prescient response, referring to the vast amount of information the computer had processed, was: "More quantity creates better quality."

Most of us now know this to be true in our own lives. As a practicing neurologist, I took great pride in my clinical reservoir of obscure information. Now any hand-held device with a modicum of memory has a larger and more accurate database. And it isn't just neurology. I have had a lifelong fascination with poker, and have built a reasonable skill set through practice, study and a bit of math. For me, the pleasure of the game is figuring out what an opponent has, what a bet means, when he's bluffing. In essence, I have loved neurology and poker because they allow me to use my wits. No longer.

In the last several years, a poker-playing program (Cepheus) developed by the computer science department at the University of Alberta has consistently outplayed the world's best heads-up limit hold 'em players. What makes this conquest so intriguing is that the computer isn't programmed in advance to play in any particular style and has no built-in knowledge of the intricacies of poker theory. Instead, it is a self-teaching program, driven by an algorithm known as counterfactual regret minimization, with a huge memory capacity (4,000 terabytes). It plays and records the outcomes of millions of trial-and-error simulations, eventually learning the optimal strategy for any given situation. It does so without any knowledge of the game or its opponent, or any of the subtleties that inform the best human players. It is completely in the dark as to why it does anything. (If you want, you can test your skills against the program at poker.srv.ualberta.ca.)
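The simplest relative of counterfactual regret minimization, regret matching, can be demonstrated on a toy game. Here is a sketch of my own (not the Alberta group's code), using rock-paper-scissors self-play:

```python
import random

ACTIONS = ["rock", "paper", "scissors"]
WINS = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}

def payoff(a, b):
    # +1 win, 0 tie, -1 loss, from the first player's point of view.
    return 1 if (a, b) in WINS else 0 if a == b else -1

def current_strategy(regrets):
    # Play each action in proportion to its accumulated positive regret.
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

def train(iterations=100_000):
    regrets = [0.0] * 3
    strategy_sum = [0.0] * 3
    for _ in range(iterations):
        strategy = current_strategy(regrets)
        strategy_sum = [s + p for s, p in zip(strategy_sum, strategy)]
        me = random.choices(range(3), weights=strategy)[0]
        opp = random.choices(range(3), weights=strategy)[0]  # self-play
        for a in range(3):
            # Regret: how much better action a would have done than the
            # action actually taken. No game theory lives anywhere else.
            regrets[a] += payoff(ACTIONS[a], ACTIONS[opp]) - payoff(ACTIONS[me], ACTIONS[opp])
    total = sum(strategy_sum)
    return {ACTIONS[a]: strategy_sum[a] / total for a in range(3)}

print(train())  # the average strategy drifts toward equilibrium: roughly 1/3 each
```

Scale the same loop up to the vastly larger game tree of limit hold 'em and you have the gist of how Cepheus learned; at no point does anything resembling understanding enter the loop.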

So if we are to accept reality, and acknowledge this sort of relative superiority in machines, how should we adapt? I like the perspective of a young friend of mine, a top-flight professional hold 'em player, who has spent considerable time playing (but rarely winning) against Cepheus. He is hoping to improve his game by observing and unraveling the presumed reasons behind the computer's often counterintuitive plays. He doesn't care whether Cepheus understands anything about the game of poker. "Hey, I'm practical, not a philosopher. If it knows something that I don't, I'm all for learning it."

Rather than burden ourselves with biological biases as to what constitutes understanding, let me suggest adopting a new taxonomy. Let's give machines the status of a separate species with a distinctly different type of intellect: one that is superior in data crunching but devoid of emotional reasoning. Not better, not worse, just different. No more condescension based on animistic beliefs. No more machine worship based on one's love of technology. Let's avoid words like "thinking" and "understanding" when talking about machine intelligence; they add nothing to our understanding of their understanding (see what I mean?). We are slowly learning the myriad ways that animals and plants exhibit their own forms of intelligence; the same criteria should apply to machines.

The division is straightforward. For data that can be quantified, wisdom will become collective, not personal. We will ask our smart machines to tell us which will be the best treatment for an illness, the best move for a chess match or poker game, the optimal rush hour traffic flow, the likelihood of climate change. We cannot compete at this level.

The ultimate value added of human thought will lie in our ability to contemplate the non-quantifiable. Emotions, feelings and intentions, the stuff of being human, don't lend themselves to precise descriptions and calculations. Machines cannot and will not be able to tell us the best immigration policies, whether to proceed with gene therapy, or whether gun control is in our best interest. Computer modeling can show us how subtle biases can lead to overt racism and bigotry, but it cannot factor in the flood of feelings one experiences when looking at a photograph of a lynching.

Most of this seems too obvious for words. We have emotional intelligence; machines don’t. Rather than fretting over what sources of pride machines will take from us, we should focus on those areas where man alone can make a difference.

In fairness, this essay reflects a long-festering personal agenda. My real concern is that, in keeping with our growing obsession with creating and using smart machines, we are on our way to losing the cognitive skills that machines won't replace. Witness the decline in university enrollment in the humanities, the demise of the literary novel and the seeming obsession with information over contemplation. Of course, nothing is black or white. Trends are in the eye of the beholder.

I confess to a bias for those minds that rely on scientific evidence and critical reasoning for those questions that can be answered empirically while simultaneously retaining a deep appreciation for the inexplicable, mysterious and emotionally complex — the indescribable yet palpable messiness that constitutes a life. For the latter, our value added isn’t in any specific answer, but in the deeply considered question. In the end, it will be the quality of the questions that will be the measure of a man.

Robert A. Burton, a former chief of neurology at the University of California, San Francisco Medical Center at Mt. Zion, is the author of “On Being Certain: Believing You Are Right Even When You’re Not,” and “A Skeptic’s Guide to the Mind: What Neuroscience Can and Cannot Tell Us About Ourselves.”
