Will Artificial Intelligence Produce Synthetic Psychopaths?

Some day soon, computers may convincingly mimic human empathy. Will this lead to ethical artificial intelligence, or synthetic psychopathy?

A Turing Test for Ethical Artificial Intelligence ("AI")

Technologists and ethicists have started to explore the form and meaning of a Turing Test for Ethical AI.

The Original Imitation Game — Can Machines Think?

The 20th-century British mathematician and cryptologist Alan Turing proposed a test for whether machines could think, or at least exhibit intelligence.

Initially called “The Imitation Game,” one version of the test imagines that a human evaluator communicates in writing with a human participant and a computer designed to emulate human language and responses. In Turing’s view, if the evaluator could not distinguish between the human and the computer, the computer might be said to think, or at least to possess intelligence.

The Ethical AI Imitation Game — Can Machines Empathize?

Encouraged by the power and possibility of natural-language programs and self-learning algorithms, technologists and ethicists have begun exploring whether machines might pass an Ethical Imitation Game — whether they can convincingly imitate the empathy necessary for ethical deliberation and decision making.

The Ethical And Innovative Limits of Programs

The Meaning Of Any System Lies Outside The System

It is an axiom of Systems Analysis that the meaning of a system lies outside the system. Computer programs, as systems, remain bounded by their algorithms. They can describe what they do, but they cannot explain it.

Nor can they consider why they do what they do, which is the crux of ethical and moral decision making.

Programs Cannot Innovate

Such programs also cannot innovate, if innovating means stepping outside their algorithmic boundaries.

Show a self-learning program all of Picasso's Blue Period paintings, and the program might render a reasonable facsimile of a Blue Period painting.

But, the program will never produce Cubism.

Psychopaths Fake Empathy They Do Not Feel

Before attempting a program that simulates empathy, consider a person who lacks it. We call such a person a psychopath. He or she can fake empathy, but only to serve his or her own ends.

Analogously, programs that successfully simulate empathy will amount to synthetic psychopaths, faking empathy to further the ends of their programmers. Any such deception violates deontological ethics.

Does such deception sound like progress?

Do We Want Our Lives Ruled By Psychopaths?

Algorithms run or guide much of our lives.

For the sake of efficiency, we have removed the human element from much of commerce. Algorithms decide what advertisements we see online. In the first instance, they decide our access to credit. They determine how, when, and on what terms we can transact, often adjusting prices in light of our past purchasing behavior.

In the physical world, they direct our movements via automated traffic systems. To an increasing degree, the algorithms that drive national-defense systems may decide peace or war.

The Uniquely Human “Why?”

We cannot escape the ever-growing sophistication and reach of computerized systems.

But, might will never make right.

Ethical AI does not represent a challenge for technologists so much as a dilemma for ethicists. Machines will do what they are told. Self-learning machines may find ways to do what they are told better than their human programmers could tell them. However, such machines will never comprehend why they do what they do.

The Why must come from the programmers, or those who direct them.

This places extraordinary power in such people’s hands.

Those pursuing ethical AI might therefore want to ask themselves not What they seek, nor How they might attain it, but Why they want programs to simulate empathy.

And they should ask themselves this not just in light of logic, but in light of human experience.

[NB — Hat tip to Norberto de Andrade, PhD, for sharing an Infoworld article on this topic via LinkedIn. All views expressed in this column are my own.]
