
Can we control artificial intelligence?

Jan. 27, 2020
As in process automation, the first step is to define the goals.

"The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which," said Stephen Hawking.

We control the planet not because humans are the strongest, fastest or biggest creatures—the dinosaurs were that—but because we are the smartest of God's creations. Recently, we started developing a new tool that can influence human culture, a tool that is beginning to affect our own evolution. This tool can be very helpful or can get us into deep trouble. Human, or natural, intelligence is the totality of all mental processes serving to acquire and apply knowledge. When we delegate some of that intelligence to machines, they acquire artificial intelligence (AI).

AI control envelope

Figure 1: The control envelope for developing artificial intelligence (AI) must protect from the AI doing all it can, and limit its use to doing only what it should. The envelope limits are A: Protect privacy, moral and ethical standards, security; B: Advance R&D, health, energy, transport, etc.; C: Prevent cyber terrorism, misinformation, military application; and D: Support the health, culture and intelligence of the next generations. Source: www.analyticsinsight.net

Here, I will not discuss the different applications of AI, but will focus on the process that shows how this new tool and its applications are evolving. I will apply the rules of industrial process control to see if this evolutionary process is controllable and, if it is, what we need to do to guide the development of this tool in a safe and useful direction.

If we were developing the control system for an industrial process, the first thing we would do would be to identify the goal we want to reach (the setpoint). Next, we would identify the variable we have to manipulate (the manipulated variable) to reach and maintain that goal. Finally, we would measure the error—the difference between the value of the present condition (measurement) and its desired value (setpoint)—which the controller has to correct. After that, we would evaluate the dynamic "personality" of the process (its gains, time constants, etc.), information needed to tune the controller. And, we would also calculate the amount of manipulation needed, which in industry usually means sizing the control valves.
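The loop described above—setpoint, measurement, error, manipulated variable, tuning—can be sketched in a few lines of code. This is a minimal, illustrative example of my own, not something from the column: the gains, the time constant, and the variable names are all assumptions chosen only to show the structure of the loop.

```python
def pi_controller(setpoint, measurement, integral, kp=2.0, ki=0.1, dt=1.0):
    """One step of a proportional-integral (PI) controller.
    kp and ki are the tuning constants matched to the process 'personality'."""
    error = setpoint - measurement        # deviation the controller must correct
    integral += error * dt                # accumulated past error
    output = kp * error + ki * integral   # manipulated variable (e.g., valve position)
    return output, integral

# A simple first-order process: the measurement drifts toward the valve output
# with a time constant set by the 0.2 factor (the process "personality").
setpoint = 10.0
measurement, integral = 0.0, 0.0
for _ in range(200):
    output, integral = pi_controller(setpoint, measurement, integral)
    measurement += 0.2 * (output - measurement)
```

After enough steps, the measurement settles at the setpoint: the proportional term drives the fast correction, while the integral term eliminates the remaining offset—exactly the division of labor a plant controller relies on.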

Today, AI development is an out-of-control process that has no clearly defined goal (setpoint). Therefore, this development is proceeding without limits, letting AI do what it can, instead of limiting it only to doing what it should. In other words, AI development is progressing without a full understanding of its capabilities and without a clear definition of which of those should be exploited, and which should not. Today, this tool in our "toolbox" provides practically unlimited memory capacity and immense speed of data manipulation. It can do things that humans cannot, like divide a seven-digit number by a six-digit one in a millisecond, or implement a 20-variable algorithm, yet it cannot tell right from wrong.

Therefore, if we apply the rules of process control and analyze the personality of the AI development process, we see that AI is a very good tool that has no conscience, no sense of self, no values, no emotions. It just carries out what the man-made algorithms dictate. In short, AI is just like a knife, which can be used both to slice bread or to kill. This is an important point, because if either is used unethically, the fault lies not with the AI or the knife, but with those who misused them.

In this respect I not only mean violence or terrorism, but also social and cultural influences. For example, we must make sure that AI will not serve the arrival of the age of George Orwell's Big Brother, where we are constantly watched and brainwashed, truth doesn’t matter and the spreading of hate or lies is considered to be free speech. Because eventually AI will have thousands of different types of applications, we must guide its development in a safe direction by setting limits on what it is allowed to do and what it is not.

Industrial control experience tells us that leaving a poorly understood process uncontrolled can lead to disaster, and it has also shown that controlling a multivariable process is like guiding a projectile. In the case of guiding a missile to a target, we know where it should land (setpoint), have a control envelope inside which it should travel, and an algorithm that activates when the missile drifts toward one of the walls of the control tunnel. Plus, we have the means (manipulated variables) to modify its direction to keep it on track. Naturally, the guidance system is fast and is tuned to match the personality (dynamic characteristics) of this process.
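The envelope-guidance idea above can also be sketched in code. Again, this is my own illustrative example, not from the column: the limits, margin, gain, and disturbance are all assumed values, chosen only to show how a correction activates when the guided variable drifts toward a wall of the tunnel and stays idle otherwise.

```python
def envelope_correction(position, low=-1.0, high=1.0, margin=0.2, gain=0.5):
    """Return a corrective action (a manipulated variable) that pushes the
    position back toward center once it enters the margin zone near a limit."""
    if position > high - margin:          # drifting toward the upper wall
        return -gain * (position - (high - margin))
    if position < low + margin:           # drifting toward the lower wall
        return -gain * (position - (low + margin))
    return 0.0                            # safely inside the envelope: no action

# Example: a constant disturbance pushes the state upward each step;
# the envelope correction activates near the wall and holds the state inside.
state = 0.0
for _ in range(100):
    state += 0.05                         # disturbance (uncontrolled drift)
    state += envelope_correction(state)
```

The state settles just inside the upper margin rather than crossing the limit—the same behavior we would want from limit setpoints on the AI development process.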

It is this type of a control system that is needed to guide the AI development process. In this case, too, we first have to develop the limit setpoints for this multivariable process. The control envelope might look something like the one shown in Figure 1.

Today, the AI development process is uncontrolled, the developers have different goals and some of these goals are undesirable or even dangerous. The existing AI systems are already able to turn our nuclear power plants into atomic bombs, sink unmanned oil drilling platforms, attack the water supply or the electric grid, etc. In addition, AI is already being used for cyberspace interference to brainwash or divide nations, manipulate elections, change social attitudes, and spread hate or falsehoods. Uncontrolled AI can change the cultural environment of our children, if for six or seven hours a day they grow up clicking buttons on their phones, while exercising for only 18 minutes. In short, the AI development process must not remain uncontrolled, we must keep it within a control envelope.

The present slippery slope only gets steeper. For example, the use of AI in genetic engineering started out with harmless goals like age reversal and in-vitro fertilization, but if left uncontrolled, the natural next step could be to insert new genetic traits into human eggs or sperm by inserting DNA sequences. The Chinese are already using DNA maps of faces to aid in the racial profiling of their Uyghur minority, and there is no limit to these applications. It might start with race identification and could end up with the cloning of human beings, or even attempts to create a "perfect race." Who knows, one day you might just order a Béla Bartók baby on Amazon, to be delivered in nine months. (Today, this sounds like a bad joke, but who knows?)

In other words, as Hawking and others fear, the consequences of uncontrolled AI development could spell an end to natural evolution. Therefore, uncontrolled AI development must not be allowed to continue.

For millions of years, human evolution was controlled by nature's rules. The Creator has placed nature in the driver's seat of all evolutionary processes on this planet. Yet today, we think that we're taking over control from nature (AI, global warming, nuclear weapons, etc.), but we're wrong. We are not risking the planet, only our own civilization, because nature will take back control when we cross its "red line," and it will be no fun when that happens. (Yes, I mean when, not if.) It would be much better if our combined wisdom prevented the misuse of this tool, AI; stopped global warming; and eliminated nuclear weapons, because if we wait for nature to close this control loop, the manipulation or throttling will be neither smooth nor gradual, as the example of the dinosaurs illustrates.

About the author: Béla Lipták

Béla Lipták | Columnist and Control Consultant

Béla Lipták is an automation and safety consultant and editor of the Instrument and Automation Engineers’ Handbook (IAEH).