How AI Researchers Are Tackling Transparent AI: Interview With Steve Eglash, Stanford University

One of the most frequently cited challenges with AI is the difficulty of getting well-understood explanations of how AI systems make their decisions. While this may not matter much for machine learning applications such as product recommendations or personalization, any use of AI in critical applications where decisions need to be understood faces transparency and explainability issues.

On an episode of the AI Today podcast, Steve Eglash, Director of Strategic Research Initiatives in the Computer Science Department at Stanford University, shared insights and research into the evolution of transparent and responsible AI. Eglash is a staff member in the Computer Science Department, where he works with a small group that runs research programs connecting the university with outside companies, helping companies and students exchange ideas and technology. Before coming to Stanford, Steve was an electrical engineer, a role that placed him between technology and science, and he also worked in investment, government, and research before moving into academia.

With AI being used in just about every industry and at every level of government, the breadth of use cases gives Stanford students plenty of new areas to explore. Understanding how AI works is crucial because we increasingly rely on it across a range of applications, including mission-critical roles such as autonomous vehicles, where a mistake can cause serious harm or even be fatal. Research into transparent and explainable AI can make those systems more trustworthy and reliable. Ensuring the safety of AI technology such as automated vehicles requires that we understand how and why a computer makes the decisions it does, and that we can analyze those decisions after an incident.

Many modern AI systems run on neural networks that we understand only at a basic level, since the algorithms themselves provide little in the way of explanations. This lack of explainability is often referred to as the “black box” problem for AI systems. Some researchers are turning their focus to the details of how neural networks work. Because of the size of these networks, it can be hard to check them for errors: each connection between neurons, and the weight attached to it, adds complexity that makes examining a decision after the fact very difficult.

Reluplex - An Approach to Transparent AI

Verification is the process of proving properties of neural networks. Reluplex is a program designed by a team of researchers to test large neural networks, and the technology behind it allows it to operate quickly across large networks. Reluplex was used to test an airborne collision detection and avoidance system for autonomous drones. In that case, it was able to prove that parts of the network behaved as they should, and it also found an error in the network that was fixed in the next implementation.
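To make the idea of "proving a property" concrete, the sketch below is a minimal, hypothetical illustration rather than Reluplex's actual algorithm: it takes a toy ReLU network with invented weights and checks, over a dense grid of sampled inputs, that the output never crosses a safety threshold. A real verifier such as Reluplex proves the bound for all inputs in the region rather than a finite sample; the network, weights, and threshold here are assumptions made up for the example.

```python
import numpy as np

# Toy ReLU network: 2 inputs -> 3 hidden units -> 1 output.
# Weights are invented for illustration; a real collision-avoidance
# network has far more parameters.
W1 = np.array([[0.6, -0.4], [0.3, 0.9], [-0.7, 0.5]])
b1 = np.array([0.1, -0.2, 0.0])
W2 = np.array([[1.2, -0.8, 0.5]])
b2 = np.array([-0.3])

def network(x):
    """Forward pass through the toy ReLU network."""
    h = np.maximum(0.0, W1 @ x + b1)   # ReLU activation
    return (W2 @ h + b2).item()

# Hypothetical safety property: for all inputs in [-1, 1]^2,
# the output stays below a threshold.
THRESHOLD = 2.0

# Dense sampling of the input region. A real verifier proves the
# property for *every* input in the region, not just a grid.
grid = np.linspace(-1.0, 1.0, 201)
worst = max(network(np.array([x1, x2])) for x1 in grid for x2 in grid)

print(f"worst sampled output: {worst:.3f}")
print("property holds on samples" if worst < THRESHOLD
      else "counterexample found - property violated")
```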

Interpretability is another area of focus when it comes to this “black box” idea. If you have a large model, is it possible to understand how it makes predictions? Steve uses the example of an image identification system trying to understand a picture of a dog on a beach. There are two ways it could identify the dog: it could take the pixels that make up the dog and associate them with a dog, or it could take the pixels of the beach and sky around the dog and infer from that context that a dog is there. Without an understanding of how the system reaches its decisions, you don’t know what the network has actually learned to rely on.

If an AI uses the first method to recognize that a dog is present, it is reasoning in a way that resembles how our own brains work. The alternative method, however, can be seen as a weak association, because it isn’t relying on the part of the picture that actually contains the dog. To confirm that an AI is processing images properly, we need to know exactly how it does so, and a good portion of research is going into this and related tasks.
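One common way researchers probe which pixels a classifier relies on is occlusion sensitivity: hide a patch of the image and measure how much the prediction changes. If masking the dog barely moves the “dog” score while masking the beach does, the model is leaning on background context. The sketch below is a minimal illustration under stated assumptions, not any particular lab's method; dog_score is a hypothetical stand-in for a real model's class probability, used only so the example runs end to end.

```python
import numpy as np

def dog_score(image):
    """Hypothetical stand-in for a trained classifier's 'dog'
    probability; here it is just a function of mean brightness."""
    return float(image.mean())

def occlusion_map(image, score_fn, patch=8):
    """Slide a gray patch over the image and record how much the
    score drops when each region is hidden. Large drops mark the
    regions the model depends on."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5  # gray out the patch
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy 32x32 grayscale "image" standing in for the dog-on-a-beach photo.
rng = np.random.default_rng(0)
image = rng.uniform(0.0, 1.0, size=(32, 32))

heat = occlusion_map(image, dog_score)
print("score drop when each region is hidden:")
print(np.round(heat, 3))
```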

Exploring Data Bias

Data bias in AI systems is also a focus at Stanford. AI systems have been found to carry a fair amount of bias stemming from the data used to train the machine learning models. The data an AI uses to make decisions can lead to bias when the computer does not have the information it needs to make an unbiased analysis. Beyond biased data, the systems themselves can be biased in their decision making by effectively serving only certain groups: when a machine learning model is trained on data dominated by larger groups, it is likely to be biased toward those larger groups.
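This kind of bias often hides behind a good headline number: a model can report high overall accuracy while performing much worse on a smaller group. The sketch below is a hypothetical illustration, with invented data, groups, and labels, showing a single classifier fit on pooled data where one group outnumbers the other ten to one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two hypothetical groups whose labels follow different patterns.
def make_group(n, shift):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

X_a, y_a = make_group(2000, shift=+1.0)   # majority group
X_b, y_b = make_group(200,  shift=-1.0)   # minority group, opposite pattern

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])

# A single model trained on the pooled data mostly fits the majority.
model = LogisticRegression().fit(X, y)

print("overall accuracy :", round(model.score(X, y), 3))
print("group A accuracy :", round(model.score(X_a, y_a), 3))
print("group B accuracy :", round(model.score(X_b, y_b), 3))
```

Reporting accuracy per group, rather than only in aggregate, is one simple way to surface this kind of imbalance before a system is deployed.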

We need to work on removing bias from AI systems as they interact with humans more and more. AI now plays a role in decisions such as insurance qualification, predictions of whether a person will reoffend, and other potentially life-changing judgments. The decisions that AI makes have real-world consequences, and we don’t want computers to perpetuate inequality and injustice.

To remove bias from AI, data scientists need to analyze both the models and the societal biases reflected in their data. To this point, Professor Percy Liang is working with his students on distributionally robust optimization, which aims to move away from relying on demographic labels and instead use robust optimization so that models perform well across all groups, not just the largest ones. Other researchers are focusing on fairness and equality in artificial intelligence.
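The core idea behind distributionally robust optimization is a change of objective: instead of minimizing the average loss over all training examples, the model minimizes the loss of the worst-off group (or worst-case distribution). The sketch below is a hand-rolled, minimal illustration of that idea on the same kind of hypothetical two-group data as above; it is not Liang's implementation, and the crude finite-difference optimizer is only there to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-group data: a large group and a small group whose
# labels follow different patterns.
def make_group(n, shift):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(float)
    return X, y

groups = [make_group(2000, +1.0), make_group(200, -1.0)]

def group_losses(w):
    """Logistic loss of weight vector w on each group separately."""
    losses = []
    for X, y in groups:
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        losses.append(-np.mean(y * np.log(p + 1e-9)
                               + (1 - y) * np.log(1 - p + 1e-9)))
    return np.array(losses)

def train(objective, steps=500, lr=0.5, eps=1e-4):
    """Crude finite-difference gradient descent on the given objective."""
    w = np.zeros(2)
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i in range(len(w)):
            d = np.zeros_like(w)
            d[i] = eps
            grad[i] = (objective(w + d) - objective(w - d)) / (2 * eps)
        w -= lr * grad
    return w

# Standard training minimizes the size-weighted average loss;
# the DRO-style objective minimizes the worst group's loss.
sizes = np.array([len(X) for X, _ in groups], dtype=float)
avg_obj = lambda w: float(group_losses(w) @ (sizes / sizes.sum()))
dro_obj = lambda w: float(group_losses(w).max())

for name, obj in [("average-loss", avg_obj), ("worst-group", dro_obj)]:
    w = train(obj)
    print(name, "per-group losses:", np.round(group_losses(w), 3))
```

Under the average-loss objective the minority group's loss stays high; the worst-group objective trades a little majority performance for a more balanced result, which is the behavior robust optimization is after.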

Since AI systems have not yet proven their explainability and complete trustworthiness, Steve thinks AI will mostly be used in an augmented and assistive manner rather than a fully autonomous one. By keeping a human in the loop, we have a better chance of catching questionable decisions and exerting more control over the final outcome of AI-assisted actions.
