
Trust At The Center: Building An Ethical AI Framework

IBM AI

Published: Mar 26, 2020


An analysis of annual reports recently filed with the Securities and Exchange Commission shows a telling trend: According to a Wall Street Journal article, twice as many companies reported one specific risk factor in 2018 as in the previous year.

Which risk factor saw such meteoric growth? The use of artificial intelligence (AI).

As a growing number of organizations and functions adopt AI, it must command the attention and active governance of the C-suite and board of directors. Used unethically—even inadvertently—AI can result in significant revenue loss or stiff fines stemming from faulty automated decision making, noncompliant behaviors or biased algorithms. 

And business performance is not the only risk. Unethical AI can also damage a more intangible but priceless asset: an organization’s reputation and the trust of its customers. 

Trust, ethics, governance and related issues were hot topics at the December 2019 AI Summit in New York, which brought together enterprise business leaders and AI innovators to discuss AI’s impact on business today. The bottom line: Trust is at the foundation of corporate reputation. It routinely emerges as a top attribute when brand equity is measured because any transaction between a brand and its customers is an exchange of value for currency.

Consumers conduct transactions with organizations hundreds or thousands of times a day through actions like scrolling webpages, banking online or calling customer service. These transactions seem free of charge, but they aren’t. The currency is consumer data, often in the form of personally identifiable information such as a Social Security number, bank account number or email address. Or it can be much subtler but still very personal information, such as the path an individual took when scrolling through an app, what they asked their voice-controlled assistant to look up or what they wrote in the résumé they submitted. In all these cases, people trust that their information will be used ethically and without bias by organizations and the AI algorithms they employ.

While we are in the early days of commercial AI regulation, organizations cannot sit by and wait for lawmakers to create a road map. To do so is to miss out on the gains AI makes possible: the discovery of insights that can lead to innovations benefiting business and society, intelligent automation that frees human workers to add more strategic value, and the creation of new products and services that fulfill unmet needs and help organizations leapfrog their competitors.

An organization’s board of directors and C-suite should view the ethical use of AI as an imperative—one that can’t be ignored. To that end, C-suite leaders should adopt an AI framework like the one outlined below.


This AI framework can help ensure the ethical use of AI and sustain the trust of employees and customers. It includes six steps:

1. Fair And Impartial Use Checks

AI applications must include internal and external checks to ensure equitable application across all participants. Impartial AI—ensuring that data and algorithms minimize discriminatory bias and avoid pitfalls introduced by humans during the coding process—is one of the most frequently discussed issues around AI, and it can help prevent unintended, unfair consequences for the recipients of AI-driven decisions.
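To make this concrete, the sketch below shows one simple fairness spot-check a team might run: comparing positive-outcome rates across demographic groups, a "demographic parity" style metric. The data, group names and threshold are hypothetical, and a real audit would combine several metrics with human domain review.

```python
# Hypothetical fairness spot-check: compare positive-outcome rates
# across groups, a "demographic parity" style metric. Real audits
# combine several metrics with domain review; this is only a sketch.

def approval_rate(decisions):
    """Fraction of decisions in a group that were positive (True)."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest spread in approval rates across all groups."""
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Toy, made-up loan decisions per group.
groups = {
    "group_a": [True, True, False, True, True],    # 80% approved
    "group_b": [True, False, False, False, True],  # 40% approved
}

gap, rates = parity_gap(groups)
print(f"Approval rates: {rates}")
if gap > 0.2:  # illustrative review threshold, not a legal standard
    print(f"Warning: parity gap of {gap:.2f} warrants review")
```

A gap like this would not trigger an automatic fix; it would prompt investigation of the underlying data and model.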

2. Implementing Transparency And Explainable AI

Organizations should prepare to make algorithms, attributes and correlations open to inspection so that participants can understand how their data is being used and how decisions are made. What makes this challenging is the growing complexity of machine learning and the popularity of deep-learning neural networks, which can behave like black boxes with no explanation of how their results were computed.
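As an illustration of one widely used explainability technique, the sketch below implements permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. Large drops flag influential features. The toy model and data are assumptions for demonstration only.

```python
# Sketch of permutation importance: shuffle one feature at a time and
# measure the drop in accuracy. The "model" and data are toy stand-ins.
import random

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature_idx, trials=20):
    base = accuracy(model, rows, labels)
    drops = []
    for _ in range(trials):
        shuffled = [row[:] for row in rows]              # copy each row
        column = [row[feature_idx] for row in shuffled]
        random.shuffle(column)                           # break the feature's link to labels
        for row, value in zip(shuffled, column):
            row[feature_idx] = value
        drops.append(base - accuracy(model, shuffled, labels))
    return sum(drops) / trials

def toy_model(row):
    """Approve (1) when income, feature 0, exceeds a cutoff."""
    return 1 if row[0] > 50 else 0

rows = [[60, 3], [40, 7], [55, 1], [30, 9], [80, 2]]
labels = [1, 0, 1, 0, 1]

for idx, name in enumerate(["income", "unused_feature"]):
    print(name, round(permutation_importance(toy_model, rows, labels, idx), 3))
```

Techniques like this treat the model as a black box, which is precisely why they remain usable as model complexity grows.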

3. Responsibility And Accountability

Policies must be put in place to determine who is held responsible when AI system outputs go wrong. This issue epitomizes the uncharted aspect of AI: Is it the responsibility of the developer, tester or product manager? Is it the machine learning engineer, who understands the inner workings? Or does ultimate responsibility go higher up the ladder, to the CIO or CEO who might have to testify before a government body?

4. Putting Proper Security In Place

AI systems must have sufficient measures in place to be safe from cybersecurity risks that may cause physical and/or digital harm to consumers. As AI systems increasingly show up in our physical world, from driverless cars to smart homes to medical devices, this issue is critical and high on most leaders’ agendas. In fact, cybersecurity vulnerability is the biggest concern among early adopters of AI.
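One basic hardening measure, sketched below under hypothetical feature names and bounds, is to validate every input against the ranges seen in training before it reaches the model, rejecting malformed or out-of-range requests. This is only one layer of a real defense-in-depth program.

```python
# Sketch of a defensive input check in front of a model: reject values
# outside the ranges observed in training. Feature names, bounds and
# the toy model are illustrative assumptions.

TRAINING_BOUNDS = {"age": (18, 100), "income": (0, 500_000)}

def validate(features):
    """Raise ValueError for missing or out-of-range inputs."""
    for name, (low, high) in TRAINING_BOUNDS.items():
        value = features.get(name)
        if value is None or not (low <= value <= high):
            raise ValueError(f"rejected: {name}={value!r} outside [{low}, {high}]")

def score(features):
    validate(features)  # the guard runs before any inference
    return 1 if features["income"] > 50_000 else 0  # toy model

print(score({"age": 35, "income": 60_000}))  # passes the guard
try:
    score({"age": 35, "income": -5})         # blocked by the guard
except ValueError as err:
    print(err)
```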

5. Monitoring For Reliability

AI systems must be able to learn from humans and other systems and produce consistent, reliable outputs. The ability of AI and machine learning systems to get smarter as they interact with humans is core to this technology’s promise, but that same feature creates new levels of potential risk. Organizations must ensure their algorithms continue to produce reliable results each time new data is added, understand whether additional human input introduces bias, and act when inconsistencies are discovered.
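A minimal version of such monitoring might compare the model’s positive-prediction rate on each new batch of data against a baseline recorded at validation time and alert when it drifts, as in the sketch below. The threshold and numbers are illustrative.

```python
# Sketch of output monitoring: compare the positive-prediction rate on
# a new batch against a baseline recorded when the model was validated.

def positive_rate(predictions):
    return sum(predictions) / len(predictions)

def drifted(baseline_rate, new_predictions, tolerance=0.10):
    """True when the new batch's rate moves beyond the tolerance."""
    return abs(positive_rate(new_predictions) - baseline_rate) > tolerance

BASELINE = 0.30                              # rate at validation time
new_batch = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]   # toy predictions: 60% positive

if drifted(BASELINE, new_batch):
    print("Alert: output distribution drifted; route for human review")
```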

6. Safeguarding Privacy

Organizations should ensure that consumer privacy is respected, customer data is not leveraged beyond its intended and stated use, and consumers can opt in and out of sharing their data. For businesses, protecting consumers’ right to privacy, and communicating transparently about it, while using that data to provide better products and services is a real balancing act. This is the area most likely to see more regulation in the near term, as with the California Consumer Privacy Act, which went into effect January 1, 2020.
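Operationally, respecting opt-in and opt-out can start with something as simple as a consent gate in the data pipeline: before any record is used for a given purpose, check that the consumer opted in for that specific purpose. The field names in the sketch below are hypothetical.

```python
# Sketch of a consent gate: use a record only if its owner opted in
# for the specific purpose at hand. Field names are hypothetical.

RECORDS = [
    {"user_id": 1, "email": "a@example.com", "consent": {"training": True}},
    {"user_id": 2, "email": "b@example.com", "consent": {"training": False}},
]

def usable_for(purpose, records):
    """Keep only records whose owners opted in for this purpose."""
    return [r for r in records if r["consent"].get(purpose, False)]

training_set = usable_for("training", RECORDS)
print(f"{len(training_set)} of {len(RECORDS)} records usable for training")
```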

Organizations ready to embrace AI and thrive in the Age of With must start by putting trust at the center. They must thoroughly assess whether they meet the criteria for trustworthy and ethical AI; it’s a necessary step toward capturing the returns, and managing the risks, that come with AI’s transformational promise.