Google, Facebook And Microsoft Are Working On AI Ethics—Here’s What Your Company Should Be Doing


(This article is part of a series on Artificial Intelligence for Board Members and Senior Executives.)

As AI makes its way into more companies, boards and senior executives need to mitigate the risks of their AI-based systems. One area of risk is the reputational, regulatory, and legal exposure that comes from ethically charged decisions driven by AI.

AI-based systems often face decisions that were never built into their models, decisions that amount to ethical dilemmas.

For example, suppose a company builds an AI-based system to maximize the number of advertisements users see. The AI may learn to promote incendiary content, because content that makes users angry prompts them to comment and post their own opinions. When that works, users spend more time on the site and see more ads. The AI has done its job, but without ethical oversight, and the unintended consequence is the polarization of users.
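To see the mechanics, here is a rough sketch in Python. The posts, engagement numbers, and weights are invented for illustration and do not represent any real platform's ranking system; the point is simply that an objective which only measures engagement will rank incendiary content higher whenever that content drives engagement.

```python
# Toy example: ranking content purely by predicted engagement.
# All items, metrics, and weights are hypothetical.
posts = [
    {"title": "Local park cleanup this weekend", "incendiary": False,
     "avg_minutes_on_site": 1.2, "avg_comments": 0.3},
    {"title": "Outrage-bait hot take", "incendiary": True,
     "avg_minutes_on_site": 4.8, "avg_comments": 2.7},
]

def predicted_engagement(post):
    # The objective only sees time on site and comments; it has no notion
    # of polarization, so incendiary content wins if it drives engagement.
    return 0.6 * post["avg_minutes_on_site"] + 0.4 * post["avg_comments"]

for post in sorted(posts, key=predicted_engagement, reverse=True):
    print(f'{predicted_engagement(post):.2f}  {post["title"]}')
```

Nothing in the objective penalizes polarization, so nothing in the ranking does either.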

Examples of Artificial Intelligence Ethics Issues

What happens if your company builds a system that automates work so that certain employees are no longer needed? What is the company's ethical responsibility to those employees, and to society? Who determines the ethics of that impact on employment?

What if the AI tells a loan officer to recommend against giving a person a loan? If the officer doesn't understand how the AI came to that conclusion, how can they know whether the decision was ethical? (See How AI Can Go Terribly Wrong: 5 Biases That Create Failure.)
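One practical mitigation is to give the loan officer some visibility into why the model leaned the way it did. The sketch below is only an illustration: it assumes a simple linear model and scikit-learn, and the applicant data, features, and labels are entirely hypothetical. It decomposes the model's score into per-feature contributions instead of returning only a yes or no.

```python
# Minimal sketch: exposing why a linear credit model leaned toward rejection.
# Data, features, and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_at_job", "prior_defaults"]
X = np.array([
    [65, 0.20, 8, 0],
    [40, 0.55, 1, 2],
    [80, 0.10, 12, 0],
    [30, 0.65, 0, 1],
])
y = np.array([1, 0, 1, 0])  # 1 = repaid, 0 = defaulted

model = LogisticRegression().fit(X, y)

applicant = np.array([[45, 0.50, 2, 1]])
# For a linear model, the log-odds are a sum of per-feature terms,
# so each coefficient * feature value is that feature's contribution.
contributions = model.coef_[0] * applicant[0]
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name:15s} contribution to approval score: {c:+.2f}")
print("approval probability:", model.predict_proba(applicant)[0, 1].round(2))
```

A readout like this doesn't settle the ethical question, but it gives the human in the loop something to reason about.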

Suppose the data used to train your AI system doesn't include enough examples of specific classes of individuals; in that case, the system may not learn what to do when it encounters those individuals. Would a facial recognition system used for hotel check-in recognize a person with freckles? If the system fails and makes check-in harder for a person with freckles, what should the company do? How does the company address this ethical dilemma? (See Why Are Technology Companies Quitting Facial Recognition?)
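A concrete step a company can take before shipping such a system is to audit how well each group is represented in the training data. The sketch below uses hypothetical group labels, counts, and a made-up 5% minimum-share policy; the specifics would depend on the system and the groups that matter for it.

```python
# Sketch of a pre-training audit: flag groups that are underrepresented
# in the training data. Labels, counts, and threshold are hypothetical.
from collections import Counter

training_labels = (
    ["group_a"] * 9500
    + ["group_b"] * 120
    + ["group_c"] * 380
)

counts = Counter(training_labels)
total = sum(counts.values())
MIN_SHARE = 0.05  # assumed policy: every group should be at least 5% of the data

for group, n in counts.items():
    share = n / total
    flag = "  <-- underrepresented; collect more data before shipping" if share < MIN_SHARE else ""
    print(f"{group:10s} {n:6d} ({share:.1%}){flag}")
```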

If the developers who select the data used to train an AI system aren't looking for bias, how can they prevent an ethical dilemma? For example, suppose a company has historically hired more men than women; in that case, a bias is likely to exist in the resume data. Men tend to use different words than women in their resumes, so if the training data is drawn largely from men's resumes, women's resumes may be scored less favorably based purely on word choice.
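Here is a minimal sketch of how that plays out, using an invented vocabulary, invented hiring labels, and a toy bag-of-words screener (scikit-learn assumed). Two candidates describing the same work with different word choices receive different scores simply because the historical "hired" examples used one style of wording.

```python
# Toy sketch: a resume screener trained on historically skewed hiring data.
# Resumes, vocabulary, and labels are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical data: most "hired" examples happen to use style-A wording,
# so the model learns that style-A words predict hiring.
train_resumes = [
    "led executed drove results",                   # hired
    "executed led delivered results",               # hired
    "led drove executed projects",                  # hired
    "collaborated supported coordinated results",   # not hired
    "supported coordinated collaborated projects",  # not hired
]
train_labels = [1, 1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_resumes)
model = LogisticRegression().fit(X, train_labels)

# Two comparable candidates described with different word choices:
candidates = [
    "led executed delivered results",
    "collaborated coordinated delivered results",
]
scores = model.predict_proba(vectorizer.transform(candidates))[:, 1]
for text, score in zip(candidates, scores):
    print(f"{score:.2f}  {text}")
```

The model never sees gender, yet word choice alone is enough to reproduce the historical skew.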

Ethical Principles for Companies

Google, Facebook, and Microsoft are each working on these ethical issues. Many have pointed to the missteps Google and Facebook have made in attempting to address AI ethics; even so, let's look at some of the positive elements of what they and Microsoft are doing.

While each company is addressing these principles differently, we can learn a lot by examining their commonalities. Here are some fundamental principles they address.

  • Fairness: AI systems should treat all people fairly and avoid creating or reinforcing unfair bias
  • Inclusiveness: AI systems should empower everyone, engage people, and reflect cultural diversity
  • Reliability and Safety: AI systems should perform reliably and safely to avoid unintended results
  • Transparency: AI systems should be understandable and explainable
  • Privacy and Security: AI systems should be secure, respect privacy, and provide privacy safeguards
  • Accountability: AI systems should have algorithmic accountability to enable appropriate human direction and control

While these tech giants are imperfect, they are leading the way in addressing ethical AI challenges. What are your board and senior management team doing to address these issues?

Suggestions

Below are four AI ethics recommendations you can implement now:

  • Make the ethics of AI a board-level discussion
  • Examine the many different risks associated with the ethics of AI
  • Ensure senior executives understand how and where AI-based systems are being considered and implemented
  • Develop an approach to building systems that considers ethical dilemmas before the system is built

By addressing these issues now, your company will reduce the risk of having AI make or recommend decisions that imperil the company. (See AI Can Be Dangerous - How to Reduce Risk When Using AI.) Are you aware of the reputational, regulatory, and legal risks associated with the ethics of your AI?

If you care about how AI is determining the winners and losers in business, and how you can leverage AI for the benefit of your organization, I encourage you to stay tuned. I write (almost) exclusively about how senior executives, board members, and other business leaders can use AI effectively. You can read past articles and be notified of new ones by clicking the “follow” button here.
