AI, KNOW THYSELF

Google created a tool to test for biases in AI data

Taking a closer look.
Image: AP Photo/Mary Altaffer

Developing an artificial intelligence algorithm involves much more than writing the code. It’s also a matter of carefully curating data so that the algorithm learns the right things.

If the coders building AI aren’t careful, their algorithms can pick up real-world biases, like assuming women aren’t doctors or failing to handle patients who don’t speak English. To help them avoid these scenarios, Google released a tool this Tuesday (Sept. 11) called the What-If Tool, which helps suss out biases in data.

Google engineer James Wexler writes that checking a data set for biases typically requires writing custom code for testing each potential bias, which takes time and makes the process difficult for non-coders.
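To get a sense of the tedium Wexler is describing, here is the kind of throwaway script a coder might write to answer a single question about a single data set (a rough, illustrative Python sketch; the file name, column names, and model output are hypothetical stand-ins, not part of the What-If Tool):

```python
# A one-off check of the kind Wexler describes: custom code written
# just to probe one potential bias in one data set.
import pandas as pd

# Load the model's predictions alongside the attribute being audited.
# "loan_predictions.csv" and its columns are hypothetical examples.
df = pd.read_csv("loan_predictions.csv")

# Compare how often the model predicts approval for each gender,
# assuming "predicted_approval" is stored as 0 or 1.
approval_rates = df.groupby("gender")["predicted_approval"].mean()
print(approval_rates)

# A large gap between groups is a red flag worth investigating, and
# answering the next question (say, about ZIP code) means writing
# yet another script like this one.
```

A tool that answers these questions interactively, without a new script per question, is exactly what the What-If Tool is meant to provide.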

In addition to checking how diverse a dataset is, and whether changing the data describing a person’s ZIP code or race influences an algorithm’s decision, the What-If Tool helps compare the factors that lead an algorithm to make one decision over another. With this part of the tool, AI programmers can see the exact boundary at which an algorithm flips from one decision to the other.
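That counterfactual idea is simple enough to sketch in a few lines of code (an illustrative Python snippet, not the What-If Tool’s actual interface; the model’s predict method and the feature names are made up for the example):

```python
# Sketch of a counterfactual probe: copy one example, change a single
# attribute such as ZIP code, and see whether the model's decision flips.
import copy

def decision_changes(model, example, feature, new_value):
    """Return True if editing one feature flips the model's decision."""
    edited = copy.deepcopy(example)
    edited[feature] = new_value
    return model.predict(example) != model.predict(edited)

# Hypothetical usage with a made-up applicant record:
#   flipped = decision_changes(model, applicant, "zip_code", "10027")
# If a change this small flips the outcome, the model may be leaning on
# a proxy for race or income rather than on relevant information.
```

Repeating that probe across many edited copies of an example is what lets the tool trace where the algorithm’s decision boundary actually sits.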

Wexler writes that multiple teams within Google have already used the tool to find bugs and identify underperforming aspects of their algorithms. The code is open source, meaning anyone can download and use it now.