
Google’s AI thinks this turtle looks like a gun, which is a problem

New research shows how machine vision systems of all kinds can be tricked into misidentifying 3D objects

Image: Labsix

From self-driving cars to smart surveillance cams, society is slowly learning to trust AI over human eyes. But although our new machine vision systems are tireless and ever-vigilant, they’re far from infallible. Just look at the toy turtle above. It looks like a turtle, right? Well, not to a neural network trained by Google to identify everyday objects. To Google’s AI it looks exactly like a rifle.

This 3D-printed turtle is an example of what’s known as an “adversarial image.” In the AI world, these are pictures engineered to trick machine vision software, incorporating special patterns that make AI systems flip out. Think of them as optical illusions for computers. You can make adversarial glasses that trick facial recognition systems into thinking you’re someone else, or you can apply an adversarial pattern to a picture as a layer of near-invisible static. Humans won’t spot the difference, but to an AI it means a panda has suddenly turned into a pickup truck.
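That near-invisible static is usually computed from the model’s own gradients. Below is a minimal, hedged sketch of a one-step perturbation in the spirit of the fast gradient sign method; the model choice, epsilon value, and preprocessing are assumptions for illustration, not labsix’s actual setup (their turtle attack is targeted and considerably more involved).

```python
# Minimal sketch: nudge each pixel slightly in the direction that raises the
# classifier's loss on the true label. The change is bounded by epsilon, so a
# human barely sees it, but the model's prediction can flip.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights="IMAGENET1K_V1").eval()  # illustrative stand-in model

def fgsm_perturb(image, true_label, epsilon=0.007):
    """image: a (1, 3, H, W) preprocessed tensor; returns a perturbed copy."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Step *up* the loss gradient; sign() caps the per-pixel change at epsilon.
    return (image + epsilon * image.grad.sign()).detach()
```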

Imagine tricking a self-driving car into seeing stop signs everywhere

Generating and guarding against these sorts of adversarial attacks is an active field of research. And although the attacks are usually strikingly effective, they’re often not very robust: rotate an adversarial image or zoom in on it a little, and the computer will usually see past the pattern and identify it correctly. What makes this 3D-printed turtle significant is that it shows an adversarial attack working in the 3D world, fooling a computer even as the object is viewed from multiple angles.

“In concrete terms, this means it's likely possible that one could construct a yard sale sign which to human drivers appears entirely ordinary, but might appear to a self-driving car as a pedestrian which suddenly appears next to the street,” write labsix, the team of students from MIT who published the research. “Adversarial examples are a practical concern that people must consider as neural networks become increasingly prevalent (and dangerous).”

Labsix calls its new method “Expectation Over Transformation,” and you can read the full paper on it here. As well as creating a turtle that looks like a rifle, the team made a baseball that gets mistaken for an espresso, along with numerous non-3D-printed tests. The target classes were chosen at random.
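The core idea of Expectation Over Transformation is to stop optimizing the adversarial change for one fixed photograph and instead optimize it so the classifier is fooled on average across a whole distribution of random views: rotations, rescaling, lighting shifts, and so on. The sketch below illustrates that idea; the transform set, step counts, and bounds are illustrative assumptions rather than the paper’s exact 3D rendering pipeline.

```python
# Hedged sketch of Expectation Over Transformation: optimize a small
# perturbation so the *expected* loss over many random transformations
# pushes the classifier toward the attacker's target class.
import torch
import torch.nn.functional as F
import torchvision.transforms as T

def eot_attack(model, image, target_class, steps=200, lr=0.01, eps=0.05, samples=10):
    random_view = T.Compose([
        T.RandomRotation(30),                                  # stand-in for pose changes
        T.RandomResizedCrop(image.shape[-1], scale=(0.7, 1.0)),# zoom / framing changes
        T.ColorJitter(brightness=0.2),                         # lighting variation
    ])
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        # Average the targeted loss over several random views of the
        # perturbed image, so the attack survives transformation.
        loss = sum(
            F.cross_entropy(model(random_view(image + delta)), target)
            for _ in range(samples)
        ) / samples
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)                           # keep the change subtle
    return (image + delta).detach()
```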

The group tested its attack against an image classifier developed by Google called Inception-v3. The company makes this freely available for researchers to tinker with, and although it’s not a commercial system, it’s not far from one. The attack wasn’t tested against other machine vision software, but to date there’s no single fix for adversarial images. When contacted by The Verge, Google did not offer a comment on the paper, but a spokesperson directed us to a number of recent papers from the company’s researchers outlining ways to foil adversarial attacks.
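Pretrained Inception-v3 checkpoints are straightforward to load and query, which is part of what makes the model a convenient research target. The snippet below is a hedged sketch using torchvision’s pretrained weights rather than the TensorFlow release labsix actually attacked; the filename is hypothetical.

```python
# Load a pretrained Inception-v3 and ask it what an image contains.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.inception_v3(weights="IMAGENET1K_V1").eval()
preprocess = T.Compose([
    T.Resize(342), T.CenterCrop(299),   # Inception-v3 expects 299x299 inputs
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("turtle.jpg")).unsqueeze(0)  # hypothetical file
with torch.no_grad():
    top_class = model(image).argmax(dim=1).item()          # ImageNet class index
print(top_class)
```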

Image: Labsix
An example from labsix of how fragile adversarial attacks often are. The image on the left has been altered so that it’s identified as guacamole. Tilting it slightly means it’s identified once more as a cat.

The research comes with some caveats, too. First, the team’s claim that its attack works from “every angle” isn’t quite right; the group’s own video demos show it works from most, but not all, angles. Second, labsix needed access to Google’s vision algorithm in order to identify its weaknesses and fool it. That’s a significant barrier for anyone who would try to use these methods against commercial vision systems deployed by, say, self-driving car companies. However, other adversarial attacks have been shown to work against AI systems sight unseen, and, according to Quartz, the labsix team is working on this problem next.

Adversarial attacks like these aren’t, at present, a big danger to the public. They’re effective, yes, but only in limited circumstances. And although machine vision is being deployed more widely in the real world, we’re not yet so dependent on it that a bad actor with a 3D printer could cause havoc. The problem is that failures like this show how fragile some AI systems can be. And if we don’t fix these problems now, they could lead to much bigger issues in the future.

Read more: Magic AI: these are the optical illusions that trick, fool, and flummox computers

Update November 2nd, 1:20PM ET: The story has been updated with Google’s response.