Criminals beware! Google's AI can now identify faces from heavily pixellated images

Two Google Brain neural networks were combined to create the enhanced images

Two artificial intelligence systems built by Google are able to transform a heavily pixellated, low-quality image into a clear photo of a person or object.

Computer scientists from Google Brain, Google's central AI team, have shown it's possible not only to enhance a picture's resolution but also to fill in missing details in the process. In a paper – Pixel Recursive Super Resolution – three researchers from the Silicon Valley firm trained their system on small 8x8 pixel images of celebrity faces and photos of bedrooms.

Read more: How Google's AI taught itself to create its own encryption

The combination of a conditioning neural network and a prior neural network analysed the images to produce higher-resolution 32x32 pixel versions. The process turns a blurry, almost unrecognisable picture into something that clearly represents a human or a room.

The AI system takes a two-pronged approach. The conditioning network takes the low-resolution image and compares it to high-resolution images to determine whether the image contains a face or a room. The researchers explain that a low-res image can be compared with a high-res one by scaling the larger image down to the same 8x8 pixel size.
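The scaling-down step the researchers describe can be sketched with simple average pooling: each 8x8 pixel is the mean of a block of pixels in the larger image. This is a minimal illustration, not Google's actual preprocessing code; the 32x32 input size and the pooling method are assumptions for the example.

```python
import numpy as np

def downscale(image, size=8):
    """Average-pool a square image down to size x size.

    Assumes the input side length is a multiple of `size`
    (e.g. 32x32 -> 8x8 with 4x4 blocks).
    """
    h, w = image.shape[:2]
    factor = h // size
    return image.reshape(size, factor, size, factor, -1).mean(axis=(1, 3))

# A hypothetical 32x32 RGB "high-resolution" image
hi_res = np.random.rand(32, 32, 3)
lo_res = downscale(hi_res)  # now 8x8x3, directly comparable to the input
print(lo_res.shape)         # (8, 8, 3)
```

Once both images are 8x8, they can be compared pixel for pixel, which is what makes the matching described below tractable.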

"When some details do not exist in the source image, the challenge lies not only in ‘deblurring’ an image, but also in generating new image details that appear plausible to a human observer," the Google Brain researchers wrote in their paper.

When both images are the same size it is relatively simple for the AI to identify similar pixels and shapes between the different versions. For example, the system can recognise an ear of a particular shape and compare it with the pixels in another image – telling the AI it is looking at a face.

Once the first network has completed its role, the Google researchers use a second network, PixelCNN, to add extra pixels to the 8x8 image. As Ars Technica explained, the PixelCNN adds detail by using what it knows about certain types of images. Lips are likely to be a shade of pink, so pink pixels are added to areas identified as such.
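The key idea behind PixelCNN is autoregression: pixels are generated one at a time, each conditioned on the pixels produced before it. The toy sketch below uses a running mean of previous pixels as a crude stand-in for the learned conditional distribution; the real PixelCNN replaces this with a masked convolutional network, and the specific distribution used here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_image(size=8):
    """Toy autoregressive sampler: each pixel is drawn conditioned on
    the pixels generated before it (here, just their running mean).
    A real PixelCNN learns this conditional with a masked conv net.
    """
    img = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            seen = img.flat[:i * size + j]
            context = seen.mean() if seen.size else 0.5
            # Draw the next pixel near the context value, a stand-in
            # for sampling from the learned conditional distribution.
            img[i, j] = np.clip(rng.normal(context, 0.1), 0.0, 1.0)
    return img

out = sample_image()
print(out.shape)  # (8, 8)
```

Because each pixel depends on its predecessors, the network can produce coherent structure (a pink region stays pink) rather than independent noise.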

Read more: Google's AI just created its own universal 'language'

At the end of each neural network's process, the Google researchers combined the results to create a final image. They compare the process of adding details to how an artist works. "By incorporating the prior knowledge of the faces and their typical variations, an artist is able to paint believable details," they wrote.
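In the paper, the two networks each produce per-pixel scores (logits) over the possible intensity values, and these are summed before a softmax to give the final distribution for each pixel. The sketch below shows that combination step in isolation; the random logits stand in for real network outputs.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a vector of logits."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical per-pixel logits over 256 intensity values,
# one set from each network.
conditioning_logits = np.random.randn(256)  # grounded in the low-res input
prior_logits = np.random.randn(256)         # learned image prior

# Summing the logits means the final distribution reflects both
# the observed pixels and what typical faces/rooms look like.
probs = softmax(conditioning_logits + prior_logits)
pixel_value = int(np.argmax(probs))  # or sample from probs
```

Summing logits is equivalent to multiplying the two probability distributions, so a value must be plausible under both networks to score highly.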

To prove the AI-generated images are believable, the researchers tested their system on human volunteers. A group of participants was shown a true low-res image alongside the one created by the AI, and asked to guess which came from a camera.

When looking at the images of celebrities, the humans believed the artificially created shot was taken by a camera 10 per cent of the time. "Note that 50 per cent would say that an algorithm perfectly confused the subjects," the researchers say.
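The metric here is a "fooling rate": the fraction of trials in which subjects picked the AI-generated image as the real photo, where 50 per cent means the two are indistinguishable. The 4-out-of-40 split below is invented purely to reproduce the reported 10 per cent figure.

```python
def fooling_rate(picked_ai):
    """Fraction of trials where the subject chose the AI image
    as the one taken by a camera. 0.5 = perfect confusion."""
    return sum(picked_ai) / len(picked_ai)

# Illustrative trials: 4 of 40 subjects fooled -> 10 per cent,
# matching the figure reported for celebrity faces.
trials = [True] * 4 + [False] * 36
print(fooling_rate(trials))  # 0.1
```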

In the future, with further development, similar systems could add detail to low-resolution pictures and video. One obvious category is blurry CCTV images. However, the method has yet to be tested on any such databases of images, and the AI's creations are currently the machine's "best guesses" rather than entirely accurate portrayals.

This article was originally published by WIRED UK