AI tools will make it easy to create fake porn of just about anybody

With nothing more than a few photographs and some free software, it’s now possible to quickly graft anyone’s face onto pornographic images, videos, and GIFs. The results are far from convincing, but they’ll get better fast — becoming another example of how machine learning will make the maxim “seeing is believing” laughably out of date.

As first reported by Motherboard, a Reddit user named “deepfakes” has been applying these tools to create content for the pornographic subreddit r/celebfakes. Using a combination of open-source AI software — including Google’s TensorFlow — deepfakes pasted the faces of celebrities like Scarlett Johansson, Taylor Swift, and Maisie Williams onto X-rated GIFs. He even created a full video with the face of Wonder Woman star Gal Gadot inserted into an incest-themed porn scene. (At the time of writing, this content has been removed from its original web host, though it’s not clear who removed it.)

Speaking to The Verge, deepfakes said he didn’t want to reveal any information that could compromise his anonymity, but described the technology as “not very interesting.” When asked why he made the content, he said: “You could say that my motivation is obsession, with porn or imaginary internet points or problem solving.” He also told Motherboard that he wasn’t a professional researcher, but just a programmer curious about machine learning. “I just found a clever way to do face-swap,” he told the publication. “Every technology can be used with bad motivations, and it's impossible to stop that.” Deepfakes did not reply to further questions from The Verge.

A GIF from the incest-themed video with Gal Gadot’s face.
Image: Motherboard

It’s certainly true that the technology being used is simple. Snapchat has been doing face-swapping for years, and computer engineers have long used machine learning to mash visuals together. Techniques similar to the ones used by deepfakes have created fake celeb faces, designed new clothes, and even turned line drawings into nightmarish cats. Machine learning has even powered apps, including Prisma (which uses a technique called “style transfer” to make your selfies look like paintings) and FaceApp (which morphs your face into an older or younger version of yourself).
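To give a sense of how simple the basic mechanics are, here is a rough sketch of a classical face swap in Python using OpenCV’s stock face detector. The file names are placeholders, and the crude paste at the end is exactly the step that deepfake tools replace with a neural network trained on photos of the target:

```python
import cv2

# OpenCV ships with pretrained Haar-cascade face detectors; this loads
# the frontal-face model from the package's own data directory.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def find_face(img):
    """Return the (x, y, w, h) box of the first face detected in img."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found")
    return faces[0]

# Placeholder file names, not from the article.
source = cv2.imread("source_face.jpg")   # photo of the face to transplant
target = cv2.imread("target_scene.jpg")  # image being edited

sx, sy, sw, sh = find_face(source)
tx, ty, tw, th = find_face(target)

# Crudely resize the source face and paste it over the target face region.
# Deepfake tools replace this paste with a trained network that regenerates
# the face to match the target's pose, lighting, and expression.
face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))
target[ty:ty + th, tx:tx + tw] = face
cv2.imwrite("swapped.jpg", target)
```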

It’s important to note that digital editing of this kind has, of course, long been possible with software like Photoshop. But AI helps automate the process, making it quicker and more accessible. If you want to make a fake video using Photoshop, you have to edit every individual frame. Machine learning does much of this hard work for you — if you have a little bit of technical knowledge.
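That automation is easy to picture: once a model can edit a single image, a short loop applies the same edit to every frame of a video. In the sketch below, swap_face is a hypothetical stand-in for such a model, not a function from any real library:

```python
import cv2

def process_video(src_path, dst_path, swap_face):
    """Apply swap_face (a hypothetical per-image editing model) to every
    frame of the video at src_path and write the result to dst_path."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, (width, height))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(swap_face(frame))  # the model does the per-frame editing
    cap.release()
    out.release()
```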

We’ve been editing images for decades, but AI makes it easier

The danger if these tools become widespread is self-evident. They could be used to create supercharged fake news, or to fuel harassment and bullying. Imagine the situation in a school if someone creates fake porn of a classmate. Getting source pictures would be easy enough, and even if the content generated looks fake, it would be excruciating for the target. In an age where misogynists use online platforms to wage persistent and often damaging hate campaigns, the potential for harm is huge.

Speaking to The Verge, Dr. Kate Devlin, an academic specializing in AI and sexuality, suggests some positives could come out of this tech, including cutting down on exploitative sex work by generating fake porn. But these upsides are easily outweighed by the negatives, she says: “The potential for fake news and revenge porn is just so high, and this technique is so easily done by the looks of things.”

Devlin also notes that, unlike other ethical challenges created by AI (such as the problem of bias in algorithms), this sort of threat is much harder to police. These are not technologies distributed by centralized corporations — they’re open-source and can be used by anyone with the right know-how. “This is easily accessible so it’s really hard to put it back in the bottle,” she says.

“The potential for fake news and revenge porn is just so high.”

Experts in the AI community say they’re dismayed and disappointed by this use of the technology, but some see it as an opportunity to have a much-needed conversation about the future of digital imagery. Alex Champandard, a researcher who builds AI-powered creative tools, tells The Verge that there needs to be better public understanding of what technology can do. He also suggests that researchers can and should build programs that are better at spotting such fakes.

“Print propaganda is as old as the printing press, video propaganda as old as TV. Now with AI it's becoming very obvious [that] we need to learn how to deal with this!” says Champandard. “The answer is not necessarily technological, in fact, it’s probably not. It's more about identifying and building trust in relationships — including with journalists and press outlets, publishers, or any content creator for that matter.”