Fake news may be a top concern for media, politics and society in general, but it is only the first stage of a more worrisome development. When the reality of fake news exploded into public view with the 2016 US presidential election, the concept of invented information created and disseminated to purposefully confuse, manipulate and/or polarize public opinion immediately became a matter of general outrage. Yet not enough people are aware of the dangers presented by the next generation of fake news: deepfakes.

As the name indicates, a deepfake (a portmanteau of “deep learning” and “fake”) is an AI-generated video combining and superimposing visuals of a person’s face (the “faceset”) onto the video of another person’s body (the “donor body”). This human image synthesis technology uses machine learning to show a human saying and doing things that he or she never actually said or did.

From porn to mainstream

Deepfakes are fairly new. The first one emerged in the autumn of 2017 on Reddit, a popular online forum where the best and the worst of the Internet meet. A user with the pseudonym “deepfakes” posted porn videos and GIFs on the site – so far, nothing out of the ordinary for Reddit or 4chan. What was not ordinary, however, was that the videos showed, with varying levels of success, celebrities like actresses Gal Gadot, Maisie Williams, Emma Watson and Scarlett Johansson, engaging in acts that morality usually frowns upon.

The poor-quality videos were debunked in short order, but it quickly emerged that they had been created using artificial intelligence programs. The community of underground AI enthusiasts kept tweaking the technology, and in just a few months the results became increasingly believable: what initially looked like a pixelated, barely recognizable image of a celebrity soon became a credible representation of that person, to the point that differentiating real videos from fake ones became virtually impossible for the uninformed eye.

In December 2017, Vice.com was the first mainstream news platform to discuss the growing threat, but the real splash came in April 2018, when Get Out director Jordan Peele impersonated former US president Barack Obama in a video published on Buzzfeed.com. Peele hoped that his video – in which he appears and explains the trick – would serve as a public service announcement and a call to confront the danger of deepfakes. Media around the world seized on the issue, and Scarlett Johansson – one of porn deepfake creators’ favorite victims – told The Washington Post that she was concerned about the phenomenon, railing against the Internet, this “vast wormhole of darkness that eats itself.”

To no avail. To date, the issue remains mostly unaddressed, even though it has openly moved into the political sphere, with more or less success: a Youtuber mashed up actor Bruno Ganz playing Adolf Hitler with Argentina’s president Mauricio Macri giving a speech (less); Catalonia’s parliament member Inés Arrimadas was credibly superimposed onto a pornographic video (more); Germany’s Angela Merkel slowly transformed into Donald Trump (less); a Belgian political party circulated on Twitter and Facebook a doctored video of Trump telling Belgium to withdraw from the Paris Climate Accord (less). Trump, along with actor Nicolas Cage, is another favorite of deepfake makers, who seem to consider the two men ideal practice material; last January, a Fox News affiliate even broadcast in real time a doctored video of the American president while he was giving a live Oval Office address (the video editor behind the stunt was quickly fired).

Source: PotatoKaboom Youtube video. The producer created the video using “tensor flow machine learning to swap the faces.”

You put yourself out there

So, how are deepfakes created? Relatively simply, we’re sad to say. 

Remember the Reddit “deepfakes” user? This unsung genius developed a face-swap algorithm that takes material readily available online (on Youtube, Google image search, etc.) as its base dataset of a person’s images, then uses that dataset to train itself to rearrange and create a composite of that person’s face, speech and mannerisms. “Deepfakes” told Vice.com in 2017 that he is not a professional researcher, just a programmer with an interest in machine learning: “I just found a clever way to do face-swap.”
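To make the mechanics a little more concrete, here is a minimal, heavily simplified sketch of the shared-encoder, two-decoder autoencoder idea generally credited with powering these early face-swap tools. It is written in Python with TensorFlow/Keras; the aligned face crops, the data loading and the full training loop are assumed to exist elsewhere, and the layer sizes are illustrative rather than taken from any actual tool.

```python
# Minimal sketch (assumptions: 64x64 RGB face crops already extracted and aligned).
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_encoder():
    # One encoder is shared by both people, so it learns pose and expression.
    inp = layers.Input(shape=(64, 64, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    latent = layers.Dense(256, activation="relu")(x)
    return Model(inp, latent, name="shared_encoder")

def build_decoder(name):
    # Each person gets their own decoder, which learns that person's face.
    inp = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(inp)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(inp, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_person_a")
decoder_b = build_decoder("decoder_person_b")

face_in = layers.Input(shape=(64, 64, 3))
autoencoder_a = Model(face_in, decoder_a(encoder(face_in)))
autoencoder_b = Model(face_in, decoder_b(encoder(face_in)))
autoencoder_a.compile(optimizer="adam", loss="mae")
autoencoder_b.compile(optimizer="adam", loss="mae")

# Training (faces_a and faces_b would be arrays of aligned face crops):
#   autoencoder_a.fit(faces_a, faces_a, epochs=...)
#   autoencoder_b.fit(faces_b, faces_b, epochs=...)
# The "swap": encode a frame of person B, then decode it with person A's decoder,
# producing A's face with B's pose and expression:
#   swapped = decoder_a.predict(encoder.predict(frame_of_b))
```

The trick is in the swap at the end: because both people pass through the same encoder, feeding person B’s frame into person A’s decoder produces A’s face wearing B’s expression and pose.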

Sound is more of an issue. So far, most deepfakes do not fake the audio of the person featured, and when they do, the results are sketchy. But no need to worry: many are already working on voice-cloning algorithms, including Canadian startup Lyrebird (that handled the audio part of the Obama/Peele video) and software behemoth Adobe, which is developing VoCo, a “Photoshop for audio” that will mimic a person’s speech.

The equipment and tools required are accessible and relatively easy to use. Any consumer-grade PC equipped with a decent graphics card will do, and machine learning frameworks like Google’s TensorFlow are available online as open source, originally for study and research purposes. In January 2018 came FakeApp, a desktop app that lets users create and share videos with faces swapped by an artificial neural network; other open-source programs, like DeepFaceLab, FaceSwap and myFakeApp, are also easy to find.

As for the original source of visuals, basically every single one of us has freely provided deepfake creators with hundreds – or more: according to a 2017 study by Now Sourcing and Frames Direct, the average millennial will take 25,000 selfies in their lifetime – of images and videos of ourselves, courtesy of Facebook, Instagram, Snapchat, Twitter and the like.

In other words, celebrities are not the only ones who can fall prey to deepfakes; so can you, your neighbor, your co-worker or your mom – without ever being aware of it.

Disquieting ramifications

The obvious question that comes to mind is: how good and how ubiquitous will deepfakes be in a year or two, as algorithms continue to improve and users get more skilled?

Of course, like all innovations, deepfakes could be used for positive ends, like purely creative exercises and research, or actual services: the system developed by Lyrebird, for example, aims to help people who have lost their voice to illness, and the technology behind deepfakes is also what allowed a digitally recreated Carrie Fisher to appear in Rogue One: A Star Wars Story. The “deepfakes” Redditor himself told Vice.com: “Every technology can be used with bad motivations, and it’s impossible to stop that […] The main difference is how easy [it is] to do that by everyone.”

Picture Source: https://screenrant.com

However, the dangers of deepfakes far exceed the positives. One has already materialized: it is now possible to order online a fake video of someone you want to harm, at a very cheap price – $20 to $30 per video seems to be the standard rate, with delivery in only two days, according to a December 2018 investigation by The Washington Post, which also described how a 10-month-old deepfake site was receiving 20,000 unique visitors daily.

This elaborate form of harassment (an extension of revenge porn, whereby a bad actor publishes intimate pictures of someone as payback or simply for the lulz) has already claimed victims, such as American media critic Anita Sarkeesian, whose face was fraudulently inserted into a pornographic video as punishment for her feminist views on pop culture and video games. “For folks who don’t have a high profile, or don’t have any profile at all, this can hurt your job prospects, your interpersonal relationships, your reputation, your mental health,” Sarkeesian told The Washington Post. Or Indian investigative journalist Rana Ayyub who, in May 2018, described in an op-ed for The New York Times how a pornographic deepfake video of her was weaponized on social media in an attempt to silence her. “I have been targeted by an apparently coordinated social media campaign that slut-shames, deploys manipulated images with sexually explicit language, and threatens rape,” she wrote. “Someone sent my father a screenshot of the video. He was silent on the phone while I cried […] I have no way of finding out who produced the video.”

An infocalypse in the making?

And of course, political manipulation and propaganda by deepfakes present the most extreme risks, either by showing a politician saying or doing something that never happened or, conversely, by providing an escape route to someone who could claim that a video showing them actually doing something inappropriate is a forgery.

Not to mention the truly apocalyptic scenario in which deepfakes of politicians could start wars and global crises. Such an infocalypse – a term coined in 2016 by tech researcher Aviv Ovadya, who is also behind the non-profit Thoughtful Technology Project – is something that even the US Defense Department worries about. Its Defense Advanced Research Projects Agency (DARPA) has been working on a “media forensics” (MediFor) program that would be able to automatically detect fake images, video or audio. Talking to National Public Radio (NPR) in 2018, David Doermann, in charge of the program at the time, imagined a disaster scenario whereby a mass misinformation campaign using deepfakes would have the world believe in an imaginary major event. “That might lead to political unrest, or riots, or at worst some nations acting all based on this bad information,” he told NPR.

Researchers like Matthias Niessner, from the Visual Computing Lab at the Technical University of Munich, have already started developing technologies that could detect fake videos. “We managed to train several neural networks that are indeed pretty good at figuring out forged images/videos […] Ideally, we’re imagining automated methods in a browser or social media platform to tell what’s fake and what’s real,” Niessner told Buzzfeed in 2018. However, the problem with neural networks is that they learn; logically enough, the generative adversarial networks (GANs) used to create deepfakes can train themselves to evade detection. “Theoretically, if you gave a GAN all the techniques we know to detect it, it could pass all of those techniques,” David Gunning, the current MediFor program manager, told MIT Technology Review in 2018. “We don’t know if there’s a limit. It’s unclear.”
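For illustration only – this is not Niessner’s actual system nor DARPA’s MediFor – the detection side often boils down to a binary classifier of the kind sketched below in Python with TensorFlow/Keras: a small convolutional network that labels face crops as real or forged. The labelled dataset of crops is assumed to exist.

```python
# Hedged sketch of a deepfake detector: a small CNN classifying face crops
# as real (label 0) or forged (label 1). Architecture and sizes are illustrative.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

detector = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # probability that the crop is fake
])
detector.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# real_faces and fake_faces would be arrays of labelled face crops:
#   x = np.concatenate([real_faces, fake_faces])
#   y = np.concatenate([np.zeros(len(real_faces)), np.ones(len(fake_faces))])
#   detector.fit(x, y, validation_split=0.2, epochs=...)
#
# The cat-and-mouse problem Gunning describes follows directly: a generator
# trained against this detector's feedback, GAN-style, can learn to produce
# fakes the detector no longer flags.
```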

But identifying a video is only the first step. What should be done with a deepfake once it has been found out? In September 2018, for example, Facebook started using its own proprietary deepfake-detection system, but the social giant has not imposed a blanket ban on the videos; so how does it decide what’s acceptable and what’s not? What is its policy?

Online and in real life, clear guidelines and legislation are thus needed to provide the appropriate legal framework. Popular platforms like Reddit, Twitter and Discord have now banned deepfakes; in the UK, they can be considered harassment and prosecuted as such; in the US, they can be charged under identity theft, cyberstalking and revenge porn statutes. But deepfakes are not, in themselves, considered a specific crime and remain a vague object in legal terms, keeping the door open to abuse and violations.

As artificial intelligence researcher Alex Champandard told Vice.com in 2017, “We need to have a very loud and public debate. Everyone needs to know just how easy it is to fake images and videos, to the point where we won’t be able to distinguish forgeries in a few months from now. Of course, this was possible for a long time, but it would have taken a lot of resources and professionals in visual effects to pull this off. Now it can be done by a single programmer with recent computer hardware.” And that was two years ago…

In an age when public distrust in media, politics and even simple facts is at its peak, video evidence was one of the few remaining ways to prove a point beyond any doubt. If even believing your eyes becomes questionable, the consequences could be devastating. 

This article is printed in Communicate’s June edition.