Welcome to the research guide on deepfake technologies! This guide will help you get started finding resources on the topic, including links to key library subscription resources (article databases, journals, and books) as well as open web content.
The term "deepfake" comes from the underlying technology "deep learning," which is a form of AI. Deep learning algorithms, which teach themselves how to solve problems when given large sets of data, are used to swap faces in video and digital content to make realistic-looking fake media.
There are several methods for creating deepfakes, but the most common relies on deep neural networks built around autoencoders that apply a face-swapping technique. You first need a target video to use as the basis of the deepfake, and then a collection of video clips of the person you want to insert into the target.
The videos can be completely unrelated; the target might be a clip from a Hollywood movie, for example, and the videos of the person you want to insert in the film might be random clips downloaded from YouTube.
The autoencoder is a deep learning AI program tasked with studying the video clips to understand what the person looks like from a variety of angles and environmental conditions, and then mapping that person onto the individual in the target video by finding common features.
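The shared-representation idea behind this can be sketched in a few lines. The toy model below is an illustrative assumption, not any real tool's implementation: it uses random linear maps in place of trained deep networks, but keeps the standard deepfake layout of one shared encoder plus one decoder per identity. "Swapping" then means encoding a frame of person A and decoding it with person B's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearAutoencoder:
    """Toy stand-in for the deepfake architecture: one shared encoder,
    one decoder per identity. Real systems use trained deep
    convolutional networks; the weights here are random, purely to
    show the swap mechanism."""

    def __init__(self, dim_in, dim_latent, n_identities):
        self.enc = rng.normal(size=(dim_latent, dim_in)) / np.sqrt(dim_in)
        self.dec = [rng.normal(size=(dim_in, dim_latent)) / np.sqrt(dim_latent)
                    for _ in range(n_identities)]

    def encode(self, x):
        # Shared latent "face description": pose, expression, lighting.
        return self.enc @ x

    def decode(self, z, identity):
        # Identity-specific reconstruction from the shared description.
        return self.dec[identity] @ z

    def swap(self, x, target_identity):
        # The face-swap trick: encode a frame of person A, then decode
        # with person B's decoder to render B in A's pose and expression.
        return self.decode(self.encode(x), target_identity)

model = LinearAutoencoder(dim_in=64, dim_latent=8, n_identities=2)
frame_a = rng.normal(size=64)          # a fake "flattened face crop" of person A
fake_b = model.swap(frame_a, target_identity=1)
print(fake_b.shape)                    # (64,)
```

Because both identities pass through the same encoder, the latent code captures what is common (angle, expression), which is exactly the "finding common features" step the excerpt describes.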
Another type of machine learning is added to the mix, known as Generative Adversarial Networks (GANs), which detects and fixes flaws in the deepfake over multiple rounds, making the finished fakes harder for deepfake detectors to spot.
GANs are also used as a popular method for creation of deepfakes, relying on the study of large amounts of data to "learn" how to develop new examples that mimic the real thing, with painfully accurate results.
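The detect-and-improve rounds described above can be illustrated with a deliberately tiny numeric sketch (an assumption for illustration, not a real GAN): the "generator" produces samples, the "discriminator" measures one statistical flaw (a mean mismatch with real data), and each round the generator adjusts to shrink that flaw.

```python
import numpy as np

rng = np.random.default_rng(1)
real = rng.normal(loc=3.0, scale=1.0, size=1000)  # stand-in for real data

# Toy adversarial loop: the "discriminator" reports a detected flaw,
# the "generator" improves its output to reduce that flaw each round.
gen_mean = 0.0
for round_ in range(50):
    fake = rng.normal(loc=gen_mean, scale=1.0, size=1000)
    flaw = real.mean() - fake.mean()   # discriminator's detected flaw
    gen_mean += 0.2 * flaw             # generator's improvement step

print(f"real mean {real.mean():.2f}, generator mean {gen_mean:.2f}")
```

After enough rounds the fakes match the statistic the detector was checking, which is the sense in which adversarial training makes forgeries progressively harder to catch.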
Several apps and software packages make generating deepfakes easy even for beginners, such as the Chinese app Zao, DeepFaceLab, FaceApp (a photo-editing app with built-in AI techniques), Face Swap, and the since-removed DeepNude, a particularly dangerous app that generated fake nude images of women.
A large amount of deepfake software can be found on GitHub, an open-source software development community. Some of these apps are used for pure entertainment purposes, which is why deepfake creation isn't outlawed, while others are far more likely to be used maliciously.
Many experts believe that, in the future, deepfakes will become far more sophisticated as technology further develops and might introduce more serious threats to the public, relating to election interference, political tension, and additional criminal activity.
Johnson, D. (2021, January 22). What is a deepfake? Everything you need to know about the AI-powered fake media. Business Insider.
Have you seen Barack Obama call Donald Trump a “complete dipsh*%”, or Mark Zuckerberg brag about having “total control of billions of people’s stolen data”, or witnessed Jon Snow’s moving apology for the dismal ending to Game of Thrones? Answer yes and you’ve seen a deepfake. The 21st century’s answer to Photoshopping, deepfakes use a form of artificial intelligence called deep learning to make images of fake events, hence the name deepfake. Want to put new words in a politician’s mouth, star in your favourite movie, or dance like a pro? Then it’s time to make a deepfake.
Many are pornographic. The AI firm Deeptrace found 15,000 deepfake videos online in September 2019, a near doubling over nine months. A staggering 96% were pornographic and 99% of those mapped faces from female celebrities on to porn stars. As new techniques allow unskilled people to make deepfakes with a handful of photos, fake videos are likely to spread beyond the celebrity world to fuel revenge porn. As Danielle Citron, a professor of law at Boston University, puts it: “Deepfake technology is being weaponised against women.” Beyond the porn there’s plenty of spoof, satire and mischief.
Sample, I. (2020, January 13). What are deepfakes – and how can you spot them? The Guardian.
A deep-learning system can produce a persuasive counterfeit by studying photographs and videos of a target person from multiple angles, and then mimicking that person's behavior and speech patterns.
Barrett explained that “once a preliminary fake has been produced, a method known as GANs, or generative adversarial networks, makes it more believable. The GANs process seeks to detect flaws in the forgery, leading to improvements addressing the flaws.”
And after multiple rounds of detection and improvement, the deepfake video is completed, said the professor.
According to an MIT Technology Review report, a device that enables deepfakes can be "a perfect weapon for purveyors of fake news who want to influence everything from stock prices to elections."
In fact, “AI tools are already being used to put pictures of other people’s faces on the bodies of porn stars and put words in the mouths of politicians,” wrote Martin Giles, San Francisco bureau chief of MIT Technology Review in a report.
He said GANs didn’t create this problem, but they’ll make it worse.
Shao, G. (2019, October 13). What ‘deepfakes’ are and how they may be dangerous. CNBC.
Deepfakes came to prominence in early 2018 after a developer adapted cutting-edge artificial intelligence techniques to create software that swapped one person's face for another. The process worked by feeding a computer lots of still images of one person and video footage of another. Software then used this to generate a new video featuring the former's face in the place of the latter's, with matching expressions, lip-synch and other movements.
Since then, the process has been simplified - opening it up to more users - and now requires fewer photos to work.
Some apps exist that require only a single selfie to substitute a film star's face with that of the user within clips from Hollywood movies.
But there are concerns the process can also be abused to create misleading clips, in which a prominent figure is made to say or act in a way that never happened, for political or other gain.
Early this year, Facebook banned deepfakes that might mislead users into thinking a subject had said something they had not. Twitter and TikTok later followed with similar rules of their own.
Microsoft's Video Authenticator tool works by trying to detect giveaway signs that an image has been artificially generated, which might be invisible to the human eye.
These include subtle fading or greyscale pixels at the boundary of where the computer-created version of the target's face has been merged with that of the original subject's body.
To build it, the firm applied its own machine-learning techniques to a public dataset of about 1,000 deepfaked video sequences and then tested the resulting model against an even bigger face-swap database created by Facebook.
Kelion, D. (2020, September 1). Deepfake detection tool unveiled by Microsoft. BBC News.
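The boundary cues described in the BBC excerpt, subtle fading or greyscale pixels where the computer-generated face meets the original frame, can be approximated with a crude heuristic. The sketch below is an illustrative assumption, not Microsoft's actual method: it compares gradient energy along a candidate seam rectangle with the image-wide average, so a pasted region with a hard blend stands out.

```python
import numpy as np

rng = np.random.default_rng(2)

def seam_score(img, box):
    """Crude blending-artifact heuristic (an assumption, not the Video
    Authenticator algorithm): ratio of gradient energy on a rectangular
    boundary to the average gradient energy of the whole image. Strong,
    localised edges along a seam hint at a pasted face region."""
    gy, gx = np.gradient(img.astype(float))
    energy = np.hypot(gx, gy)
    top, left, bottom, right = box
    border = np.concatenate([
        energy[top, left:right], energy[bottom - 1, left:right],
        energy[top:bottom, left], energy[top:bottom, right - 1],
    ])
    return border.mean() / (energy.mean() + 1e-9)

# Synthetic check: paste a brighter patch into a noisy background,
# leaving a hard seam where a "face" was merged without smoothing.
img = rng.normal(0.5, 0.05, size=(64, 64))
img[16:48, 16:48] += 0.4
score = seam_score(img, (16, 16, 48, 48))
print(f"seam score: {score:.1f}")      # well above 1 for a hard seam
```

Real detectors learn far subtler versions of such cues from thousands of examples, which is why the article describes training on deepfaked video datasets rather than hand-coded rules.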