Deepfakes and the Age of Tech Illusions: Decoding Digital Deception
Unveiling the Truth: Navigating the Intricate Mesh of Deepfake Technology
“The ideologists of the USSR believed there could only be one truth. So in fact Generation П had no choice in the matter and children of the Soviet seventies chose Pepsi in precisely the same way as their parents chose Brezhnev.”
– Victor Pelevin, Generation "П"
I was born in the Soviet Union. You read that right. My seven-word story is what the opening lines of Victor Pelevin’s “Generation П” are about.
In the Soviet Union was the Truth, and the Truth was with the Regime, and the Truth was the Regime. We, the Soviet people, had no choice but to live inside the greatest Deepfake and dream about a better future!
In a world shrouded in deception from every angle, we didn’t need “The Truth Machine” — an idea brought to life in James L. Halperin’s acclaimed novel — a device that would eliminate dishonesty from society. All we needed was Freedom.
We were “free” but we lived in prison. The faithful ministers of the grand Deepfake temple persistently affirmed that we were the most sovereign people in the world.
Do you know what fake freedom tastes like?
Do you know what color fake values are?
I know…
Within the walls of our lives, where we were force-fed a false essence of liberty, freedom was genuinely linked with Pepsi and Coca-Cola (rest assured, this is reality, not merely a product of Pelevin’s imagination). The characters in Pelevin’s narrative chose Pepsi. To me, they represented a bridge generation, seeking a sip of freedom in every drop of that coveted beverage.
Everything was fake, even the smiles on the Soviet Posters.
Everything was an illusion, including the reports that local governments were dispatching to the central authorities.
Everything was fake.
The annals of history were meticulously altered, with only the narrative crafted by the regime being enshrined as the unassailable truth.
We lived in a mix of “Animal Farm” and “1984”.
This article is not about how bad the Soviet Union was (it was, indeed). It is about how Deepfakes and fake news pose a grave threat to democracy by eroding trust in the information marketplace and making it difficult to distinguish truth from falsehood. It wouldn’t be an overstatement to assert that at no juncture in history has humanity had access to the sheer volume of information it has today.
These are dangerous times and with the advent and democratization of synthetic media tools and technologies, the situation seems to be out of control.
Recently, Anne Applebaum penned a compelling essay in which she suggests that we inhabit a world where “There Are No Rules”. And in a world with no rules, where technological dystopia is ramping up, technology itself could be the only solution.
We’re going to look into Deepfake technology and explore the rise of fake media through advanced technology and the crisis of misinformation.
Understanding Deepfakes
“Ours, too, is an age of propaganda. We excel our ancestors only in system and organization: they lied as fluently and as brazenly.”
– C.L.R. James, The Black Jacobins
Just as the past had its age of propaganda, today we find ourselves ensnared in a similar web.
When the pioneers of photography unveiled the first films, they likely didn’t foresee that by the dawn of the 21st century, humanity would grapple with unprecedented challenges. The fruits of their years of labor and dedication could be digitally replicated in mere seconds, seemingly conjured from thin air.
Derived from “deep learning” and “fake”, a deepfake leverages artificial intelligence to manipulate visual and audio content. The goal is to generate media that is so convincing it can easily deceive someone into believing that the events depicted actually occurred.
📣 Attention: your beliefs are under threat!
People’s perception of reality, often guided by first-hand accounts and media such as photographs and videos, is being challenged by the rise of deepfakes. These artificially generated media can distort the truth, misrepresent events, or even fabricate events that never occurred.
The well-known adage, “Trust your eyes, not your ears,” has been rendered obsolete. Above all, the advent of Deepfakes has shattered our faith in visual evidence! The initial exposure of Deepfake videos sent shockwaves around the globe. Audiences were left spellbound by hyper-realistic videos featuring Barack Obama claiming that “Ben Carson is in the sunken place”; Tom Cruise humorously discussing Mikhail Gorbachev and polar bears; Richard Nixon delivering his contingency speech about the Apollo 11 disaster. I have only scratched the surface of the many examples that exist. It was through a gradual and steadfast journey that we unlocked the gateway into the Infocalypse.
In her acclaimed book “Deep Fakes: The Coming Infocalypse”, Nina Schick posits that we are in the midst of an “Infocalypse”. She attributes this to a world inundated with misinformation, the surge of alternative media sources, the diminishing value of expertise, and the decline of traditional arbiters of information such as news media and official sources.
🤔 I understand that you may be wondering why I am so worried about the unchecked use of this technology, and why it evokes in me echoes of the ominous Temple of the Evil Empire. I’ll try to explain…
Deepfake technology can be used to create realistic videos or photos of people who appear to be saying or doing things that they never actually said or did. In today’s digital landscape, malevolent actors exploit disinformation campaigns and deepfake content to distort public perception of events, sway political landscapes, perpetrate fraud, and manipulate corporate stakeholders. The advent of deepfakes has escalated the risk landscape, with many organizations now viewing them as a more significant threat than identity theft, a crime for which deepfakes can also be utilized. A recent report by University College London (UCL) underscores this sentiment, ranking Deepfake technology among the most formidable threats confronting society today:
“Authors said fake content would be difficult to detect and stop, and that it could have a variety of aims – from discrediting a public figure to extracting funds by impersonating a couple’s son or daughter in a video call. Such content, they said, may lead to a widespread distrust of audio and visual evidence, which itself would be a societal harm.”
This technology has existed for decades, but it used to require entire studios full of experts and a significant amount of time to create these effects. Now, with the advent of machine learning and artificial intelligence, these images and videos can be synthesized much more quickly.
Deepfakes rely heavily on machine learning. This technology has enabled people to make Deepfakes much quicker and cheaper than before. To create a deepfake video of someone, the first step is to train a neural network using hours of authentic footage of that person. This gives the network a thorough grasp of how the person looks from multiple angles and lighting conditions. The creator then uses the trained network together with computer graphics techniques. This allows them to overlay a synthetic copy of the person onto a different actor's body. (We’re going to unveil the details of the technology a bit later).
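The overlay step described above is often built on a shared-encoder, two-decoder autoencoder layout: one encoder learns a face-agnostic representation, while each identity gets its own decoder. The sketch below is illustrative only — the matrices are random stand-ins for networks that, in a real pipeline, would be trained on hours of footage; all names are my own.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 64 * 64, 128  # flattened 64x64 face crop, latent code size

class TinyAutoencoder:
    """Shared encoder + per-identity decoder: the classic face-swap layout."""
    def __init__(self, encoder):
        self.encoder = encoder                              # shared across identities
        self.decoder = rng.normal(0, 0.01, (LATENT, DIM))   # identity-specific

    def encode(self, face):
        return face @ self.encoder   # face pixels -> latent code

    def decode(self, code):
        return code @ self.decoder   # latent code -> face pixels

shared_encoder = rng.normal(0, 0.01, (DIM, LATENT))
model_a = TinyAutoencoder(shared_encoder)  # would be trained on person A's footage
model_b = TinyAutoencoder(shared_encoder)  # would be trained on person B's footage

# The swap itself: encode a frame of person A, decode with B's decoder,
# producing A's pose and expression rendered with B's identity.
frame_of_a = rng.random(DIM)
swapped = model_b.decode(model_a.encode(frame_of_a))
print(swapped.shape)
```

The key design point is the *shared* encoder: because both identities pass through the same latent space, a code extracted from one face can be decoded as the other.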
NOTA BENE
Although AI speeds up deepfake creation compared to previous methods, the process still requires significant time to produce a convincing composite that depicts someone in a completely fabricated scenario.
It’s important to note that while deepfakes can be incredibly convincing, they’re not perfect. There are often small inconsistencies or errors that can give them away, such as unnatural blinking patterns or slight distortions in the background. As deepfake technology continues to evolve, so too do the methods for detecting them.
To better understand the concept of deepfakes, we need to explore its origins and purpose.
📌 Origin, Evolution, and the Purpose of Deepfakes
Origin of Deepfakes
The specific term “deepfake” came into the limelight in 2017, introduced by an anonymous Reddit user named “Deepfakes”. This individual exploited Google’s open-source, deep-learning technology to fabricate and share manipulated pornographic videos. In these videos, the faces of famous actresses, including Scarlett Johansson, were seamlessly overlaid onto bodies of porn actors. According to some experts, an overwhelming 95% of existing deepfakes are pornographic. These videos are often used as tools of harassment, showcasing individuals in non-consensual, fabricated sexual scenarios:
Helen Mort, a UK-based poet and broadcaster, was targeted in a fake pornography campaign. The images used were sourced from her private social media accounts, including a deleted Facebook profile, and spanned from 2017 to 2019. These non-explicit photos were manipulated and her face was edited into violent pornographic images. While some edits were poorly done, others were disturbingly realistic.
Although the initial Deepfakes weren't sophisticated, the technology underpinning them rapidly evolved.
But to truly trace the origin of the technology that powers deepfakes, we must venture back a few decades.
DID YOU KNOW?
Deepfakes may use new technology, but they’re based on an old idea. In the 1890s, the Edison Manufacturing Company aimed to showcase the potential of motion pictures by filming the Spanish-American War. Due to the bulky nature of 19th-century cameras, close-up combat filming was challenging. As a solution, the company mixed staged footage of American victories with real footage of soldiers and weaponry. This blend, unbeknownst to American viewers, fueled patriotic sentiments.
One of the foundational pillars in this domain is a paper written in 1997 by Christoph Bregler, Michele Covell, and Malcolm Slaney. Their innovation, the Video Rewrite Program, could craft new facial animations purely based on audio input, paving the way for future advancements in the world of deepfakes.
Progress continued at a brisk pace. Fast forward to 2018, and we witness UC Berkeley researchers showcasing the ability of deep learning to transpose dance moves from seasoned professionals onto novices. That same year, under the leadership of Dr. Björn Ommer, a team from Heidelberg University in Germany embarked on a mission to render human movements with an unmatched level of realism. By 2021, Japanese AI firm Data Grid took a giant leap, unveiling an AI capable of generating full-body models of non-existent individuals – a feat with vast potential, including in industries like fashion.
In 2019, The Wall Street Journal highlighted an incident where the CEO of a UK energy company was deceived into transferring $243,000 to a Hungarian supplier’s account. The CEO was under the impression that he was communicating with his superior, who seemed to have approved the transaction. The executive now suspects that he fell prey to an audio deepfake scam, commonly known as vishing. Farid, an expert in the field, anticipates that other fraudulent financial schemes involving deepfakes, potentially including full-body deepfakes, are imminent.
The most recent notable example is the alleged use of a deepfake video by Russia to justify its invasion of Ukraine in 2022. The potential for deepfakes to intensify conflicts, undermine trust in institutions, and stir up emotions presents a significant challenge to law enforcement.
WHAT YOU NEED TO KNOW
“Warnings posted by the Ukrainian Center for Strategic Communications and Information Security on Facebook, urge soldiers and civilians not to believe any video they see showing President Zelensky announcing a surrender.”
Source: Ukraine warns Russia may deploy deepfakes of Volodymyr Zelensky surrendering
This is compounded by a general lack of public awareness about deepfakes; a 2019 UK survey found that around 72% of respondents were unaware of deepfakes and their impact. Despite increasing awareness, recent studies suggest that people’s ability to detect deepfakes does not necessarily improve.
If you navigate to X, previously known as Twitter, you will promptly comprehend the matter at hand. There was a time when Twitter was viewed as a stage for significant political discourse. However, since Elon Musk’s involvement, the platform’s dynamics have experienced a shift, often veering towards a negative trajectory.
Consider the Israel-Hamas conflict as an illustration. In a world where, as Anne Applebaum aptly suggests, there are no rules, the potential misuse of deepfake technology for spreading misinformation becomes even more evident. Both prominent figures—unfortunately, not a new phenomenon in our current turbulent media climate—and obscure 'Blue subscribers' (whether bots or paid sock-puppets) have become adept at disseminating fake news and deepfake videos.
This was bold 👇
“A fake BBC video claiming a Bellingcat investigation shows Ukraine smuggled weapons to Hamas is being pushed by Russian social media users. It's unclear if this is a Russian government disinformation campaign or a grassroots effort, but it's 100% fake”
Source: Eliot Higgins
Well, the illustration below is just one example of what the platform looks like now.
Commenting on the situation, David Frum, the author of the book “Dead Right” and a staff writer at The Atlantic, stated, “The destruction of Twitter's verification system by Elon Musk was beyond irresponsible.”
His words ring true, do they not?
Conversely, BBC journalist Shayan Sardarizadeh has been diligently debunking misinformation related to the Israel-Hamas conflict through a series of daily tweets.
The impact of “fake news” on the market is evident. For instance, a false tweet sent from AP’s hacked account about a bombing at the White House led to a market panic, as automated trading algorithms reacted to the keywords. Imagine the potential chaos if a fake video or audio clip was leaked, falsely showing a CEO revealing a major flaw in their product, or disclosing an incurable illness. The company’s stock price could plummet before the misinformation could be corrected.
But as with any tool, the essence of deepfakes – the power to manipulate reality – can be harnessed for both noble and nefarious purposes. And it's crucial for us to understand both as we step into an age where seeing is no longer believing.
The Purpose of Deepfakes: A New Frontier for Crime and Deception
As mentioned earlier, Deepfake technology leverages deep learning to manipulate audio and visual content, convincingly depicting individuals saying or doing things they never did, or even creating non-existent people. This technology is transforming how we perceive recorded media. The key advancements behind deepfakes are deep learning and generative adversarial networks. The advent of 5G technology could further facilitate the use of deepfakes.
Deepfake technology, while having potential for positive uses, has unfortunately been exploited for harmful purposes as well. Here are a few examples:
Pornography: Deepfakes first gained public attention in 2017 when an anonymous Reddit user posted videos that superimposed the faces of celebrities onto porn actors. This invasion of privacy has the potential to cause significant embarrassment and distress.
Fake News and Hoaxes: Deepfakes have been used to create fake news and hoaxes, manipulating public opinion and causing confusion.
Bullying: Deepfakes can be used to bully individuals by creating embarrassing or harmful content featuring their likeness.
Financial Fraud: Deepfakes could potentially be used in financial fraud, such as creating fake endorsements by celebrities or business leaders.
Phishing Scams: By creating convincing audio or video content, deepfakes can be used in phishing scams to trick individuals into revealing sensitive information.
Political Manipulation: In one instance, researchers were able to create a deepfake video of President Barack Obama saying whatever they wanted him to say. This technology could be abused to manipulate political discourse.
The Purpose of Deepfakes: A Powerful Tool for Good
Deepfake technology, while often associated with misinformation and manipulation, has numerous beneficial applications in entertainment, human rights, and accessibility.
Entertainment: Deepfakes have been used in the film industry to enhance visual effects. For instance, they were used in recent Star Wars films to recreate characters such as Grand Moff Tarkin and a young Princess Leia. Deepfakes can also be used in satire, as seen in a Star Trek parody featuring the faces of Jeff Bezos and Elon Musk.
Human Rights: Deepfakes can play a crucial role in protecting individuals at risk. The documentary “Welcome to Chechnya” by David France is a prime example. It uses deepfake technology to protect the identities of LGBTQ activists facing persecution in Chechnya. The technology allows viewers to see the subjects’ emotions without revealing their true faces.
Accessibility: Deepfakes can improve accessibility for individuals who have lost their voice due to illness, injury, or disability. VocaliD, an AI-based company, uses this technology to recreate a user’s voice from old recordings for text-to-speech technology. This allows individuals to regain their unique voice or choose a new one that best fits their personality from a bank of options.
These examples highlight the potential of deepfake technology when used responsibly and ethically.
Deepfakes and Deep Learning
Deepfakes primarily leverage a type of machine learning known as deep learning. Deep learning models, especially neural networks, are inspired by the structure and function of the human brain. They use interconnected nodes (akin to neurons) arranged in layers to process and learn from data.
Generative Adversarial Networks (GANs)
The heart of deepfake technology is a specific kind of deep learning architecture called Generative Adversarial Networks (GANs). A GAN consists of two parts:
Generator: This part tries to create data. For deepfakes, it would attempt to generate a video or image that looks real.
Discriminator: This part tries to distinguish between real data and fake data produced by the generator.
The two parts of the GAN are in a continuous loop of competition:
The generator creates a new piece of data (e.g., a fake video).
The discriminator evaluates it. If it can tell it's fake, it sends feedback to the generator.
Using this feedback, the generator tries again to create a more convincing fake.
Over time and with enough data, the generator gets better and better at producing realistic content, to the point where the discriminator (and often humans) can't tell the difference between real and fake.
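The adversarial loop above can be sketched in a few dozen lines. The toy example below is not a production architecture: the “real data” is just samples from a Gaussian, both networks are single linear units, and the gradients of the standard GAN objectives are written out by hand. It exists only to make the generator/discriminator tug-of-war concrete.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-np.clip(u, -60, 60)))

# "Real" data the generator must learn to imitate: samples centred at 4.0
real = lambda n: rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0          # generator g(z) = a*z + b
w, c = 0.1, 0.0          # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(3000):
    z = rng.normal(0.0, 1.0, batch)
    x_real, x_fake = real(batch), a * z + b

    # --- Discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    p_real, p_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - p_real) * x_real - p_fake * x_fake)
    c += lr * np.mean((1 - p_real) - p_fake)

    # --- Generator step: push D(fake) toward 1 (non-saturating loss) ---
    p_fake = sigmoid(w * (a * z + b) + c)
    grad = (1 - p_fake) * w
    a += lr * np.mean(grad * z)
    b += lr * np.mean(grad)

fakes = a * rng.normal(0.0, 1.0, 1000) + b
print(f"fake mean after training: {fakes.mean():.2f} (real mean is 4.0)")
```

After training, the generated samples drift toward the real distribution — the generator has never seen the real data directly; it learns only from the discriminator's feedback, exactly the loop described above.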
Training Process
To create a deepfake of someone, you'd typically feed a GAN lots of images or videos of that person. The more data, the better. This allows the model to learn the nuances of the individual's facial expressions, voice (if audio is involved), and other idiosyncrasies.
End Result
Once adequately trained, the generator can produce realistic video or audio clips that can be superimposed onto other content or stitched together to create entirely new content. This is how deepfakes can make it seem like someone said or did something they never did.
While the technology behind deepfakes is impressive, it's essential to use it responsibly given its potential for misuse. As the tech becomes more accessible, efforts are also being made to develop tools to detect Deepfakes to help counteract malicious uses.
🔍 Detection and Countermeasures
Challenges in Detecting Deepfakes: The swift advancement of deepfake technology poses a significant hurdle in distinguishing genuine content from manipulated counterparts. While low-grade deepfakes exhibit clear flaws like improper lip-syncing or inconsistent skin tones, top-tier deepfakes seamlessly mimic reality, undermining the trustworthiness of video and audio as authentic evidence.
Tools and Techniques for Detecting Deepfakes: The tech community is relentlessly working on an arsenal of tools to detect deepfakes. AI-driven platforms, such as Sentinel, probe digital media for any trace of AI manipulation. Intel’s Real-Time Deepfake Detector, dubbed "FakeCatcher," stands out as one such formidable tool. Various machine learning and deep learning models, including support vector machines, random forests, multilayer perceptrons, k-nearest neighbors, and convolutional neural networks paired with long short-term memory (LSTM), are also on the frontline of this battle against deepfakes.
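Production detectors like those above rely on trained CNN/LSTM models, but the flavor of a single hand-crafted forensic cue can be shown in a few lines. The sketch below (function name and cutoff are my own) computes the share of an image's spectral energy above a radial frequency cutoff — GAN-generated images have been reported to carry unusual high-frequency fingerprints, and a real classifier would consume many such features, not just one.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Toy forensic feature: fraction of spectral energy above a radial cutoff."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalised distance of each frequency bin from the spectrum centre
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(spectrum[r > cutoff].sum() / spectrum.sum())

# A smooth gradient vs. the same image with added noise: noise (like many
# synthesis artifacts) shifts energy into the high-frequency band.
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = smooth + np.random.default_rng(1).normal(0.0, 0.3, (64, 64))
r_smooth = high_freq_energy_ratio(smooth)
r_noisy = high_freq_energy_ratio(noisy)
print(r_smooth, r_noisy)
```

A real detector would feed features like this (alongside learned ones) into a classifier; no single heuristic survives contact with state-of-the-art deepfakes.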
Role of Blockchain and Metadata in Content Verification: Blockchain technology offers a beacon of hope in restoring trust to our digital landscape. Its decentralized, tamper-proof ledger persistently records and verifies information, making post-creation alterations nearly impossible. Publications can harness blockchain to maintain a verifiable registry of all disseminated images. This registry can encompass essential details like captions, exact locations, consent statuses, copyright ownership, and more. In essence, it bestows a unique digital fingerprint on content, ensuring its authenticity.
In Conclusion: The deepfake detection landscape is a dynamic battleground, with the sophistication of deepfakes on one side and the tenacity of detection tools on the other. While the challenges are palpable, the emerging tools, coupled with blockchain and metadata innovations, provide a promising horizon in preserving digital content's integrity.
🚧 Deepfakes: Ethical Challenges and Considerations
The Ethical Quandary of Deepfakes: Deepfakes introduce a multitude of ethical dilemmas. One primary concern is the unauthorized replication of an individual's likeness, infringing on personal rights. Notably, the initial wave of deepfakes, which merged genuine visuals with adult content, misused an individual’s image for unsolicited erotic gratification. Even if such manipulated content never directly affects the person involved, it remains morally problematic. Essentially, their image becomes a non-consensual source of amusement and pleasure.
Legal Ambiguities and Responses: Navigating the legal ramifications of deepfakes is intricate. Several potential frameworks, such as copyright laws, right of publicity, section 43(a) of the Lanham Act, along with torts encompassing defamation, false light, and intentional infliction of emotional distress, can be leveraged against deepfakes. Various states are already crafting specific regulations addressing the menace, and on a global scale, the European Union has fortified its Code of Practice to counteract deepfake-fueled disinformation.
Accountability of Tech Developers: The onus of ethical application of deepfakes significantly lies with its developers and disseminators. Tech behemoths, including Microsoft, Google, and Amazon, which equip users with the tools and computational prowess to craft deepfakes rapidly and on a large scale, bear a significant ethical responsibility. It's imperative for these experts, well-versed in the intricacies of deepfake technology, to steer the field towards harnessing the potential benefits of synthetic media while neutralizing its detrimental impacts.
Accountability of the End-User: While individual responsibility is vital in sharing and absorbing digital content, placing the burden primarily on the end-user to discern and counteract malicious deepfakes might be unrealistic due to the sophisticated nature of the deception. Nonetheless, users must exercise discernment in content dissemination. Sharing misleading content, even inadvertently, with one's network should be accompanied by accountability and corrective actions.
💥 Societal Impacts of Deepfakes
Deepfakes, born from machine-learning systems that can manipulate media content, exert a profound influence on society. Their impact can be categorized as follows:
Distorting Democratic Discourse: By spreading misinformation, deepfakes have the potential to change emotions, attitudes, and behaviors, thereby undermining the very pillars of democracy.
Manipulating Elections: There's a risk of deepfakes being weaponized to spread false narratives about candidates or events, influencing electoral outcomes.
Eroding Trust in Institutions: The authenticity of journalism and public institutions can be questioned due to deepfakes. As these manipulations become more convincing, there's a looming danger of a widespread mistrust in news sources and even the veracity of images and videos on social media.
Jeopardizing Public Safety and National Security: Beyond just misleading narratives, deepfakes have the potential to incite violence or create false alarms, posing threats to safety and security.
Damaging Reputations: On an individual level, deepfakes can cause harm by conjuring false scandals or defaming personalities.
In this age, the adage "seeing is believing" stands challenged. Deepfakes blur the line between reality and fiction, making it increasingly challenging for individuals to discern fact from fabrication. This uncertainty has grave implications for journalism, politics, and personal interactions.
The proliferation of deepfakes amplifies the urgency for media literacy education. Such education can enlighten the public about this ever-evolving terrain, ensuring they are equipped with the analytical skills required to navigate it. By fostering a critical approach to media – understanding misinformation threats, analyzing new deceptive techniques, and recognizing the potential of emerging technologies for positive change – we can bolster societal resilience against these challenges.
In tandem with media literacy, there's a pressing need for technological countermeasures, such as advanced digital forensics, and robust regulatory frameworks to mitigate the spread and influence of deepfakes.
To conclude, while deepfakes pose formidable societal challenges, through a combined approach of education, technological innovation, and regulation, we can chart a course through this complex landscape.
CTRL + END
In the Soviet Union, synthetic media wasn’t an issue, despite the regime essentially embodying the core concept of a Deepfake. The regime’s mantra could be summarized as:
Fabricate everything, rewrite history!
Maintain control over all individuals and aspects of life!
Use blackmail and terror to suppress dissent and common sense!
There exists only one truth, and it is disseminated by the Party.
The Party serves as the high priest of this ultimate truth.
Personally, as someone who was born in the Soviet era, I’m terrified by the advent of this technology and mostly by its potential misuse. But I think there’s still hope. Let’s examine the following image:
Figure 1 (taken from the research paper Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News ) indicates that exposure to either the 4-second or 26-second deceptive deepfakes did not significantly increase the likelihood of deception compared to the full video with an educational reveal. The 4-second deceptive video was least likely to deceive (14.9%), followed by the 26-second deceptive video (16.4%) and the full video with educational reveal (16.9%). However, these differences were not significant in a logistic regression model. Therefore, the hypothesis that individuals exposed to an unrevealed false statement in a deepfake political video are more likely to believe it, is rejected.
As is often the case, we need to find a balance and implement proper regulations promptly. We can’t afford any more mistakes. We are engaged in a strategic game with a formidable adversary, akin to playing chess against a monster while under ‘zeitnot’, a situation in chess where a player has very little time left on the clock.
And… Given the amplification of incendiary content by social media platforms, and their apparent inability or lack of willingness to moderate their own content (X, previously known as Twitter is a perfect example these days), it becomes imperative for governments to intervene and regulate deepfake technology.
Ironically, we live in a “fucked-up dystopia” ¹, and it is the duty of responsible bodies and institutions to enlighten citizens about the presence of this technology. We must adapt our senses, training our eyes and ears to approach online content with skepticism, understanding that not everything we encounter on the internet should be taken at face value.
I think we all remember what happened in October 2017, when Russian forces targeted the cellphones of NATO troops in an attempt to gather intelligence during exercises in Poland. It’s not a far stretch to envision Moscow transmitting deceptive or perplexing commands, as if they were coming from the soldiers’ own superiors. Any hesitation in response could potentially offer a strategic edge during a crisis.
We find ourselves in an era of unprecedented challenges. The deluge of information has breached all boundaries. Modern technologies have made it increasingly convenient for special services to intrude into private lives, enabling surveillance, blackmail, terror, and various forms of manipulation and defamation. Consequently, the demands on an informed individual have surpassed those of Generation П, or Homo Zapiens ². It’s time for us to set aside our dreams and forge our future with unwavering resolve.
Reference:
¹ Term coined by Schick, in her “Deep fakes and the infocalypse: What you urgently need to know”, 2020
² Victor Pelevin’s novel “Generation П” is also known as “Homo Zapiens” and “Babylon” in English translations
As of Summer 2021, the capability to produce convincing deepfakes was predominantly the domain of specialized tech companies. However, the advent of DALL-E 2, Midjourney, and Stable Diffusion democratized this ability, making it accessible to a wider audience.
More recently, we've witnessed human-like text-to-speech that is (a) fast and (b) affordable [0]. We already know that fraudsters leverage the latest text-to-speech breakthroughs to launch large-scale attacks.
What we face now is a pressing need to develop robust countermeasures capable of withstanding the challenges posed by such sophisticated, scalable technologies.
[0] Commercial https://play.ht + https://elevenlabs.io
Open-source https://github.com/Plachtaa/VALL-E-X