Deepfakes: Decoding and Defending
In today’s digital age, information security is a top priority for organisations of all sizes, and the rise of deepfakes has added a new layer of complexity. Deepfakes are hyper-realistic forged media generated with cutting-edge artificial intelligence, and they have emerged as a potent threat, sparking concerns about misinformation, fraud, and reputational harm across many sectors. This post explores what deepfakes are, how they can impact an organisation, and best practices for identifying and defending against them.
Understanding Deepfakes
The term “deepfake” combines “deep learning” and “fake”. These forgeries are created using algorithms trained to mimic human behaviour, blurring the line between reality and fiction and leaving viewers questioning what is real and what is not.
Deepfakes utilize deep learning techniques to manipulate existing audio, video, or images or generate new forged media from scratch. The algorithms analyse source media content to learn how to mimic qualities like facial expressions, lip movements, voice, tone and inflections. This mimicked data is then leveraged to create realistic fakes depicting events or speech that never actually happened.
While deepfakes began with celebrity face-swapping videos, they now extend to dangerous impersonations of political leaders, executives, and employees.
The Rise and Potential Threats of Deepfakes
The potential uses of deepfakes are many, ranging from entertainment to politics. But as these fakes become ever more realistic, they pose a significant threat to organisations of every kind. According to a report released by the cloud computing firm VMware, deepfake attacks are on the rise [1].
“Cybercriminals are now incorporating deepfakes into their attack methods to evade security controls,” said Rick McElroy, principal cybersecurity strategist at VMware. “Two out of three respondents in our report saw malicious deepfakes used as part of an attack, a 13% increase from last year, with email as the top delivery method.”
As the accessibility of deepfake creation grows, it introduces several critical risks, some of which are highlighted below with reference to recent examples:
- Social Engineering Fraud and Scams: Deepfakes can bypass security measures that depend on photos or videos for authentication. They can aid identity theft, impersonate executives to initiate unauthorised transactions, or manipulate financial information. For example, in 2019 criminals used a deepfake audio impersonation of a company executive to trick an employee into transferring $243,000 to a fraudulent account [2].
- Disinformation campaigns: State-sponsored or malicious actors can leverage deepfakes to spread fake news, influence opinions, or interfere in political processes. For instance, in late 2018 the government of Gabon released a video of president Ali Bongo, who was seriously ill at the time, to reassure citizens that he was healthy and working; suspicions that the footage was a deepfake helped spark an attempted coup [3].
- Corporate Espionage: Sensitive internal communications or meetings with customers/partners can be forged to extract competitive intelligence. In 2020, a deepfake video call duped an energy company employee into handing over confidential data worth millions to a competitor [4].
- Reputational damage: Realistic fake content can harm corporate or personal reputations and public trust. For example, in 2019 a deepfake video of Facebook CEO Mark Zuckerberg, styled as a CBS news segment, circulated online falsely depicting him boasting about controlling user data [5].
The Line of Defence: Countering the Deepfake Onslaught
Organisations require a multilayered strategy that includes education, awareness, and vigilance to detect, respond to, and build resilience against deepfakes. The following recommendations outline broadly applicable safeguards. They are not comprehensive, and additional industry- and organisation-specific measures should be considered when designing a robust system of controls against the diverse risks posed by deepfakes [9]:
- Leverage AI Deepfake Detection Tools: Use technology to fight technology. Several companies are developing deepfake detection software that applies machine learning to flag manipulated images and video, helping to identify fake content before it causes harm. Vendors such as Sentinel, Intel (FakeCatcher), and Sensity (formerly Deeptrace) offer technologies that can analyse media for signs of manipulation.
- Employee Training: First and foremost, train employees through awareness programs to apply critical thinking and spot inconsistencies and suspicious activity. Deepfake videos may not perfectly sync lip movements with speech, and often lack natural eye movement and blinking. UC Berkeley offers online deepfake detection courses [6].
- Identification and Strict Validation: Implement stringent communication security measures, incorporating real-time identity verification for sensitive channels. This can include liveness testing for video-call participants, mandatory 2FA with one-time passwords or PINs, and established biometrics to validate identities [12]. These measures help ensure the authenticity of individuals engaged in real-time interactions. The UK National Cyber Security Centre provides guidance on best practices for video-conferencing authentication [7].
- Incident Response: Develop incident response plans covering deepfake detection, personnel training, and crisis communications. The European Parliament’s study on tackling deepfakes in European policy can help inform response planning [8].
- Foster Intelligence Sharing: Partner with industry groups like the Content Authenticity Initiative [11] and experts like Sensity to share intelligence on evolving techniques and detection breakthroughs. Support research efforts such as the DARPA Semantic Forensics (SemaFor) program on deepfake detection [10], and advocate for societal awareness through education initiatives.
- Content Watermarking: Consider watermarking your images and videos. Watermarking adds a visible or invisible identifier to content, making it harder to manipulate without detection. Even if someone does create a deepfake, a missing or broken watermark can help expose the forgery.
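To make the invisible-watermark idea concrete, here is a minimal sketch of least-significant-bit (LSB) embedding, assuming the image is available as a flat list of 8-bit pixel values. The function names and the `ACME-2024` mark are hypothetical illustrations; production systems would use robust, cryptographically signed watermarking schemes rather than raw LSB:

```python
def embed_watermark(pixels, mark: bytes):
    """Hide `mark` in the least significant bits of a flat list of 8-bit pixel values."""
    # Unpack the mark into individual bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        # Overwrite only the lowest bit, so each pixel changes by at most 1.
        out[i] = (out[i] & ~1) | bit
    return out

def extract_watermark(pixels, length: int) -> bytes:
    """Recover `length` bytes previously embedded with embed_watermark."""
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[b * 8:(b + 1) * 8]))
        for b in range(length)
    )
```

Note that an LSB mark is deliberately fragile: re-encoding or editing the media destroys it, which is exactly why a missing or corrupted mark can serve as a tamper signal.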
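The one-time-password check recommended under Identification and Strict Validation can be sketched with nothing but the Python standard library. This is a minimal RFC 6238 TOTP implementation for illustration; `verify_participant` and its one-step drift window are assumptions for this sketch, not any specific product’s API:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Derive an RFC 6238 time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_participant(secret_b32, submitted_code, step=30):
    """Accept the current or previous time step to tolerate small clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, now - drift, step), submitted_code)
               for drift in (0, step))
```

Checked against the RFC 6238 test secret (`12345678901234567890` in base32), `totp(secret, 59)` yields `287082`, the six-digit truncation of the published SHA-1 test vector.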
As deepfake technology advances, combating disinformation requires a combination of technology, education, awareness, and collaboration. By implementing robust, tailored safeguards and promoting coordinated action, organisations can build resilience against this emerging threat. Remaining vigilant, informed, and proactive is key to defending against deepfakes in the digital age.
References:
[1]. VMware Report Warns of Deepfake Attacks. Link
[2]. A Voice Deepfake Was Used to Scam A CEO. Link
[3]. How misinformation helped spark an attempted coup in Gabon. Link
[4]. Deepfake Audio Steals US$243,000 From UK Company. Link
[5]. This Deepfake of Mark Zuckerberg. Link
[6]. New technique for detecting deepfake videos. Link
[7]. Video conferencing services: security guidance for organisations. Link
[8]. Tackling deepfakes in European policy. Link
[9]. Contextualizing Deepfake Threats to Organizations. Link
[10]. DARPA Semantic Forensics program. Link
[11]. Content Authenticity Initiative. Link
[12]. AI Powered Identity Verification. Link