Deepfake Technology and Its Perils
In late January 2024, social media users were greeted with shocking images depicting pop idol Taylor Swift engaged in sexually explicit acts. The compromising photos were quickly flagged, but as with all things on the Web, once they’re out, they’re out: before Meta and X were able to filter the content, millions had already seen it. Unfortunately, Taylor Swift isn’t the only victim of AI-generated nonconsensual nudity. Al Jazeera reports that a shocking 96% of all deepfake images on the Internet are pornographic in nature.
Rise of Deepfakes
Forgeries aren’t new. Criminals have been using imitation tactics to fool unsuspecting victims for centuries. But with the rise of easily accessible artificial intelligence (AI) tools, deception on a massive scale is well within any criminal’s reach, regardless of technical ability. To make matters worse, deepfake quality continues to improve at an alarming rate, blurring the line between what’s real and what isn’t. Thanks to AI tools’ impressive fidelity, a deepfake can be indistinguishable from the subject on which it’s based.
Deepfake Dangers
AI-powered deepfakes pose a real threat to security. Bad actors can easily replicate the image and voice of influential people to persuade others to act under false pretenses. They can use their subject’s likeness to spread misinformation, defraud victims, and inflict irreparable reputational damage on public and private figures alike. They can also commit financial fraud or even sway an election.
Election Fraud
A recent YouGov poll suggested that 85% of American respondents have major concerns about how AI could impact elections. It turns out government officials believe these worries are warranted: agencies like the Cybersecurity and Infrastructure Security Agency have published documentation on how to spot deepfakes and what to do about them. The risk that generative AI poses to election security and integrity is a serious threat that public sector organizations must combat. There is growing concern about deepfakes being used to influence and misinform the public, and political factions and even foreign adversaries can use AI to weaponize social media.
Financial Fraud
The financial risk AI deepfakes pose to corporations and governments can’t be overstated. Even during the technology’s formative years, it was already costing companies hundreds of thousands of dollars. Trend Micro and the Wall Street Journal covered a story in 2019 detailing how a United Kingdom-based business lost nearly a quarter-million dollars after a cybercriminal impersonated its CEO’s voice using AI audio tools. This wasn’t an isolated incident: in 2024, the South China Morning Post covered a similar story about a multinational firm in Hong Kong that lost $2.4 million in almost the same way. Experts predict dramatic growth in deepfake threats like these over the coming years, making AI a serious risk to the private sector for the foreseeable future.
Reputation Ruin
What about when criminals use deepfakes for personal vendettas? A scorned lover, an overzealous fan, or simply a malicious nobody can tarnish the reputation of their target. In 2021, a Pennsylvania mother was accused of creating deepfake content that depicted three teenage cheerleaders engaged in illicit behavior, allegedly in an effort to get her daughter’s rivals kicked off their competitive cheerleading squad. Although this story doesn’t feature a world-class criminal robbing a Fortune 500 company, the impact on those children was likely traumatic.
Politics
London Mayor Sadiq Khan was recently embroiled in an AI scandal in which a criminal replicated the politician’s voice reading a script in which he appeared to disparage Remembrance Day commemorations ahead of a pro-Palestinian march. The BBC reported on the public outrage the incident caused and on how groups opposed to Khan’s politics promoted the audio to their followers, prompting calls for the mayor’s resignation.
Suharto
Copycatting the voice or image of a living person is scary. Resurrecting the dead to deliver a speech is even more terrifying, which is exactly what happened ahead of Indonesia’s 2024 elections. The political party Golkar used AI to generate a deepfake of Suharto, the country’s second president and one of its most infamous dictators. The video was intended to remind the public of the importance of their votes and what the upcoming elections meant for the Indonesian people. The tactic proved controversial, however, with many labeling it shock-value propaganda. CNN covered the story in detail, examining the video’s reception and the debate it sparked over the ethics of AI-generated election material.
Defending Against a Convincing Copy
While completely eradicating AI-powered mischief is impossible, managing the impact of deepfakes is not.
Increased Scrutiny
The first step in combating doctored content is practicing a heightened level of skepticism: if something seems unusual or unbelievable, it probably is. Security practitioners would do well to educate organizations and individuals on the tell-tale signs of a deepfake, along with offering guidance on how to react. Greater investment in identification resources and training will go a long way toward mitigating risk.
Detection Tools
Dozens of apps can detect AI involvement in written materials; teachers, for example, now use them to determine whether a student’s book report was generated by ChatGPT. Detection technology for video fakes is advancing as well. Intel, for instance, has announced FakeCatcher, billed as a real-time deepfake detector that analyzes video for signs of manipulation. One hopes that media broadcasters will want to ensure the legitimacy of their output by putting such filters to use.
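To make this concrete, here is a minimal sketch of how a security team might screen an image with an off-the-shelf classifier from the Hugging Face Hub. The model id below is purely illustrative, not a specific vetted product; any image-classification checkpoint trained to distinguish real from AI-generated content could be substituted.

```python
# Minimal sketch: screening an image with a pretrained real-vs-AI classifier.
# Requires: pip install transformers torch pillow
from transformers import pipeline

# NOTE: "example-org/ai-image-detector" is a hypothetical model id used for
# illustration only -- substitute a detector your organization has vetted.
detector = pipeline("image-classification", model="example-org/ai-image-detector")

# Classify a local image file; the pipeline returns labels with scores.
results = detector("suspicious_photo.jpg")
for result in results:
    print(f"{result['label']}: {result['score']:.1%}")

# A simple policy: flag for human review when the "fake" score is high.
top = max(results, key=lambda r: r["score"])
if top["label"].lower() in {"fake", "ai-generated"} and top["score"] > 0.8:
    print("Flagged for human review.")
```

In practice, detector scores are probabilistic and degrade as generation techniques improve, so automated flags like this should trigger human review rather than deliver final verdicts.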
Enhanced Security
Businesses and governments should also consider fighting fire with fire. Many tech companies are using AI to build tools that alert users when the content they’re consuming was made with generative software. Defending against deepfake threats should be part of every organization’s vulnerability planning and assessment practices.
Personal Accountability
Deepfakes exist largely because AI systems have access to a wide catalog of visual and audio data on a person. Celebrities might struggle to keep their likeness private, but regular folks can limit the number of videos, pictures, and recordings of themselves they share online. Or, perhaps more practically, they can adopt appropriate privacy settings on their social media accounts, making it harder for bad actors to build a profile on them. Protecting one’s online identity will only become more essential as AI deepfake technology grows more advanced.
A Fake New World
Like all technological breakthroughs, AI has the potential to change the world for the better. Unlike previous leaps forward, however, AI also gives more power to bad actors than ever before. Deepfakes are a reminder that everyone must do their part to promote security best practices and protect themselves from becoming victims.
If your organization is interested in learning more about how to manage the growing threat of deepfakes, consider signing up for one of our threat, risk, and vulnerability trainings here.