Deepfakes, hyper-realistic video or audio fabricated with AI, are making it harder than ever to tell what’s real. Examples of AI-generated media are countless and coming at us fast.
A few weeks ago, Maryland school principal Eric Eiswert was targeted: a deepfake impersonating his voice led the public to believe he’d made racist and antisemitic comments. Last year, a terrified mother received a fake phone call from her “daughter” begging her to pay a ransom. A few months ago, voters in New Hampshire received robocalls, supposedly from Joe Biden, telling them not to vote.
In January, several 4chan users participated in a challenge to create sexually explicit images of Taylor Swift with AI. And in the last few weeks, Drake dropped a diss track aimed at Kendrick Lamar with verses featuring an AI-generated Tupac Shakur. (Shakur’s estate immediately sent a cease-and-desist letter, with which Drake complied.)
Deepfakes have the potential to affect anyone in any industry at any time. The increasing sophistication of fake videos and audio is eroding trust, fueling misinformation, and potentially letting bad actors off the hook with plausible deniability.
Misinformation and legal challenges
In 2023, the U.S. Federal Trade Commission warned: “Thanks to AI tools that create ‘synthetic media’ or otherwise generate content, a growing percentage of what we’re looking at is not authentic, and it’s getting more difficult to tell the difference.”
The law is struggling to keep up with this tech explosion. Copyright laws might not protect you from someone making a compromising deepfake of you, or posing as you for art or entertainment. Is an AI-generated song that sounds like Lizzo plagiarism or a new art form?
The debate over regulating deepfakes is just beginning. It’ll be hard to ban them altogether. And what about deepfakes created in other countries? Who legislates those? One thing is for sure: deepfakes are here, and we need to have a serious conversation about how to mitigate their risks.
Strong laws can protect people and intellectual property, while fact-checking tools and media literacy initiatives can help us spot deepfakes. And companies like Truepic and Sensity are developing deepfake detection tech.
While you can never be 100% certain you’ve spotted a deepfake, there are steps you can take to avoid getting scammed in real time. Real-time deepfakes are especially dangerous because they can be used to impersonate friends, family, and colleagues in phone calls and video chats.
Here are some ways to protect yourself from being scammed:
Be skeptical: Develop a healthy skepticism towards media you see online. Question what you see or hear, especially if it involves requests for money or personal information.
Video checks: In videos, look for red flags like unnatural blinking, mismatched hair or eyebrows, or inconsistencies in skin tone. If you’re speaking to someone who’s asking you for money, ask the person to turn their head or put a hand in front of their face; deepfakes may struggle to render these movements convincingly.
In-person meetings: When possible, meet in person to verify someone’s identity if money is changing hands.
Code words: Establish a code word with family and friends that only your inner circle knows. If someone claiming to be a relative calls and asks for money, ask for the code word to verify their identity.
Deepfake technology is constantly evolving, so vigilance and healthy skepticism are crucial, whether you’re speaking to someone on the phone or watching a video on a news website.
Read more about deepfakes: Media literacy in the age of deepfakes (MIT)