The viral spread of manipulated videos, often involving AI, is forcing people to distrust what they see and hear.
The phone rings. It is the U.S. Secretary of State calling. Or is it?
For Washington insiders, seeing and hearing is no longer believing, thanks to a spate of recent incidents involving deepfakes impersonating top officials in President Donald Trump’s administration.
Digital fakes are becoming more widespread in corporate America, too, as criminal gangs and hackers associated with adversaries including North Korea use synthetic video and audio to impersonate CEOs and low-level job candidates to gain access to critical systems or business secrets.
Thanks to advances in artificial intelligence, creating realistic deepfakes is easier than ever, causing security problems for governments, businesses and private individuals and making trust the most valuable currency of the digital age.
Responding to the challenge may require laws, better digital literacy and technical solutions that fight AI with more AI.
“As humans, we are remarkably susceptible to deception,” said Vijay Balasubramaniyan, CEO and founder of the tech firm Pindrop Security. But he believes solutions to the challenge of deepfakes may be within reach: “We are going to fight back.”
This summer, someone used AI to create a deepfake of Secretary of State Marco Rubio in an attempt to reach out to foreign ministers, a U.S. senator and a governor over text, voice mail and the Signal messaging app. In May, someone impersonated Mr. Trump’s chief of staff, Susie Wiles.
Another phony version of Mr. Rubio had popped up in a deepfake earlier this year, saying he wanted to cut off Ukraine’s access to Elon Musk’s Starlink internet service. Ukraine’s government later rebutted the false claim.
The national security implications are huge: People who think they are chatting with Mr. Rubio or Ms. Wiles, for instance, might discuss sensitive information about diplomatic negotiations or military strategy.
“You’re either trying to extract sensitive secrets or competitive information or you’re going after access, to an email server or other sensitive network,” Kinny Chan, CEO of the cybersecurity firm QiD, said of the possible motivations.
Synthetic media can also aim to alter behavior. Last year, Democratic voters in New Hampshire received a robocall urging them not to vote in the state’s upcoming primary. The voice on the call sounded suspiciously like then-President Joe Biden but was actually created using AI.
Their ability to deceive makes AI deepfakes a potent weapon for foreign actors. Both Russia and China have used disinformation and propaganda directed at Americans as a way of undermining trust in democratic alliances and institutions.
Steven Kramer, the political consultant who admitted sending the fake Biden robocalls, said he wanted to send a message about the dangers deepfakes pose to the American political system. Mr. Kramer was acquitted in June of charges of voter suppression and impersonating a candidate.
“I did what I did for $500,” Mr. Kramer said. “Can you imagine what would happen if the Chinese government decided to do this?”
Scammers Target the Financial Industry
The greater availability and sophistication of the programs mean deepfakes are increasingly used for corporate espionage and garden-variety fraud.
“The financial industry is right in the crosshairs,” said Jennifer Ewbank, a former deputy director of the CIA who worked on cybersecurity and digital threats. “Even individuals who know each other have been convinced to transfer vast sums of money.”
In the context of corporate espionage, deepfakes can be used to impersonate CEOs asking employees to hand over passwords or routing numbers.
Deepfakes can also allow scammers to apply for jobs—and even perform them—under an assumed or fake identity. For some, this is a way to access sensitive networks, to steal secrets or to install ransomware. Others just want the work and may be working a few similar jobs at different companies at the same time.
Authorities in the U.S. have said that thousands of North Koreans with information technology skills have been dispatched to live abroad using stolen identities to obtain jobs at tech firms in the U.S. and elsewhere. The workers get access to company networks as well as a paycheck. In some cases, the workers install ransomware that can be later used to extort even more money.
The schemes have generated billions of dollars for the North Korean government.
Within three years, as many as one in four job applications is expected to be fake, according to research from Adaptive Security, a cybersecurity company.
“We’ve entered an era where anyone with a laptop and access to an open-source model can convincingly impersonate a real person,” said Brian Long, Adaptive’s CEO. “It’s no longer about hacking systems—it’s about hacking trust.”
Experts Deploy AI to Fight AI
Researchers, public policy experts and technology companies are now investigating the best ways to address the economic, political and social challenges posed by deepfakes.
New regulations could require tech companies to do more to identify, label and potentially remove deepfakes on their platforms. Lawmakers could also impose greater penalties on those who use digital technology to deceive others—if they can be caught.
Greater investments in digital literacy could also boost people’s immunity to online deception by teaching them ways to spot fake media and avoid falling prey to scammers.
The best tool for catching AI may be another AI program, one trained to sniff out the tiny flaws in deepfakes that would go unnoticed by a person.
Systems like Pindrop’s analyze millions of datapoints in any person’s speech to quickly identify irregularities. The system can be used during job interviews or other video conferences to detect if the person is using voice cloning software, for instance.
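Pindrop has not published the details of its system, but the general idea behind such detectors can be sketched in a few lines of code. The toy example below flags audio whose frame-to-frame spectral variation is suspiciously low, one simplistic stand-in for the "irregularities" these systems hunt for; the feature and the threshold are invented for illustration, not Pindrop's method.

```python
# A toy sketch of one idea behind voice-deepfake detection: natural speech
# varies from moment to moment, while some synthetic voices are spectrally
# "too smooth." The feature and threshold here are illustrative only.
import numpy as np


def frame_spectra(samples: np.ndarray, frame_len: int = 512, hop: int = 256) -> np.ndarray:
    """Split audio into overlapping windowed frames; return magnitude spectra."""
    frames = [
        samples[i : i + frame_len] * np.hanning(frame_len)
        for i in range(0, len(samples) - frame_len, hop)
    ]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))


def variation_score(samples: np.ndarray) -> float:
    """Mean spectral change between adjacent frames; lower = suspiciously smooth."""
    spectra = frame_spectra(samples)
    flux = np.linalg.norm(np.diff(spectra, axis=0), axis=1)
    return float(np.mean(flux) / (np.mean(spectra) + 1e-9))


if __name__ == "__main__":
    # Stand-in for one second of 16 kHz audio; a real system would load a call
    # recording. The cutoff 2.0 is made up; production systems calibrate
    # thresholds on large sets of known-genuine and known-cloned voices.
    rng = np.random.default_rng(0)
    audio = rng.normal(size=16000)
    score = variation_score(audio)
    print("score:", score, "suspicious:", score < 2.0)
```

A real detector would combine many such signals, which is why systems like Pindrop's analyze millions of datapoints rather than a single statistic.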
Similar programs may one day be commonplace, running in the background as people chat with colleagues and loved ones online. Someday, deepfakes may go the way of email spam, a technological challenge that once threatened to upend the usefulness of email, said Mr. Balasubramaniyan, Pindrop’s CEO.
“You can take the defeatist view and say we’re going to be subservient to disinformation,” he said. “But that’s not going to happen.”
Yet these efforts would only address the effects of deepfakes. Addressing the underlying cause is much more difficult.
Nothing New
In the Old Testament book of Ecclesiastes, King Solomon stated: “The thing that has been, it is that which shall be; and that which is done is that which shall be done: and there is no new thing under the sun” (1:9).
Solomon never saw a video during his time ruling ancient Israel, let alone one created with Sora, Adobe Firefly or other AI tools. Yet deepfakes are just the latest iteration in a long history of people deceiving one another.
The world’s first photograph was taken in 1826. Only about 20 years later, a figure in a photographic negative was painted over so the person never appeared in the printed image. Throughout the rest of the 19th century, images were retouched with increasing sophistication for wealthy clients and magazine editors.
Image editing started out with good intentions. “Most of the earliest manipulated photographs were attempts to compensate for the new medium’s technical limitations—specifically, its inability to depict the world as it appears to the naked eye,” Mia Fineman, an assistant curator of photography at the Met, said in an interview with PBS.
In most cases, manipulation was used to make the image “look the way it felt” rather than to deceive. Yet over time, these techniques began to be used to tell a different story than what really happened.
A famous example is a composite in which Abraham Lincoln’s face, cut from an 1860 photograph, was pasted onto the body of former Vice President John C. Calhoun from an 1852 engraving. The composite image, which portrayed the 16th president draped in a robe in a near-Napoleonic pose, circulated during a wave of heroic-style portraits produced after his assassination. For a century, no one noticed the image was fake.
Adobe Photoshop, developed in 1987 and first released in 1990, ushered in a new era of deceptive image manipulation. With a computer and the right software, anyone could change images. Advertisers, publishers and propaganda machines churned out so many digitally altered images that “photoshop” became a generic term for any photo manipulation.
Enter deepfakes, which in just a handful of years have become much harder to detect. “Presently, there are slight visual aspects that are off if you look closer, anything from the ears or eyes not matching to fuzzy borders of the face or too smooth skin to lighting and shadows,” Peter Singer, cybersecurity and defense strategist at the New America think tank, told CNBC.
But the “tells” are becoming harder to find as the technology advances, Mr. Singer said.
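To make one of those “tells” concrete, here is a rough sketch, not any product’s actual method, of checking for unnaturally smooth skin. It scores the texture inside each detected face using the variance of the Laplacian, a standard blur measure; the cutoff value is made up for the demo, and real detectors weigh many such signals together.

```python
# A crude illustration of one visual "tell": unnaturally smooth skin.
# Texture is measured with the variance of the Laplacian, a common blur
# metric; the threshold below is arbitrary and for demonstration only.
import cv2


def face_texture_scores(image_path: str) -> list[float]:
    """Return a Laplacian-variance texture score for each detected face."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    scores = []
    for (x, y, w, h) in cascade.detectMultiScale(img, 1.1, 5):
        face = img[y : y + h, x : x + w]
        scores.append(float(cv2.Laplacian(face, cv2.CV_64F).var()))
    return scores


# Hypothetical usage on a video frame saved as "frame.jpg". Very low variance
# can mean airbrushed or AI-smoothed skin, but lighting, focus and compression
# all move this number, so it is a hint, never proof.
for score in face_texture_scores("frame.jpg"):
    print("texture score:", score, "flag:", score < 50.0)  # 50.0 is arbitrary
```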
Think of generative AI photos and videos. What started as primitive media that most people could immediately tell was not real has advanced to a much more lifelike state. AI videos regularly go viral, with those sharing them on platforms like Facebook, TikTok and X not realizing they are fake.
One example is the “AI homeless man prank.” According to The Hill, “An AI-driven TikTok trend is resulting in 911 calls by panicked people who think a man has broken into their homes. The prank uses artificial intelligence to create a picture or video of a ‘homeless man’ entering a person’s home, going through their fridge, or lying in their bed. The prankster sends the fake video to a loved one, who thinks the convincing images are real. Police departments in at least four states have received calls for reported home intrusions only to find out the ‘intruder’ was an AI-generated person…”
While many may think they would not be fooled by a fake video or phone call, as technology advances, it will become harder to avoid deception.
Understanding the Danger
Every tool that mankind has developed can be used for good or evil. The internet has allowed for instant communication, online schooling and access to a plethora of do-it-yourself videos. Yet it also allows for the intentional spread of misinformation, the hacking and stealing of personal information, and the pushing of extremist ideals abroad.
Similarly, some deepfake videos are harmless. They can be used for creativity, humor and satire, and if they are clearly labeled as AI up front, they do not mislead people. But more often than not, they are used as “a perfect weapon for purveyors of fake news who want to influence everything from stock prices to elections,” an MIT Technology Review article stated.
This is contributing to a larger trend of distrust among Americans toward what they see and hear. According to Gallup, “Americans’ confidence in the mass media has edged down to a new low, with just 28% expressing a ‘great deal’ or ‘fair amount’ of trust in newspapers, television and radio to report the news fully, accurately and fairly. This is down from 31% last year and 40% five years ago.” Deepfakes only fuel this perception.
While some amount of skepticism can be healthy, the problem is that many lack the discernment needed to see through deception. Even when viewers know that videos are altered, the deception often still works. People are inclined to believe what they want to believe.
The research organization RAND Corporation reported: “‘Deepfakes play to our weaknesses,’ explains Jennifer Kavanagh, a political scientist at the RAND Corporation and coauthor of ‘Truth Decay,’ a 2018 RAND report about the diminishing role of facts and data in public discourse. When we see a doctored video that looks utterly real, she says, ‘it’s really hard for our brains to disentangle whether that’s true or false.’ And the internet being what it is, there are any number of online scammers, partisan zealots, state-sponsored hackers and other bad actors eager to take advantage of that fact.”
People who cannot trust what they see are faced with three options: Accept the deception, try to ignore it, or spend time analyzing news from multiple sources to discern the truth. Daunted by the effort required to find truth, many unwittingly give in to deception.
Thousands of years before these polls and studies, a seldom-quoted verse in the Bible summed up this human tendency: “The heart is deceitful above all things, and desperately wicked: who can know it?” (Jer. 17:9).
Lies and deception have been a way of life for mankind for millennia, individually and on a national, corporate and political scale. This verse also shows that deceit is deeply rooted within a person’s heart. It is not simply an external problem—it lies at the core of human nature. People are naturally susceptible to deceit. Each individual must fight this tendency in order to find the truth.
With the advance of deepfake technology in a world where deceit is already so rampant, a time has come where facts can be completely distorted and “truth is fallen in the street, and equity [meaning straightforwardness, integrity, truth, or right] cannot enter. Yes, truth fails” (Isa. 59:14-15).
How many times have you found yourself at a loss to locate any source of truth? Looking at the media landscape, it becomes easy to say: “Truth fails.”
But there is a place you can turn to find truth. God’s Word—the Bible—states that it is truth (John 17:17 and II Tim. 2:15). God says that He cannot lie (Titus 1:2) and that His words will not pass away (Matt. 24:35). In a turbulent time when you are not sure who to believe, God’s Word provides comfort and stability on which you can rely.
You do not have to remain unsure whether to take Scripture at face value—you can actually prove it. Our booklet Bible Authority...Can It Be Proven? shows that you can determine, beyond all doubt, that the Bible is truth.
We at The Real Truth are here to help. This magazine uses God’s Word as the bedrock foundation from which to view and understand world events, bringing you the truth hidden beneath the deception that is so common today.
For more, read our articles “Weathering the Misinformation Age” and “‘Should I Be Worried About AI?’”
This article contains information from The Associated Press.