FBI warns scammers are using AI voices to impersonate US officials



Summary

AI impersonation

The FBI warns that scammers use AI-generated voice and text messages to impersonate senior U.S. officials. The scheme targets government personnel and their contacts.

Voice cloning surge

Advances in AI voice tech make it easy to create near-perfect imitations of real people using only seconds of recorded audio. The market is growing rapidly.

Security warning

Officials urge caution when receiving unexpected messages, especially from unfamiliar platforms. The FBI says impersonation scams may lead to account breaches or data theft.


Full story

The FBI issued a warning about an ongoing campaign using artificial intelligence (AI) to impersonate senior U.S. officials through voice and text messages. The scheme involves "smishing" and "vishing" attacks (text-based and voice-based phishing, respectively), designed to trick targets into giving up personal information or account access.

According to the FBI’s public service announcement, attackers are sending AI-generated voice messages claiming to come from high-ranking government figures. The goal is to build trust with the recipient before requesting a move to another messaging platform, where attackers could send malicious links or request sensitive data.

The warning comes as concerns over AI impersonation grow beyond cyberscams. In 2024, the FCC fined a political consultant $6 million for sending AI-generated robocalls that mimicked President Joe Biden’s voice ahead of the New Hampshire primary.

Who are the targets?

The FBI said the campaign primarily targets current and former U.S. federal and state government officials, as well as their contacts. Once attackers gain access to a victim’s account, they could use the contact information it holds to impersonate additional officials or acquaintances, expanding their reach and allowing them to solicit money or personally identifiable information from new targets.

Why is this a growing threat?

AI-generated audio has advanced to the point where voice clones are often indistinguishable from real human speech. Malicious actors can create convincing voice messages that mimic public figures with only a few seconds of audio.

The tools used to build those voice clones are more available than ever. A report from venture capital firm Andreessen Horowitz said the global AI voice market reached $5.4 billion in 2024 and continues to grow, driven by consumer adoption and advances in language processing. By 2026, that market is projected to reach nearly $9 billion.

Those tools now power everything from customer service bots to hands-free driving assistants.

How can people protect themselves?

The FBI advised individuals to verify the identity of anyone requesting sensitive information, especially if the request comes via a new phone number or an unfamiliar platform. Officials recommend checking for misspellings in contact details, examining URLs for irregularities, and listening closely for unnatural voice patterns.

If in doubt, the FBI urged people to contact their organization’s security office or a local FBI field office. Victims should also file reports with the Internet Crime Complaint Center at www.ic3.gov.

“AI-generated content has advanced to the point that it is often difficult to identify,” the FBI said. “When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.”

Emma Stoltzfus (Video Editor), Alex Delia (Deputy Managing Editor), and Ally Heath (Senior Digital Producer) contributed to this report.

Why this story matters

The widespread use of AI-generated voice and text messages to impersonate senior U.S. government officials poses new cybersecurity risks and highlights the increasing sophistication of social engineering attacks, as noted by FBI advisories and numerous reports.

AI impersonation

The ability to convincingly clone voices with AI enables malicious actors to deceive victims more effectively, making detection and prevention significantly more challenging for both individuals and organizations.

Targeting officials

Current and former senior U.S. government officials, along with their contacts, are being specifically targeted in this campaign, raising concerns about the security of sensitive information and the potential for further breaches affecting wider governmental networks.

Evolving social engineering

The adoption of advanced AI techniques in traditional phishing schemes like smishing and vishing demonstrates how social engineering attacks are evolving, necessitating increased public awareness and updated mitigation strategies, as emphasized by the FBI.

Get the big picture

Behind the numbers

Several articles highlight that the FBI advisory comes amid a reported 442% surge in the use of AI-based voice cloning for social engineering between early and late 2024, according to CrowdStrike. The FBI also notes that older adults lost nearly $5 billion to cybercrimes, demonstrating how widespread and costly such attacks can be.

Common ground

Across the spectrum, the articles agree that cybercriminals are exploiting AI-generated voice messages and texts, known as 'vishing' and 'smishing,' to impersonate senior U.S. officials. There is also consensus that the main aim is to deceive recipients into revealing sensitive information or account credentials, posing risks both to individuals and wider government networks.

Context corner

Phishing, smishing and vishing are not new threats, but advancements in AI-enabled voice and text generation have made these schemes harder to detect. Historically, impersonation tactics targeted companies and individuals using emails, but deepfake audio has recently been weaponized, complicating traditional verification and deepening trust vulnerabilities.

Bias comparison

  • Media outlets on the left frame the AI-driven impersonation of senior US officials as a pressing cybersecurity threat, using terms like “hackers,” “scams” and “malicious actors” to highlight deception and systemic risks, emphasizing the vulnerability of government integrity.
  • Not enough coverage from media outlets in the center to provide a bias comparison.
  • Media outlets on the right similarly employ “malicious,” but shift rhetorical tactics toward vigilance and personal responsibility, urging readers to be “vigilant” and follow FBI security advice, fostering a tone of caution intertwined with individual empowerment.

Media landscape


Key points from the Center

  • The FBI issued a public service announcement on Thursday warning that since April 2025, cybercriminals have used AI-generated voice deepfakes to impersonate senior US officials and target current and former government personnel and their contacts.
  • These attacks follow prior FBI warnings and HHS alerts about increasingly sophisticated deepfakes used in voice phishing schemes designed to steal sensitive information and funds through social engineering.
  • Attackers send text and AI-generated voice messages with malicious links disguised as invitations to move conversations to other platforms, aiming to compromise accounts and access broader contact networks for further exploitation.
  • The FBI warned that messages appearing to come from high-ranking US officials should not be trusted without verification, as these breaches may be used to trick individuals into revealing information or transferring money.
  • This ongoing threat suggests cybercriminals may increasingly use AI voice impersonations for financial fraud and account compromise, posing risks to government officials and their associates.


Key points from the Right

  • The FBI warned that criminals are using AI-generated voice messages to impersonate senior U.S. officials, aiming to access personal accounts of government officials and their associates.
  • Since April 2025, a phishing campaign has targeted senior U.S. officials using text and AI-generated voice messages to gain trust and access to personal accounts.
  • The FBI advised individuals to verify the identity of message senders and to be cautious of hyperlinks that may lead to sites that steal login information.
  • The FBI emphasized that AI-generated content can be hard to detect and urged people to consult their security officials if they doubt the authenticity of communications.



Powered by Ground News™