Rising AI Cybersecurity Threats: Surge in Deepfake Scams and Digital Risks

Experts warn that criminals are increasingly exploiting AI to impersonate individuals, automate scams, and breach security systems in 2025.

Imagine picking up your phone and hearing the familiar voice of your boss or a close friend urgently requesting confidential information, only to realize it was never them at all. Instead, you were targeted by a deepfake: a sophisticated AI-powered scam that exploits advanced voice and video manipulation technology. These scenarios, once relegated to science fiction, have become disturbingly real, as confirmed by the 2025 AI Security Report released at the RSA Conference, a major event for cybersecurity experts, law enforcement, and private companies.

The report highlights an alarming trend: cybercriminals are now using artificial intelligence to impersonate individuals, automate scams, and orchestrate attacks on an unprecedented scale. Techniques range from hijacking AI accounts and manipulating AI models to live deepfake video fraud, phishing campaigns, and the poisoning of trusted data sources.

One striking finding involves the information users themselves input into AI tools. Analysis reveals that one out of every 80 AI prompts contains high-risk data, and about one in 13 involves sensitive information—such as passwords, business strategies, or proprietary code—that could expose individuals or organizations to grave security breaches.
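
For readers who want a concrete sense of what risky prompt content looks like, the sketch below shows one simple, illustrative way to flag obvious secrets before text is pasted into a public AI tool. The patterns, names, and sample prompt are assumptions made for this example, not anything drawn from the report, and a real data-loss-prevention tool would use far broader rules.

```python
import re

# Illustrative patterns only; the labels and regexes below are assumptions for
# this example, not an exhaustive or production-grade secret scanner.
SENSITIVE_PATTERNS = {
    "possible API key": re.compile(r"\b(?:sk|pk|api|key)[-_][A-Za-z0-9_]{16,}\b", re.IGNORECASE),
    "possible password assignment": re.compile(r"(?i)\b(?:password|passwd|pwd)\s*[:=]\s*\S+"),
    "possible AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "credit-card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return labels for anything in the text that looks like a secret."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarize this config: password=hunter2, api key sk_live_abcdef1234567890"
    issues = flag_sensitive(prompt)
    if issues:
        print("Hold on - this prompt may contain:", ", ".join(issues))
    else:
        print("No obvious secrets found (which is not a guarantee of safety).")
```

A check like this catches only the most obvious leaks; the safer habit, as the report suggests, is simply to keep passwords, strategy documents, and proprietary code out of public AI tools altogether.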

The threat has grown more sophisticated. In early 2024, for instance, a British engineering firm lost £20 million after attackers used deepfake video during a virtual meeting to convincingly imitate company executives. The criminals, aided by real-time video manipulation tools now found on criminal forums, mimicked faces and voices so well that they persuaded an employee to wire the funds.

AI-driven impersonation goes far beyond video calls. Tools like GoMailPro, sold openly on underground markets, leverage ChatGPT technology to create tailor-made phishing emails with flawless language and the right emotional triggers. Unlike the clumsy scams of the past, these emails are difficult to distinguish from legitimate correspondence. GoMailPro alone can generate thousands of unique messages, bypassing spam filters and making life even harder for defenders.

Another example, the X137 Telegram Console, uses Gemini AI to monitor and reply to chat messages automatically. It can convincingly play the role of a customer support agent or a known contact, carrying on real conversations with multiple victims simultaneously, no human scammer required.

AI is powering large-scale sextortion scams too, with messages that reference fake compromising material and demand payment. Where scammers once mass-sent identical messages, AI now generates variations that feel personal and urgent, rephrasing threats to evade detection and increase the psychological pressure on each victim.

Cybercriminals are also targeting the very AI accounts people use. Hackers employ malware and phishing attacks to steal logins and API keys for popular AI platforms, then sell these credentials in bulk on dark web markets. Some attackers deploy specialized tools to bypass two-factor authentication and other advanced security measures, granting them unauthorized access for further scams and malware generation.

Meanwhile, attackers on the dark web have devised methods to "jailbreak" AI models, manipulating them to answer restricted questions or perform illicit tasks. Some AI models can even be tricked into jailbreaking themselves via carefully crafted prompts, demonstrating intrinsic vulnerabilities in current systems.

AI is not just a tool for creating phishing emails; it's now being used to help code malware, build ransomware scripts, and orchestrate denial-of-service attacks. The ransomware group FunkSec reportedly leverages AI for around 20% of its operations, including developing tools that bring down websites and conducting automated victim communications via AI-powered chatbots.

Some cybercriminals exploit the AI hype itself, falsely marketing older hacking utilities as powered by advanced AI. Others, like the developers of the DarkGPT chatbot, actually deliver on the claim, using large language models to comb through stolen databases for high-value targets.

Not all attacks require breaching AI security systems. Increasingly, adversaries engage in "AI poisoning"—feeding false or misleading information to AI models so that their outputs become biased, harmful, or simply inaccurate. A recent case saw over 100 tainted AI models uploaded to open-source repositories, while a Russian disinformation campaign published millions of fake articles meant to trick AI chatbots into parroting propaganda.

As AI-powered cybercrime escalates in realism, speed, and reach, experts stress individual vigilance. The following precautions are recommended:

  • Avoid entering sensitive information into public AI tools. Even seemingly harmless data can be logged and misused.
  • Install strong antivirus software capable of detecting AI-generated malware and phishing attempts.
  • Enable two-factor authentication (2FA) on all accounts, especially AI platforms.
  • Be cautious with unexpected video or voice messages. Always verify identity before acting, as deepfakes can be highly convincing.
  • Consider using a reputable personal data removal service to reduce your digital footprint and make it harder for scammers to gather the information they need for social engineering.
  • Monitor your financial accounts regularly for unusual or suspicious activity.
  • Use a secure password manager to create and store unique passwords for each account, minimizing the damage from credential leaks; a sketch after this list shows what generating such passwords looks like.
  • Keep all software up to date to patch vulnerabilities that AI-powered malware may seek to exploit.
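
As a small illustration of the password-manager point above, the Python sketch below generates the kind of long, random, per-account password a good manager creates for you. The length and character set are assumptions chosen for the example, and a real password manager also handles secure storage and autofill, which this snippet does not.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A unique password per account means a credential leak at one service
# does not unlock any of the others.
for account in ("email", "bank", "ai platform"):
    print(account, "->", generate_password())
```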


Ultimately, as organized cybercrime syndicates employ AI to automate fraud and launch sophisticated, hard-to-detect scams, defending oneself requires a combination of technical tools and behavioral awareness. Experts caution that no single solution is foolproof—it takes ongoing vigilance, layered security measures, and healthy skepticism to keep pace with this rapidly changing threat landscape.

If you've encountered an AI scam or have concerns about new threats, sharing your experience can help others stay informed and better protected. As AI-enabled attacks become more widespread, staying educated and proactive remains the most effective line of defense.