AI Fraud Is Scaling Like A Startup—And Banks Are Struggling To Keep Up

In March 2025, a finance director at a multinational firm in Singapore joined what appeared to be a routine Zoom call with senior leadership, including the CFO and other executives. Everyone looked and sounded real. The finance director authorized a $499,000 wire transfer. The problem? The people and voices on that call were deepfakes. By the time the company discovered the fraud, the money was gone.

Fraud is no longer a matter of isolated criminal acts. It has become an organized, scalable industry powered by artificial intelligence (AI). What used to require technical expertise, insider access, or coordinated human effort can now be executed with off-the-shelf tools, automation, and even subscription-based fraud kits.

Meanwhile, banks are still largely relying on rule-based systems, static identity checks, and compliance-heavy processes designed for a very different era. This has widened the gap between how fraud is created and how it is prevented.

Key Takeaways

  • AI has turned fraud into a scalable business model, enabling deepfakes, voice cloning, and automated scams to operate like startups.
  • Banks are falling behind, constrained by legacy systems, regulatory friction, and siloed data that limit real-time threat detection.
  • The gap is widening, as attackers innovate faster, forcing financial institutions to adopt AI-native, proactive defense strategies.

The Scale of the Problem

AI-driven scams are growing at unprecedented rates. Deepfake fraud attempts have increased by more than 2,000% in recent years, and by some estimates a new attempt now occurs every few minutes.

In 2025, global losses from AI-enabled fraud reached approximately $21 billion. More broadly, Nasdaq Verafin estimated total losses from bank fraud and scams at $579 billion worldwide, underscoring the scale of the threat.

The cost per attack is rising as well: a single successful voice fraud incident costs enterprises an average of $680,000.

More importantly, access to these tools is no longer limited to sophisticated operators. A voice can be cloned cheaply and easily from just three seconds of publicly available audio, with an 85% accuracy match to the original speaker. Bad actors can also launch phishing campaigns at scale using AI-written scripts.

This has led to what many experts now describe as "fraud-as-a-service." Criminal groups build tools once and sell or rent them to others, turning fraud into a repeatable business model.

How AI Fraud Works

Modern fraud is defined by speed, scale, and personalization. Understanding the mechanics of AI fraud helps explain why traditional defenses consistently fall short. Here is how it works:

Target selection: Fraudsters identify individuals with financial influence, such as chief financial officers, finance directors, or treasury department personnel. They gather information from public sources such as LinkedIn pages, earnings call transcripts, conference videos, and social media posts.

Asset creation: Using the collected data, they generate synthetic voices and deepfake video impersonations of known executives, producing media that is nearly indistinguishable from the real thing.

Execution: Attackers contact the victim by email, phone call, or video conference, posing as someone the victim knows. They manufacture urgency through claims of sensitive acquisitions, regulatory obligations, or family emergencies. This psychological pressure suppresses the victim’s instinct to verify the request.

Collection: Once funds are transferred or login credentials are handed over, attackers move the money rapidly through multiple bank accounts, often routing it through cryptocurrency platforms or international wires, making recovery difficult.

Why Banks Are Still Behind

Despite heavy investment in AI fraud detection, most financial institutions are fighting an asymmetric battle. Criminals operate without regulatory constraints, ethical frameworks, or governance obligations; banks enjoy no such liberty.

In addition, CSI’s 2026 Banking Priorities Executive Report revealed that AI-enhanced social engineering attacks (including voice cloning and QR code phishing) jumped 16 percentage points to become the leading cybersecurity concern among financial institutions. Yet 85% of respondents also agree that institutions adopting AI will gain a significant competitive advantage, reflecting the tension between fear and necessity.

Most banks have tried to bolt AI onto legacy, rule-based systems that were never designed to recognize adaptive, mid-conversation attacks. According to SAS experts, crime prevention techniques built on AI-native platforms will outperform those built on existing platforms.

Moreover, third-party risks are on the rise: third-party involvement in attacks doubled year-on-year to 30%. Open banking APIs and mobile wallet integrations expand the attack surface faster than security teams can track it.

Fraud signals are often siloed across departments, making it difficult to detect coordinated or cross-channel attacks.

What Needs to Change

To close the gap, banks need to move beyond incremental upgrades and adopt a different approach that requires action at multiple levels.

Establish verification protocols that use pre-agreed code words before any financial transaction is authorized. Treat any unsolicited request for urgent payment, regardless of the sender’s apparent identity, as high risk. Verify through a second, independent channel before acting.
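
To make the idea concrete, here is a minimal Python sketch of such a gate. It is illustrative only: the salted-hash storage is real Python, but authorize_wire and its callback_confirmed flag are hypothetical stand-ins for a bank’s actual payment workflow and the human step of calling back on an independently verified number.

    import hashlib
    import hmac
    import os

    def hash_code_word(word: str, salt: bytes) -> str:
        # Store pre-agreed code words as salted hashes, never in plain text.
        return hashlib.pbkdf2_hmac("sha256", word.strip().lower().encode(), salt, 100_000).hex()

    def code_word_matches(stored_hash: str, salt: bytes, spoken_word: str) -> bool:
        # Constant-time comparison avoids leaking information through timing.
        return hmac.compare_digest(stored_hash, hash_code_word(spoken_word, salt))

    def authorize_wire(spoken_word: str, stored_hash: str, salt: bytes,
                       callback_confirmed: bool) -> bool:
        # Release the wire only if BOTH independent checks pass:
        # 1) the pre-agreed code word, and 2) a callback on a number taken
        # from the internal directory, never one supplied in the request.
        return code_word_matches(stored_hash, salt, spoken_word) and callback_confirmed

    # Usage: a request that fails either check is treated as high risk.
    salt = os.urandom(16)                        # one random salt per contact
    stored = hash_code_word("blue heron", salt)
    print(authorize_wire("blue heron", stored, salt, callback_confirmed=False))  # False

The point of the design is that a deepfake can defeat either check alone, but compromising both a secret shared out-of-band and an independently sourced callback number is far harder.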

Financial institutions should move from reactive detection to proactive, real-time behavioral analytics. They can achieve this by replacing rule-based systems with AI-native platforms that can identify anomalies in communication patterns, not just in transaction data. Furthermore, they can integrate fraud prevention, anti-money laundering functions, and cybersecurity into a unified risk framework.
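
As a rough illustration of what behavioral anomaly detection means in practice, the sketch below trains scikit-learn’s IsolationForest on synthetic “normal” activity and flags a request whose profile deviates on every axis. The features (amount, hour, recipient-account age, request rate) are hypothetical stand-ins for the far richer signals a real platform would use.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    # Synthetic "normal" behavior: transfer amount, hour of day,
    # recipient-account age in days, and requests per hour per session.
    normal = np.column_stack([
        rng.lognormal(8, 1, 500),      # typical amounts (~$3,000 median)
        rng.integers(8, 18, 500),      # business-hours activity
        rng.integers(200, 3000, 500),  # long-established recipients
        rng.poisson(2, 500),           # low request rates
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # A deepfake-driven request often looks different on every dimension:
    # large amount, late hour, brand-new recipient, burst of urgent asks.
    suspicious = np.array([[450_000, 22, 3, 15]])
    print(model.predict(suspicious))   # -1 flags the event for human review

The same pattern extends beyond payments: scoring login cadence, device changes, and communication metadata in one model is what allows a unified risk framework to catch the cross-channel attacks that siloed systems miss.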

Governance frameworks for AI use in banking remain critically underdeveloped relative to the pace of criminal adoption. Establishing clear, AI-specific regulatory oversight would reduce the friction that currently slows institutions’ adoption of defensive AI.

Bottom Line

AI has turned fraud into a fast-moving, scalable industry, while many banks are still relying on systems built for a slower, more predictable threat landscape. Deepfakes, synthetic identities, and AI-driven social engineering are exposing the limits of traditional defenses and widening the gap between attackers and institutions.

Beyond incremental upgrades, banks must rethink how they detect, verify, and respond to threats in real time. In a world where fraud evolves like a startup, the institutions that survive will be those that can adapt just as quickly.
