Artificial intelligence has unleashed a new wave of threats for enterprise cybersecurity teams. Cyber adversaries, including nation-state actors, are now leveraging AI to generate deepfakes and synthetic identities, bypassing defenses in ways that were unimaginable just a few years ago.
According to Keepnet Labs’ 2026 deepfake statistics analysis, deepfake fraud attempts have increased by over 2,137% in the past three years. Between January and September 2025, AI-driven deepfakes caused more than $3 billion in losses in the U.S. alone. Deloitte’s Center for Financial Services projects that fraud losses facilitated by generative AI will climb from $12.3 billion in 2023 to $40 billion by 2027, growing at a compound annual rate of 32%.
The old playbook for fraud detection (rule-based systems, static verification checks, manual document review) was built for a world where fraud was slower, less scalable, and less convincing. That world is gone. This post examines the three categories of AI-powered fraud that enterprises face today, why traditional defenses fail against them, and what the most effective countermeasures look like.
The threat landscape in 2026
1. Deepfakes
Deepfakes exploit a fundamental human vulnerability: we trust what we see and hear. Generative adversarial networks (GANs), diffusion models, and transformer architectures can now produce synthetic video, audio, and images that are indistinguishable from reality to untrained observers, and increasingly difficult even for trained ones.
The data is stark. Deepfake files grew from roughly 500,000 in 2023 to a projected 8 million in 2025, a 1,500% increase.
Deepfake audio is used to impersonate executives on phone calls and authorize fraudulent transfers (a tactic that has been documented since at least 2019, when a UK energy firm was defrauded of €220,000 via a cloned CEO voice). Deepfake video is used to bypass identity verification liveness checks during account onboarding. Deepfake documents, generated entirely by AI with realistic formatting, logos, and signatures, are used to pass KYC and credit checks. A 2025 survey by the ACFE found that generative AI has enabled the creation of financial documents (pay stubs, bank statements, invoices) that are entirely synthetic, with no original source file or trail.
And the threat is not limited to financial fraud. The U.S. Department of Justice prosecuted a scheme in which North Korean operatives used face-swapping and voice-cloning tools to pass remote job interviews, infiltrating American companies as IT workers and siphoning salaries and corporate data abroad. Experian’s 2026 Future of Fraud Forecast warns that deepfake job candidates will be a top threat this year, as GenAI tools generate hyper-tailored resumes and candidates capable of passing interviews in real time.
2. Synthetic identities
Synthetic identity fraud is one of the most persistent and difficult-to-detect fraud types because it doesn’t start with an obvious compromise. Instead, attackers blend real personal data (often belonging to children, the elderly, or deceased individuals) with fabricated attributes to create identities that can pass standard Know Your Customer checks.
These identities behave like legitimate customers for months or even years, quietly building credit history, accumulating exposure, and then executing “bust-out” schemes where they max out credit lines and vanish. BIIA’s 2026 synthetic identity fraud analysis estimates that up to 80% of all new account fraud is driven by synthetic identities, with U.S. economic losses projected to reach $23 billion by 2030. In the near term, U.S. lenders face over $3.3 billion in exposure tied to synthetic identities on new accounts.
North America experienced a 311% increase in synthetic identity document fraud between Q1 2024 and Q1 2025. AI tools have supercharged production: where creating a convincing synthetic identity once required considerable manual effort, generative models can now produce them at scale, complete with AI-generated ID documents, deepfake selfies for liveness checks, and fabricated financial records.
TELUS Digital’s trust and safety analysis notes that what makes synthetic identity fraud particularly destabilizing is its persistence. Once admitted through high-volume, high-trust workflows like onboarding and credit origination, synthetic identities compound financial exposure before triggering suspicion.
3. AI-powered bots
AI hasn’t just created new fraud categories. It has fundamentally upgraded old ones. Automated bot networks now power credential stuffing, fake account creation, and large-scale account takeover (ATO) with a level of sophistication that renders many traditional defenses useless.
Early bots were rigid, rule-based, and relatively easy to block with CAPTCHAs and IP blocklists. Today’s AI-powered bots mimic human behavior convincingly: scrolling, clicking, moving cursors in natural patterns, and even solving CAPTCHA challenges. They can distribute attacks across thousands of proxy IPs, spoof device fingerprints, and rotate identities faster than velocity-based rules can catch them.
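To make that arms race concrete, here is a toy sketch (all values invented, not any vendor's actual detector) of the kind of trajectory heuristic early bot defenses relied on: scripted cursors tend to travel in near-perfect lines at metronomic intervals, while human movement is curved and irregularly paced.

```python
# Toy illustration of one behavioral signal used against early bots:
# cursor-path "straightness" plus timing regularity. Thresholds are
# made up for demonstration, not drawn from any production system.
import math

def path_straightness(points):
    """Ratio of straight-line distance to total path length (1.0 = a perfect line)."""
    total = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    direct = math.dist(points[0], points[-1])
    return direct / total if total else 1.0

def looks_scripted(points, timestamps, straightness_cutoff=0.98, jitter_cutoff=0.05):
    """Flag trajectories that are both geometrically and temporally too regular."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = sum(intervals) / len(intervals)
    # Coefficient of variation of inter-event timing; near zero means metronomic input.
    jitter = (sum((i - mean) ** 2 for i in intervals) / len(intervals)) ** 0.5 / mean
    return path_straightness(points) > straightness_cutoff and jitter < jitter_cutoff

# A scripted cursor: perfectly straight path, perfectly even 10 ms ticks.
bot_path = [(i, i) for i in range(20)]
bot_times = [i * 0.010 for i in range(20)]
print(looks_scripted(bot_path, bot_times))  # True
```

Modern AI-driven bots defeat exactly this kind of single heuristic by injecting humanlike curvature and timing jitter, which is why the defenses discussed later in this post correlate many signals rather than trusting any one.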
Kasada’s Q2 2025 bot attack trends report documented waves of automated login attempts across banks, retailers, airlines, and ticketing platforms, sometimes thousands per minute, powered by AI-optimized scripts. The consumer impact is visible everywhere: bots scoop up concert tickets, GPU inventory, and limited-edition sneakers within seconds of release, reselling them at inflated prices through fake accounts and proxy networks.
All three categories share a common denominator: speed, scale, and adaptability. AI has compressed the time and cost required to execute fraud at every stage of the attack lifecycle, from identity fabrication to credential harvesting to account exploitation. Static defenses, whether rule-based fraud filters, simple velocity checks, or single-factor verification, were designed for a slower, less adaptive adversary. They are structurally overmatched.
Why traditional defenses struggle
The fundamental problem with static fraud controls is that they evaluate each interaction in isolation, against fixed rules, without adapting to context.
A rule-based system might flag a transaction over $10,000, or block a login after five failed attempts, or require the same MFA challenge for every user regardless of risk. These controls catch known, predictable patterns. But AI-powered fraud doesn’t follow predictable patterns. Deepfakes pass liveness checks because they’re visually indistinguishable from real faces. Synthetic identities pass KYC because they’re built from real data fragments that match against credit bureau records. Bots pass behavioral checks because they’re trained to mimic human interaction patterns.
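To see why these controls are so easy to route around, consider a minimal caricature of a rule-based filter (thresholds are illustrative, not from any real deployment):

```python
# Minimal caricature of a static, rule-based fraud filter. Every threshold
# is fixed and every interaction is judged in isolation -- exactly the
# properties AI-powered fraud exploits. Values are illustrative only.

AMOUNT_LIMIT = 10_000        # flag any transaction above this
MAX_FAILED_LOGINS = 5        # lock the account after this many failures

def review_transaction(amount_usd: float) -> str:
    return "FLAG" if amount_usd > AMOUNT_LIMIT else "ALLOW"

def review_login(failed_attempts: int) -> str:
    return "BLOCK" if failed_attempts >= MAX_FAILED_LOGINS else "ALLOW"

# A fraudster who knows (or probes) the rules simply stays under them:
print(review_transaction(9_900))   # ALLOW -- structured just below the limit
print(review_login(4))             # ALLOW -- credential stuffing at 4 tries per account
```

The rules do exactly what they were written to do and nothing more; an adversary who can observe or probe the thresholds operates just beneath them indefinitely.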
Peer-reviewed research published in the Journal of Financial Security (March 2025) found that traditional rule-based fraud detection systems averaged only 37.8% accuracy in real-world deployments, compared to 87-96.8% for AI-based models. The accuracy gap isn’t incremental. It’s structural. Rule-based systems can only catch what they’re programmed to look for. AI-based systems can learn what fraud looks like across millions of data points and adapt as patterns shift.
What effective AI fraud detection includes in 2026
The most effective fraud programs share a common architecture: they layer multiple detection signals, evaluate risk continuously rather than at a single checkpoint, and use machine learning to adapt as attack patterns evolve. Here’s what that looks like in practice.
Adaptive, risk-based authentication
Instead of applying the same authentication challenge to every user, adaptive systems evaluate contextual signals (device posture, location, network reputation, behavioral patterns, transaction sensitivity) and calibrate the response in real time. A returning user on a recognized device with consistent behavioral patterns sails through. A login from an unfamiliar device, unusual geography, or emulated browser triggers step-up verification or blocks access entirely.
The strength of this approach is signal correlation. No single signal is decisive, but the combination of device intelligence, behavioral biometrics, geolocation, and session metadata creates a multi-dimensional risk profile that is far harder for fraudsters to spoof than any individual factor.
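As a rough sketch of how that correlation works (signal names, weights, and thresholds below are invented for illustration, not any platform's actual policy), a risk engine combines weak signals into a single score, and the score, not any one rule, selects the response:

```python
# Hedged sketch of risk-based step-up logic: several weak signals are
# combined into one score that drives the decision. All names, weights,
# and cutoffs are assumptions made for this example.

SIGNAL_WEIGHTS = {
    "unrecognized_device": 0.30,
    "unusual_geolocation": 0.25,
    "bad_ip_reputation":   0.20,
    "behavioral_mismatch": 0.15,
    "sensitive_action":    0.10,
}

def risk_score(signals: dict[str, bool]) -> float:
    """Sum the weights of every signal that fired (0.0 = no risk observed)."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def decide(signals: dict[str, bool]) -> str:
    score = risk_score(signals)
    if score >= 0.60:
        return "block"
    if score >= 0.30:
        return "step_up_mfa"    # challenge only when context warrants it
    return "allow"

# Returning user, known device, normal behavior: sails through.
print(decide({}))  # allow
# Unfamiliar device from an unusual location: step-up verification.
print(decide({"unrecognized_device": True, "unusual_geolocation": True}))  # step_up_mfa
```

Production systems typically replace the hand-set weights with a trained model, but the architectural point is the same: the decision emerges from the combination, so spoofing any single factor is not enough.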
AI-powered document and biometric verification
Modern identity verification systems use machine learning to analyze thousands of security features per document, detecting AI-generated forgeries that human reviewers miss. AllAboutAI’s fraud detection research reports that AI models achieve 92-98% detection accuracy, compared to human reviewers who correctly identify high-quality deepfakes only 24.5% of the time. Liveness detection combined with AI improves accuracy to 99.1% on real IDs.
For biometric verification, presentation attack detection (PAD) systems analyze depth, texture, and motion to distinguish live faces from printed photos, video replays, and injection attacks. These systems are essential as deepfakes become more convincing; passive liveness checks alone are no longer sufficient.
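As a heavily simplified illustration of the texture dimension of PAD: printed photos and screen replays tend to lose high-frequency detail, so one classical heuristic flags frames with unusually low texture energy. The Laplacian kernel below is a standard teaching device and the cutoff is arbitrary; real PAD systems fuse depth, motion, and challenge-response signals rather than relying on any single check like this.

```python
# Toy single-frame texture check, loosely inspired by presentation attack
# detection. Low Laplacian variance is one (weak) indicator of a flat
# print or replay. Illustrative only; the cutoff is arbitrary.
import numpy as np

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian response: a crude sharpness/texture score."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):          # valid 3x3 convolution, done with slicing
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

def maybe_replay(gray: np.ndarray, cutoff: float = 50.0) -> bool:
    """Flag frames whose texture energy is suspiciously low."""
    return laplacian_variance(gray) < cutoff

rng = np.random.default_rng(0)
live_like = rng.normal(128, 40, (64, 64))    # noisy, textured frame
flat_like = np.full((64, 64), 128.0)         # perfectly flat frame
print(maybe_replay(live_like), maybe_replay(flat_like))  # False True
```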
Behavioral analytics and continuous monitoring
Rather than verifying identity only at the login gate, the strongest systems monitor behavior throughout the entire session. Typing cadence, navigation patterns, mouse movements, and transaction velocity all contribute to a continuous trust score. If behavior deviates from the user’s established baseline mid-session (for example, if a bot takes over after a human authenticates), the system can trigger re-verification or terminate the session.
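A minimal sketch of that idea for a single signal, assuming a hypothetical CadenceMonitor class and an arbitrary z-score cutoff, might look like this:

```python
# Sketch of continuous behavioral monitoring on one signal: typing cadence.
# A per-user baseline of inter-keystroke intervals is kept, and a session
# whose cadence drifts too far from it triggers re-verification. The
# single-signal design and the cutoff are simplifications for illustration.
from statistics import mean, stdev

class CadenceMonitor:
    def __init__(self, baseline_intervals_ms: list[float], z_cutoff: float = 3.0):
        self.mu = mean(baseline_intervals_ms)
        self.sigma = stdev(baseline_intervals_ms)
        self.z_cutoff = z_cutoff

    def check(self, session_intervals_ms: list[float]) -> str:
        """Compare the session's mean cadence against the user's baseline."""
        z = abs(mean(session_intervals_ms) - self.mu) / self.sigma
        return "reverify" if z > self.z_cutoff else "ok"

# Baseline: a human typing with roughly 180 ms between keystrokes.
monitor = CadenceMonitor([170, 185, 200, 175, 190, 160, 182, 178])
print(monitor.check([180, 172, 195]))    # ok -- consistent with the baseline
print(monitor.check([12, 11, 13, 12]))   # reverify -- machine-speed input mid-session
```

A real deployment would track many such signals simultaneously and feed them into the kind of correlated risk score described above, but the mid-session takeover scenario is exactly what this pattern catches.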
AI-powered fraud systems prevented an estimated $25.5 billion in global fraud losses in 2025, delivering 90-98% accuracy across major institutions. Banks deploying these systems report 400-580% ROI within 8-24 months.
Orchestration: connecting the signals
The individual components above (adaptive auth, document verification, biometric checks, behavioral analytics) are each necessary but insufficient alone. What ties them together is orchestration: a centralized platform that ingests signals from every source, applies consistent policy logic, and routes each interaction through the appropriate verification pathway based on assessed risk.
Orchestration is what allows a financial institution to require minimal friction for a low-risk balance check, trigger document re-verification for a high-risk credit application, and block a deepfake-powered account takeover attempt, all within the same platform, governed by the same policy engine, logged in the same audit trail.
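A skeletal sketch of that pattern, with invented pathway names, tiers, and thresholds, might look like this:

```python
# Hedged sketch of an orchestration layer: one policy engine ingests scores
# from every detector, assigns a risk tier, routes the interaction to the
# matching verification pathway, and writes a single audit record. All
# names and thresholds here are assumptions for illustration.
import json
from datetime import datetime, timezone

PATHWAYS = {  # risk tier -> verification pathway
    "low": "frictionless_allow",
    "medium": "document_reverification",
    "high": "block_and_review",
}

def assess(signals: dict[str, float]) -> str:
    """Collapse detector scores (each 0..1) into a single risk tier."""
    worst = max(signals.values(), default=0.0)
    return "high" if worst >= 0.8 else "medium" if worst >= 0.4 else "low"

def orchestrate(event: str, signals: dict[str, float]) -> dict:
    tier = assess(signals)
    record = {
        "event": event,
        "signals": signals,
        "tier": tier,
        "pathway": PATHWAYS[tier],
        "at": datetime.now(timezone.utc).isoformat(),
    }
    print(json.dumps(record))  # stand-in for the shared audit trail
    return record

# A low-risk balance check vs. a login whose liveness detector screams deepfake.
orchestrate("balance_check", {"behavioral": 0.1})
orchestrate("login", {"behavioral": 0.7, "liveness": 0.95})
```

The value is less in any individual routing decision than in the consistency: every detector feeds the same policy logic, and every outcome lands in the same audit trail.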
Veriff’s executives have described this shift as a move from discrete verification checkpoints to a continuous “trust infrastructure,” where identity verification, liveness detection, and authentication merge into one security layer that operates across the full customer lifecycle rather than at a single point in time.