Cyber Fraud and AI: How Banks Can Navigate a Tangled Minefield

The proliferation of AI and cyber risk is driving huge fraud losses at banks. At the same time, however, banks are increasingly deploying generative AI and agentic AI solutions to catch and prevent fraud.  

“The technologies reshaping the threat environment also represent incredibly powerful tools for banks. For example, AI-driven behavioral analytics can detect anomalies in transaction patterns, authentication sequences, and identity signals more precisely and rapidly than a human review today,” said Lisa Matthews, an independent AI and cybersecurity consultant who recently served as Ally Financial’s Senior Director of Cybersecurity Compliance. 

Certainly, though, banks are facing an uphill battle. The Deloitte Center for Financial Services predicts that AI fraud losses will skyrocket to $40 billion in the U.S. by 2027, up from $12.3 billion in 2023. What’s more, 60% of financial institutions and financial technology companies reported an increase in fraud last year (with more than one-third claiming direct fraud losses of more than $1 million), according to Alloy’s 2025 State of Fraud report. 

While there are many different types of complex, AI-driven fraud, including ransomware, deepfakes, account takeovers, prompt injection attacks and business email compromise (BEC) scams, one stands out as perhaps the biggest challenge for banks. “Synthetic identity fraud is the largest source of financial crime losses today and is turbo-charged by frontier models that create convincing personas at an unprecedented scale,” said Matthews, who previously led the Insider Threat Program at Wells Fargo.

Chief risk officers participating in ProSight’s 2026 CRO Outlook Survey cited cyber and technology risk as their number one risk, followed closely by AI-related risks. ProSight recently had an in-depth conversation with Matthews—a former member of both the AI Steering Committee and the Insider Threat Working Group at the Financial Services Information Sharing and Analysis Center (FS-ISAC)—about the duality of AI and the interconnection between AI and cyber fraud. 

ProSight: The rise of AI has democratized cybercrime, opening it up to all types of bad actors. How has this technology changed the game for cybercriminals, and are there any emerging AI-driven scams that you find particularly concerning?

Matthews: AI has fundamentally altered the economics and execution of cybercrime by collapsing development timelines, significantly lowering skill barriers, and enabling an unprecedented scale. What once required weeks of reconnaissance, scripting, and coordination can now be executed in hours.  

AI generates highly personalized phishing messages, mimics executive tone in BEC schemes, and even clones voices or produces deepfake videos in real time to bypass traditional verification controls. For example, banks have used brand indicators for message identification (BIMI) for several years to help customers feel secure when receiving emails. While BIMI still helps reduce basic domain spoofing when paired with strict DMARC enforcement, it does little to stop modern AI-driven phishing, lookalike domains, compromised legitimate accounts or multi-channel social engineering. It’s helpful, but no longer a complete control.
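A strict DMARC policy is something the receiving side can check mechanically. As a rough illustration of what “strict enforcement” means in the BIMI context, the sketch below parses a DMARC TXT record string and applies BIMI’s published enforcement requirement (p=reject, or p=quarantine at pct=100). The record strings are made-up examples, not live DNS lookups.

```python
# Minimal sketch: parse a DMARC TXT record and judge whether enforcement
# is strict enough for BIMI to be meaningful. Illustrative only.

def parse_dmarc(record: str) -> dict:
    """Split a record like 'v=DMARC1; p=reject; pct=100' into tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip().lower()] = value.strip().lower()
    return tags

def is_strict_enforcement(record: str) -> bool:
    """True if the policy satisfies BIMI's enforcement requirement."""
    tags = parse_dmarc(record)
    if tags.get("v") != "dmarc1":
        return False
    policy = tags.get("p", "none")
    pct = int(tags.get("pct", "100"))  # pct defaults to 100 per RFC 7489
    if policy == "reject":
        return True
    return policy == "quarantine" and pct == 100

print(is_strict_enforcement("v=DMARC1; p=reject"))             # strict
print(is_strict_enforcement("v=DMARC1; p=quarantine; pct=50"))  # partial rollout
print(is_strict_enforcement("v=DMARC1; p=none"))                # monitoring only
```

As Matthews notes, even a fully strict policy only addresses exact-domain spoofing; lookalike domains and compromised legitimate accounts sail past it.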

The most concerning shift is not one single tactic, but the time and cost compression. Attackers can now run hyper-targeted campaigns globally, create synthetic identities at a broad scale, and sustain emotionally manipulative scams through AI-driven conversational agents operating 24/7.  

As a result, organizations face a new environment where machine-speed offense increasingly challenges machine-speed defense, fundamentally reshaping fraud risk and systemic exposure. 

ProSight: On the plus side, AI is also being used today for fraud mitigation. Can you describe the key ways that banks are now employing the technology to predict and prevent fraud? 

Matthews: AI offers banks the opportunity to shift from reactive detection to predictive prevention. Machine-learning models now power near-real-time transaction monitoring, analyzing behavioral biometrics, device fingerprints, network link structure, and transaction anomalies. This means a bank can identify suspicious payments before funds leave the institution. AI also enables advanced synthetic identity controls that use graph analytics alongside AI-based document and facial verification to detect identity stitching and deepfake onboarding attempts.
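As a toy illustration of the anomaly-detection piece (not any bank’s actual model), the sketch below scores a new transaction amount against a customer’s spending history using a simple standard score. Production systems combine many more signals, such as device fingerprints and session behavior, in learned models.

```python
# Illustrative sketch: flag a transaction as anomalous when its amount
# deviates sharply from the customer's historical baseline.
from statistics import mean, stdev

def amount_zscore(history: list[float], amount: float) -> float:
    """Standard score of a new amount against the customer's history."""
    mu = mean(history)
    sigma = stdev(history) or 1.0  # guard against a zero-variance history
    return (amount - mu) / sigma

def is_suspicious(history: list[float], amount: float,
                  threshold: float = 3.0) -> bool:
    """True when the amount sits more than `threshold` deviations out."""
    return abs(amount_zscore(history, amount)) > threshold

history = [42.0, 55.0, 38.0, 61.0, 47.0]   # typical card spend
print(is_suspicious(history, 52.0))    # in-pattern purchase
print(is_suspicious(history, 4800.0))  # large outlier -> step-up auth
```

The point of scoring before settlement is exactly what Matthews describes: the decision happens while the payment can still be held.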

Moreover, AI enhances a bank’s threat intelligence by ingesting open-source data, dark-web signals, consortium feeds, and internal fraud telemetry to identify emerging attack patterns earlier, with automated incident summarization accelerating analyst response and regulatory reporting. Banks are moving toward this type of intelligence convergence, fusing cyber threat intelligence, fraud analytics, AML signals, and customer behavioral data into more accurate, unified risk models. This allows these systems to correlate signals across traditional silos and proactively disrupt fraud campaigns at scale, as opposed to investigating each alert in isolation after a loss occurs. 
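One way to picture this intelligence convergence is a weighted blend of normalized per-silo scores into a single risk number, rather than alerting on each silo in isolation. The signal names and weights below are purely illustrative assumptions, not values from any real model.

```python
# Hedged sketch of "intelligence convergence": fuse cyber, fraud, AML,
# and behavioral signals (each normalized to 0..1) into one risk score.
SIGNAL_WEIGHTS = {
    "cyber_threat_intel": 0.30,  # e.g. device seen in a dark-web feed
    "fraud_analytics":    0.30,  # e.g. transaction anomaly score
    "aml_signal":         0.20,  # e.g. proximity to known mule accounts
    "behavioral":         0.20,  # e.g. atypical session biometrics
}

def unified_risk_score(signals: dict[str, float]) -> float:
    """Weighted average over whatever signals are present for this event."""
    present = {k: v for k, v in signals.items() if k in SIGNAL_WEIGHTS}
    if not present:
        return 0.0
    total_weight = sum(SIGNAL_WEIGHTS[k] for k in present)
    return sum(SIGNAL_WEIGHTS[k] * v for k, v in present.items()) / total_weight

event = {"cyber_threat_intel": 0.9, "fraud_analytics": 0.7, "behavioral": 0.2}
print(round(unified_risk_score(event), 3))
```

Renormalizing over the signals actually present lets the same scorer handle events where a silo has no data, which is the common case when feeds arrive at different cadences.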

ProSight: One of the challenges that banks seem set to face in 2026 is the increased use of agentic AI—a  more autonomous version of the technology capable of decision-making. This will likely be deployed by both cybercriminals and banks. What do you see as the pros and cons of agentic AI from an online fraud perspective? 

Matthews: Agentic AI introduces both powerful defensive advantages and serious new fraud risks, because it moves systems from passive analysis to autonomous decision-making at machine speed. On the positive side, banks can deploy agentic models to continuously monitor transactions, dynamically adjust risk thresholds, orchestrate step-up authentication, and even freeze or reroute suspicious payments in real time by correlating cyber, fraud, AML and behavioral signals across channels, which can dramatically reduce response time and the corresponding fraud losses.
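A graduated response policy of the kind described, with autonomous blocking reserved for the highest scores and routed to human review, might look like the minimal sketch below. The thresholds are invented for illustration.

```python
# Toy sketch: map a 0..1 risk score to a graduated agentic response,
# keeping a human-review gate on the most disruptive action.
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    STEP_UP_AUTH = "step_up_auth"          # challenge, don't block
    FREEZE_AND_REVIEW = "freeze_and_review"  # block, then analyst confirms

def decide(risk_score: float) -> Action:
    """Graduated policy: only high-risk events trigger an autonomous freeze."""
    if risk_score >= 0.85:
        return Action.FREEZE_AND_REVIEW
    if risk_score >= 0.50:
        return Action.STEP_UP_AUTH
    return Action.ALLOW

print(decide(0.92).value)
print(decide(0.60).value)
print(decide(0.10).value)
```

Queuing the freeze for analyst confirmation rather than leaving it fully autonomous is one concrete form of the human-override mechanism Matthews calls for below.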

However, this same autonomy creates real governance challenges: model drift, false-positive cascades, unintended negative customer interactions, and a risk of overreliance on automated actions without sufficient human oversight. More concerning is the evolution toward increasingly autonomous attack tooling that will continually test control boundaries and craft more personalized phishing attempts based on real-time feedback from transaction patterns. For example, as banks deploy frontier-model or LLM-based tools and agentic systems, adversarial prompt injections—where malicious inputs manipulate the model into bypassing controls or leaking sensitive data—are currently an underappreciated risk.

Indirect prompt injection is a related and even less appreciated risk, with limited detection capability today. This is when an AI agent reads a fraudulent email or website and “takes orders” from that external data rather than from the bank’s internal instructions. The scalability of these tactics in coordinated fraud attempts, combined with minimal human intervention, accelerates the requirement to implement clearly defined, real-time model governance—as well as human override mechanisms.
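A first line of defense against indirect injection is to treat all external content as data: scan it for instruction-like phrasing before an agent ingests it, and quarantine what passes inside explicit delimiters that the system prompt tells the model never to obey. The pattern list below is a toy heuristic for illustration, not a complete control.

```python
# Illustrative guard: scan external content (an email body, a scraped page)
# for instruction-like phrases before it reaches an agent, and wrap clean
# content in a delimiter the model is told to treat as data only.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (your|the) (system prompt|instructions)",
    r"you are now",
    r"reveal (your )?(system prompt|secrets|credentials)",
]

def looks_injected(text: str) -> bool:
    """Crude heuristic: does the text try to issue instructions?"""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def wrap_untrusted(text: str) -> str:
    """Quarantine external content so the model treats it as data, not orders."""
    if looks_injected(text):
        raise ValueError("possible prompt injection; route to human review")
    return f"<untrusted_data>\n{text}\n</untrusted_data>"

print(wrap_untrusted("Invoice #4471 is attached, due March 3."))
```

Keyword screens like this are easily evaded, which is why Matthews pairs them with real-time model governance and human override rather than relying on detection alone.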

ProSight: Training seems like an absolute must in the battle against fraud. What advice do you have for banks that need to educate both their employees and their customers about cyber fraud, including AI-driven scams? 

Matthews: Since AI is dynamic, effective fraud training should move beyond annual compliance modules and become continual, behavior-focused education tailored to today’s AI-driven threat landscape. Banks should consider using realistic simulations in these training scenarios, including deepfake voice and image examples, BEC exercises, and dynamic social engineering drills. This training not only reinforces verification requirements and discipline but also clarifies escalation paths, reinforcing that urgency and authority are common manipulation tactics.

For customers, education should be positive, direct, and repeated. Emerging scams, such as AI voice impersonation and synthetic identity fraud, can be explained to customers in plain language. The goal is not fear-based messaging, but helping customers build realistic and durable skepticism, reinforcing pause-and-confirm behaviors.

ProSight: As AI becomes more prevalent, do you think we’ll see any significant changes in cybercrime regulation this year? 

Matthews: As AI becomes more embedded in both fraud and financial services, we will hopefully see incremental but meaningful regulatory shifts this year. I anticipate a continued focus on AI-enabled impersonation, deepfake misuse, and enhanced liability for technology-assisted scams rather than sweeping standalone “AI crime” laws.  

In the U.S., this will likely take the form of stronger penalties for AI-assisted fraud, clearer disclosure requirements around synthetic media, and expanded enforcement authority. In Europe and the UK, frameworks already exist, such as the EU AI Act, DORA, and MiCA. These regulations, combined with evolving payment-scam reimbursement regimes, will increasingly be applied to AI-driven fraud scenarios through supervisory guidance and enforcement actions. This year, the EU AI Act will be in its high-stakes implementation phase, and banks are specifically classified as “high-risk” AI users under Annex III of that act.

The more significant shift may not be new statutes, but a heightened expectation around model governance, third-party oversight, operational resilience, and accountability. This will effectively embed AI-related fraud risk into mainstream cyber, financial crime, and consumer protection regulation, rather than treating it as a separate category. 

ProSight: What’s on the cybercrime horizon? Do you expect to see a surge in online fraud in any specific areas, and are there any emerging threats (AI or otherwise) that banks need to keep a particularly close eye on in 2026? 

Matthews: Banks should expect fraud pressure to intensify at the intersection of economic stress and rapidly maturing AI-enabled deception. Periods of rising unemployment, consumer debt strain, or market volatility historically correlate with increases in account takeover, synthetic identity fraud, first-party fraud, and mule activity. AI dramatically amplifies each of these by lowering the barrier to entry and making attacks easier to scale.

We are likely to see growth in AI-powered impersonation scams (voice and video deepfakes targeting treasury and high-net-worth clients), automated synthetic identity rings exploiting digital onboarding, and machine-driven probing of bank chatbots and agentic systems to identify control gaps. At the same time, insider risk may rise, as financial pressure increases employee susceptibility to coercion or collusion.  

While not yet an immediate threat, the risk that quantum computing poses to current encryption standards like RSA and ECC is real enough that NIST finalized post-quantum cryptography (PQC) standards in 2024. Banks holding long-dated data are already vulnerable to “harvest now, decrypt later” (HNDL) attacks, and many are now moving from the inventory phase to the implementation phase of PQC-ready VPNs and Transport Layer Security (TLS) certificates.
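The HNDL urgency is often framed with Mosca’s inequality: if the years the data must stay confidential (X) plus the years a migration takes (Y) exceed the years until a cryptographically relevant quantum computer exists (Z), the exposure has already begun. A one-line sketch follows; the horizon used is an assumed planning figure, not a prediction.

```python
# Sketch of Mosca's inequality for prioritizing PQC migration:
# exposure exists today whenever X + Y > Z.

def hndl_exposed(shelf_life_years: float,
                 migration_years: float,
                 quantum_horizon_years: float) -> bool:
    """True when X + Y > Z, i.e. harvest-now-decrypt-later risk is live."""
    return shelf_life_years + migration_years > quantum_horizon_years

# Mortgage records kept 30 years, 5-year migration, assumed 15-year horizon:
print(hndl_exposed(30, 5, 15))   # long-dated data is exposed today
# Short-lived session tokens under the same assumptions:
print(hndl_exposed(0.01, 5, 15))
```

This is why long-dated records drive migration priority even while the quantum threat itself remains years out.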

The emerging threat is not a single tactic, but a convergence. Economic incentives drive more participants into fraud ecosystems. New autonomous tools enable adaptive, low-cost, high-scale attacks to create a more volatile and machine-speed fraud environment. Banks must counter with equally adaptive, intelligence-converged defenses. 

By: Robert Sales
