AI is changing cyber fraud in ways that go beyond better phishing emails or more convincing deepfakes. What is different now is the speed, scale, and precision that the technology brings to both attackers and defenders.
Lisa Matthews, an independent AI and cybersecurity consultant who recently served as Ally Financial’s senior director of cybersecurity compliance, told ProSight that banks are facing a threat environment where AI is lowering the cost of attack even as institutions use the same technology to strengthen defenses. Here are some of the practical implications for banks:
Synthetic identity fraud stands out for a reason. Matthews called it “the largest source of financial crime losses today” and said it is being “turbo-charged by frontier models that create convincing personas at an unprecedented scale.” More broadly, she said AI has “fundamentally altered the economics and execution of cybercrime” by collapsing development timelines, lowering skill barriers, and enabling far greater scale.
Attacks can now scale faster and more cheaply. Matthews said AI can generate highly personalized phishing messages, mimic executive tone in business email compromise (BEC) schemes, clone voices, and produce deepfake videos in real time. The bigger shift, she argued, is “time and cost compression.” Attackers can now run “hyper-targeted campaigns globally,” create synthetic identities at scale, and deploy AI-driven conversational agents that operate 24/7. Her bottom line: “machine-speed offense increasingly challenges machine-speed defense.”
Banks are using AI to move from detection to prevention. Matthews said machine-learning models already support near real-time transaction monitoring using behavioral biometrics, device fingerprinting, network link analysis, and anomaly detection. That means suspicious payments can be identified before funds leave the institution. She also pointed to graph analytics and document and facial verification as tools for catching synthetic identity fraud and deepfake onboarding attempts.
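To make that concrete, here is a minimal, illustrative sketch of anomaly-based transaction scoring using scikit-learn’s IsolationForest. It is not any bank’s actual system; the feature set (amount, hour of day, device-risk score, geo-velocity) is an assumption chosen to echo the behavioral-biometrics and device-fingerprinting signals Matthews describes.

```python
# Illustrative sketch: score an incoming payment against historical behavior
# before funds leave the institution. Features and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated history: [amount, hour_of_day, device_risk, geo_velocity]
history = np.column_stack([
    rng.lognormal(4, 1, 5000),   # typical transaction amounts
    rng.integers(0, 24, 5000),   # hour of day
    rng.random(5000),            # device-fingerprint risk score (0-1)
    rng.exponential(1, 5000),    # distance/time between sessions
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# Score a new payment in near real time; negative scores are anomalous.
incoming = np.array([[25000.0, 3, 0.95, 40.0]])  # large amount, 3 a.m., risky device
if model.decision_function(incoming)[0] < 0:
    print("hold payment for review")  # flag before the money moves
else:
    print("release payment")
```

In practice this kind of model would sit alongside the graph analytics and identity-verification checks she mentions, not replace them; the point is that scoring happens before settlement, which is what moves banks from detection to prevention.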
Agentic AI brings speed, but also governance risk. On the upside, banks can use agentic models to monitor transactions continuously, adjust risk thresholds, trigger step-up authentication, and freeze or reroute suspicious payments in real time. On the downside, Matthews warned about “model drift, false-positive cascades, unintended negative customer interactions, and a risk of overreliance on automated actions without sufficient human oversight.” She also flagged adversarial and indirect prompt injection as underappreciated risks.
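As an illustration of that trade-off, here is a minimal Python sketch of a tiered response policy with a cap on automated freezes. The thresholds, names, and escalation rule are hypothetical, not a description of any deployed agent.

```python
# Hypothetical sketch of an agentic fraud-response policy with a
# human-oversight guard. All thresholds and actions are illustrative.
class FraudResponder:
    """Applies tiered responses to upstream risk scores, capping
    automated freezes to guard against false-positive cascades."""

    def __init__(self, step_up=0.6, freeze=0.85, max_auto_freezes=50):
        self.step_up = step_up                  # trigger step-up authentication
        self.freeze = freeze                    # freeze payment pending review
        self.max_auto_freezes = max_auto_freezes
        self.auto_freezes = 0

    def respond(self, tx_id: str, risk_score: float) -> str:
        if risk_score >= self.freeze:
            self.auto_freezes += 1
            if self.auto_freezes > self.max_auto_freezes:
                # Escalate rather than act: keeps a human in the loop once
                # automated actions exceed a sanity bound.
                return f"{tx_id}: escalate to human analyst"
            return f"{tx_id}: freeze payment pending review"
        if risk_score >= self.step_up:
            return f"{tx_id}: trigger step-up authentication"
        return f"{tx_id}: allow"

responder = FraudResponder()
for tx_id, score in [("t1", 0.20), ("t2", 0.70), ("t3", 0.92)]:
    print(responder.respond(tx_id, score))
```

The freeze cap is a deliberately crude stand-in for the oversight controls Matthews calls for: once automation crosses a sanity bound, the agent defers to a person rather than acting on its own.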
Training has to feel current, not annual. Matthews said effective fraud training should move beyond annual compliance modules and instead use realistic simulations, including deepfake examples, BEC exercises, and social engineering drills. For customers, she said the goal is not fear-based messaging but helping them build “realistic and durable skepticism,” while reinforcing pause-and-confirm behaviors.
Her broader message is straightforward: banks are entering “a more volatile and machine-speed fraud environment.” The response, she said, must be “equally adaptive” and grounded in intelligence-converged defenses.