Fraud Model Validation Gets Tougher in the AI Era

Fraud model validation was never easy. But Chandrakant Maheshwari, lead model validator at Flagstar Bank, told ProSight that AI is making the job even more demanding—not just because banks are using new tools, but because fraudsters are too. 

Recently, ProSight spoke with Maheshwari about fraud trends, validation standards, AI adoption, and the growing complications around model oversight. His core message: validation has to get more rigorous as both fraud patterns and model architectures become more complex. 
 
Some key points: 

The hardest fraud is the transaction that looks legitimate until it’s too late. Maheshwari said the toughest cases today are account takeover, authorized push payment fraud, first-party fraud, and synthetic identity fraud. Why? They often resemble legitimate behavior, and “fraud risk manifests in minutes to days.” As he put it, “By the time a pattern is confirmed, the loss has already occurred.” 

Fraud and AML models are built for different jobs. One of Maheshwari’s sharper distinctions is that “money launderers are internal customers,” while fraudsters are usually outside actors targeting the bank or its customers. That difference shapes everything from model architecture to validation. AML monitoring is long-horizon and behavioral; fraud monitoring is fast and transactional. It also affects testing: “Fraud models can be back-tested against confirmed loss events. AML models cannot.” 
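The back-testing distinction can be made concrete. Below is a minimal sketch of what checking a fraud model's historical scores against confirmed loss events might look like; the field names, the sample data, and the 0.8 alert threshold are illustrative assumptions, not any institution's actual pipeline.

```python
# Minimal back-test sketch: compare historical model scores against
# confirmed fraud loss events. Field names and the 0.8 threshold are
# illustrative assumptions, not a real bank's pipeline.

def backtest(transactions, threshold=0.8):
    """Return precision/recall of the model's alerts versus confirmed losses."""
    tp = fp = fn = 0
    for txn in transactions:
        alerted = txn["score"] >= threshold
        fraud = txn["confirmed_fraud"]  # ground truth from loss records
        if alerted and fraud:
            tp += 1
        elif alerted and not fraud:
            fp += 1
        elif fraud:
            fn += 1
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": precision, "recall": recall}

history = [
    {"score": 0.95, "confirmed_fraud": True},
    {"score": 0.40, "confirmed_fraud": False},
    {"score": 0.85, "confirmed_fraud": False},  # false alarm
    {"score": 0.70, "confirmed_fraud": True},   # a miss at threshold 0.8
]
print(backtest(history))  # {'precision': 0.5, 'recall': 0.5}
```

This is exactly the step AML models cannot take, since money-laundering "ground truth" is rarely confirmed the way a charge-off or reimbursed loss is.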

AI is helping banks—but it expands the governance burden. Maheshwari said machine learning has been part of fraud detection for years. What is newer is the use of generative AI to draft suspicious activity report narratives, summarize alert context, and support analyst decisions. Agentic AI goes further by retrieving data, cross-referencing typologies, and generating recommendations. “The productivity gains are real—but so are the governance risks,” he said. 

Validation has to start with the risk assessment. Maheshwari said every model assumption should trace back to the institution’s documented fraud risk profile. From there, validators should test conceptual soundness, data quality, and ongoing performance. For fraud, the training data itself deserves scrutiny because confirmed fraud cases are limited and may already be outdated. “Threshold validation, segment-level performance analysis, and fairness audits across customer demographics are not optional,” he said. “They are the baseline.” 
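One of those baseline checks, segment-level performance analysis, can be sketched in a few lines. The segment labels, sample records, and 0.10 disparity tolerance below are assumptions for illustration only; a real validation would use the institution's own segmentation and statistically grounded tolerances.

```python
from collections import defaultdict

# Illustrative sketch of segment-level performance analysis: compute
# per-segment alert recall on confirmed fraud and flag segments whose
# recall diverges from the overall rate by more than a chosen tolerance.
# Segment labels, data, and the 0.10 tolerance are assumptions.

def segment_recall(records, tolerance=0.10):
    caught = defaultdict(int)   # confirmed fraud the model alerted on
    missed = defaultdict(int)   # confirmed fraud the model missed
    for r in records:
        if r["confirmed_fraud"]:
            bucket = caught if r["alerted"] else missed
            bucket[r["segment"]] += 1
    total_caught = sum(caught.values())
    total_fraud = total_caught + sum(missed.values())
    overall = total_caught / total_fraud if total_fraud else 0.0
    report = {}
    for seg in set(caught) | set(missed):
        seg_total = caught[seg] + missed[seg]
        recall = caught[seg] / seg_total
        report[seg] = {"recall": recall,
                       "flagged": abs(recall - overall) > tolerance}
    return overall, report

records = [
    {"segment": "retail",   "alerted": True,  "confirmed_fraud": True},
    {"segment": "retail",   "alerted": True,  "confirmed_fraud": True},
    {"segment": "retail",   "alerted": False, "confirmed_fraud": True},
    {"segment": "business", "alerted": False, "confirmed_fraud": True},
]
overall, report = segment_recall(records)
print(overall, report)
```

The same structure extends naturally to the fairness audits Maheshwari describes: replace product segments with customer demographic groups and audit the disparity there too.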

Banks should not frame model strategy as a choice between buying and building. Maheshwari's recommendation is a hybrid model strategy. Vendor models offer breadth and visibility into attack patterns a single bank may never see on its own. Internal models add precision tied to the institution's own customers, products, and risk profile. But he added an important warning: both layers must be validated, and so must the interaction between them.
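Why the interaction itself needs validation becomes clear in even a toy layered setup. In the sketch below, the rule that combines a vendor score with an internal score is itself a model component with its own assumptions; the blend weights and escalation threshold are illustrative, not a recommended configuration.

```python
# Toy sketch of a hybrid scoring layer: a vendor model score and an
# internal model score are blended, so the combination rule becomes a
# model component that needs validation in its own right. The weight
# and escalation threshold are illustrative assumptions.

def combined_score(vendor_score, internal_score, w_vendor=0.6):
    """Weighted blend of vendor and internal model scores."""
    return w_vendor * vendor_score + (1 - w_vendor) * internal_score

def decision(vendor_score, internal_score, escalate_at=0.75):
    score = combined_score(vendor_score, internal_score)
    return "escalate" if score >= escalate_at else "allow"

print(decision(0.9, 0.4))  # blend is 0.70 -> allow
print(decision(0.9, 0.6))  # blend is 0.78 -> escalate
```

Note how the second case escalates only because of the blend: validating the vendor model and the internal model separately would never surface a bad choice of weight or threshold in this layer.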

Looking ahead to 2026, the trend he is watching most closely is fairness and bias auditing moving to the front line, alongside closer integration of fraud risk assessment into model governance.
