The federal banking agencies’ revised model risk management guidance looks, at first glance, like a simplification story. The new framework is shorter, more principles-based, and less prescriptive than the guidance many banks have been living under since 2011. But the practical effect may be more complicated: institutions now have more room to tailor model risk management to their own profile, but they also face fresh judgment calls about what belongs inside the framework and what does not.
A few practical points stand out:
The new guidance is lighter on prescription, but not on expectations. Legal analyses from Orrick and Troutman Pepper Locke both emphasize that the revised guidance is more risk-based and principles-driven, with less specificity about how banks should carry out their responsibilities. Orrick notes that the guidance “does not set forth enforceable standards or prescriptive requirements,” and Troutman points to the agencies’ statement that non-compliance with the guidance alone will not automatically trigger supervisory criticism. That said, weak model risk management can still lead to findings tied to unsafe or unsound practices or violations of law.
Smaller banks are less squarely in the crosshairs—but not off the hook. Orrick notes that the guidance is now framed as “most relevant to” organizations with more than $30 billion in total assets, a notable shift from the prior framework’s focus on institutions above $1 billion. For many smaller banks, that may ease some of the pressure to map every practice to the old framework. But both Orrick and Troutman make clear that institutions with significant model risk exposure—because of the prevalence, complexity, or nature of their model use—may still find the guidance highly relevant.
Materiality and proportionality now matter even more. One of the clearest shifts, highlighted by both Orrick and Troutman, is the move toward assessing overall model risk by considering inherent risk in the context of materiality. In practice, that means more rigorous controls for models that are highly important, complex, or assumption-driven—and lighter oversight for models that are less material. Orrick also notes that this could allow some banks to reconsider how broadly they apply full model risk management treatment across inventories built under the older framework.
The definition of “model” is narrower. Orrick says the revised definition now requires complexity and explicitly carves out simple arithmetic calculations, deterministic rule-based processes, and software without underlying statistical, economic, or financial theories. That could have real implications for inventories, policies, and validation cycles that were built around a broader interpretation.
Banks still own the risk on vendor models. Troutman underscores that using outside tools does not outsource model risk. Banks are still expected to understand vendor models to the extent possible, monitor performance, and validate customizations.
Generative and agentic AI are not the main story—but they are a possible gap. Orrick notes that the revised guidance expressly excludes generative and agentic AI from scope because the agencies view those technologies as “novel and rapidly evolving.” An article in Forbes takes that point further, arguing that the exclusion leaves the fastest-moving category of model deployment outside the framework examiners use today. Even if that is not the central thrust of the new guidance, it is a practical issue banks evaluating AI cannot ignore.
The takeaway: For smaller institutions especially, the revised guidance may create more room to match oversight to their specific model risk exposure rather than defaulting to a one-size-fits-all framework. For banks reassessing how they handle validation under that approach, the ProSight Model Validation Consortium is one available resource.