Is banking ready to trust artificial intelligence (AI) to teach?
Supplementing learning and development programs for financial institutions with trained AI is about much more than latching on to a fast-advancing megatrend, argues at least one proponent of the technology in a presentation for ProSight. While he and others caution banking leaders to keep human handlers close by, targeted-use AI creates operational efficiencies that scale continuing education and leave plenty of headroom for staff to advance their skills.
Notably, AI for creating and updating coursework and corresponding materials requires personnel, both for training the AI and to step in for human-in-the-loop assistance. That means AI works within, not outside, the high standards and objectives of risk management and compliance frameworks.
Such features are non-negotiable. It’s the ability to customize AI for specific banking tasks, alongside the need for flexible, responsive learning and development programs that reflect today’s fast-changing markets, complex risk dynamics, and shifting regulatory environment, that positions AI as a tool worth the time of compliance, risk, and governance leaders, says Steven Ramirez, CEO of financial services consultancy Beyond the Arc.
Ramirez urges these compliance, risk management, and governance professionals to assume oversight of the data used to train bank-specific AI models. The same teams might also take charge of quality control for the broad-based large language models (LLMs) from mainstream technology companies that banks use, which are likely to provide supplemental information. Hybrid models are common because, on top of customization, quality LLMs can keep pace with the cadence of market news updates; final rulemaking nuances; and alerts affecting compliance, risk, fraud, and other business lines.
In addition to learning and development, AI has a role in document processing and parsing legal text, according to Ramirez. And by leveraging AI for such tasks, bank teams may reallocate human time and advanced skills to analytical, strategic, and other higher-level contributions. Ideally, the responsibility shift reduces burnout, rewards longevity, and contributes to building repeatable and scalable responses that reduce errors. Skills baked into workflows help create durable systems and review processes that reinforce the intention of frameworks. In other words, compliance, risk, and governance staff remain on the roster; they just move up in the batting order.
Ramirez led this discussion as part of a ProSight (formerly BAI) Learning & Development webinar, AI Course Development for Compliance: Accuracy and the ‘Human in the Loop.’ The presentation is available for replay on demand.
As the program title stresses, Ramirez dedicated time to the ongoing interaction between humans and AI tools, both to train AI continuously and to detail personnel roles that encourage staff curiosity about AI and growth in step with the technology. This includes human-in-the-loop work, in which the AI flags complex issues for early human intervention. There is also a role for human-on-the-loop, a marginally less hands-on position at the ready for anomalies, and for human-out-of-the-loop oversight, such as downstream process audits.
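The three oversight modes can be pictured as a simple routing rule. The sketch below is purely illustrative: the function name, the thresholds, and the idea of a confidence score are assumptions for the example, not part of Ramirez’s presentation or any real system.

```python
# Illustrative sketch of three AI-oversight modes (hypothetical names and
# thresholds; not drawn from any actual bank system).

def route_ai_output(confidence: float, is_anomaly: bool) -> str:
    """Decide which oversight mode handles an AI-generated item."""
    if confidence < 0.8:
        # Human-in-the-loop: a reviewer must approve before release.
        return "human-in-the-loop: reviewer approves before release"
    if is_anomaly:
        # Human-on-the-loop: the item proceeds, but a monitor is alerted.
        return "human-on-the-loop: monitor alerted for anomaly"
    # Human-out-of-the-loop: released now, checked in a downstream audit.
    return "human-out-of-the-loop: sampled in periodic audit"
```

In this toy version, low-confidence output waits for a human, anomalous output proceeds under live monitoring, and routine output is only sampled after the fact — the escalating-involvement pattern the webinar describes.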
ProSight editors recommend a rewatch of the program posted by our Learning & Development partners. Below, we highlight the portion of the Ramirez discussion that addressed tips on sourcing quality training inputs and protective features to consider.
Ensure updated and well-sourced AI training and implementation
- Teams can advocate for structured instructional design, validated scenarios, and SME review of AI training
- AI trainers can build into model behavior the controls compliance and risk management teams have historically used, such as review cycles, documentation, and escalation of questions to human oversight
- Training approaches can emphasize building context, maintaining logs, and verifying sources
- Leaders can encourage ongoing AI training education, prompt management, and transparency
- Training requirements can include bank-specific and industry-focused sources such as industry bulletins, legal memos, approved archived compliance guidance, and prior training manuals
Regulatory and legal frameworks
- When training AI, data inputs must emphasize state-level regulatory differences; the status of proposed federal legislation, approvals, and final rulemaking; and other policy developments, including executive orders and legal challenges
- Commitments to AI should include updating model policies to align with evolving regulations, including when an institution enforces policy that exceeds supervisory requirements as a best practice
- Teams should update AI compliance in line with privacy laws like GLBA and consumer protection laws like FCRA
Risk, accuracy, and trust
- AI training should address risks like data bias and human bias, as well as hallucinations
- Training should avoid outdated data and mitigate bias through diverse data collection
- Objectives can aim to build trust and mitigate risks through structured review, validation, explainability, and logging AI training progress
By: Rachel Koning Beals