For most banks, a big part of the AI conversation over the past few years has centered on generative tools: systems that create text, code, or summaries when prompted. But there is no denying the buzz around agentic AI, systems designed not just to generate outputs but to plan, act, and complete tasks. Several institutions are already experimenting with agents, and a number are using them in production. Heading into 2026, proponents believe agentic AI, not generative AI, carries the bigger operational implications.
The distinction matters.
Generative AI is reactive. It “triggers only when prompted,” according to a ProSight explainer, and depends heavily on human inputs and oversight. Agentic AI, by contrast, “plans, performs, and decides how to execute tasks” and can operate autonomously or with minimal supervision.
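The contrast can be made concrete with a toy sketch. This is not any real AI framework, and every function name here is hypothetical; it only illustrates the structural difference the ProSight explainer describes, one output per prompt versus a loop that plans tasks and works through them on its own.

```python
# Toy contrast: a generative call runs once per prompt and stops,
# while an agentic loop plans sub-tasks and executes until done.
# All names are hypothetical illustrations, not a real library.

def generative_step(prompt: str) -> str:
    """Reactive: triggers only when prompted, produces one output."""
    return f"draft response to: {prompt}"

def agentic_run(goal: str) -> list[str]:
    """Plans sub-tasks, then executes each, recording what it did."""
    plan = [f"{goal}: step {i}" for i in range(1, 4)]  # the agent's plan
    log = []
    while plan:                       # acts autonomously until the plan is done
        task = plan.pop(0)
        log.append(f"executed {task}")
    return log

print(generative_step("summarize the filing"))
print(agentic_run("reconcile accounts"))
```

The point of the sketch is the control flow: the human drives every generative call, while the agentic loop decides for itself when the goal is complete.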
Adoption is already underway. A Google Cloud study cited by ProSight found that 53% of financial institutions are using AI agents in production, and 49% expect to allocate at least half of future AI budgets to agentic AI.
Banks are drawn to agents' ability to run end-to-end processes—what some describe as "digital employees" or agent "squads." Use cases now extend well beyond experimentation, from data management and fraud detection to customer service handoffs where AI captures notes live and appends them directly to the customer record.
As Driss Temsamani, head of digital at Citi, writes in his recent book “The Agentic Bank”—as quoted in The Financial Brand—“Agentic banking is not a question of if, but when.”
Agentic AI doesn’t just speed up tasks—it changes how work is structured. In Temsamani’s framing, human roles shift from execution to orchestration: “The cadence of decision-making has shifted from episodic to continuous, from reactive to anticipatory.”
One implication for risk and finance leaders: stress testing becomes continuous, not periodic. “Stress testing ceases to be a quarterly ritual; it becomes a live reflex,” Temsamani writes.
Agentic AI raises the governance stakes. Temsamani is blunt: "In finance, trust depends on clarity." Institutions must be able to explain what an agent did and why; in his words, "black boxes are unacceptable."
He warns that design quality matters: “A well-designed agent acts like a trusted deputy. A poorly defined one behaves like an intern with a master key.”
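What a "trusted deputy" rather than "an intern with a master key" might look like in practice can be sketched as an agent wrapper with explicit limits and an audit trail. This is a hypothetical illustration of the transparency principle, not a production pattern; the class, permissions, and actions are all invented for the example.

```python
# Hypothetical sketch: an agent whose every action is scoped and logged,
# so reviewers can always answer "what did it do, and why?"
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentAction:
    action: str
    rationale: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditedAgent:
    def __init__(self, name: str, allowed: set[str]):
        self.name = name
        self.allowed = allowed            # explicit limits, not a master key
        self.trail: list[AgentAction] = []  # the explainability record

    def act(self, action: str, rationale: str) -> bool:
        if action not in self.allowed:    # refuse out-of-scope actions
            self.trail.append(AgentAction(f"REFUSED {action}", rationale))
            return False
        self.trail.append(AgentAction(action, rationale))
        return True

agent = AuditedAgent("kyc-review", allowed={"flag_account"})
agent.act("flag_account", "velocity anomaly on new account")
agent.act("close_account", "outside this agent's mandate")
for entry in agent.trail:
    print(entry.action, "|", entry.rationale)
```

The design choice is the point: the allow-list defines the deputy's mandate up front, and the trail makes every decision, including refusals, inspectable after the fact.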
The takeaway for banks is not to chase novelty, but to pair agentic capability with transparency, explainability, and clear limits. As Temsamani puts it, this is “not automation for its own sake,” but “intelligence designed for collaboration, a co-pilot that knows when to lead and when to yield.”
For banks thinking about how to operationalize AI, ProSight’s AI Use Case Template offers a structured, risk-based framework for documenting assumptions, stakeholders, risks, vendor diligence, and success metrics before implementation.