When financial institutions and, really, almost anyone referenced “artificial intelligence” from 2023 to 2025, they likely meant “generative,” or “gen,” AI. The public unveiling of ChatGPT and its gen AI content-creating capabilities in November 2022 launched an industry race for productivity, utility, and efficiency that has consumed banks ever since.
But AI wasn’t new to financial services or, indeed, to industry generally. The concept stretches back as far as the 1930s, describing machines that could perform tasks usually associated with intelligent beings. In the 1980s, financial institutions began using early varieties of AI, including rules-based systems and statistical arbitrage engines. In the 2010s, machine learning, a form of AI that uses patterns in data to make predictions, broke through in areas such as predictive risk modeling and quantitative trading.
Over the years, AI has meant many things to many people, depending on its application. For clarity, AI is not mechanization, which typically means replacing human physical labor with machines.
In financial services in 2026, “agentic” AI is increasingly where it’s at. While financial institutions continue to mine value from significant investments in gen AI, the technology’s place in the banking zeitgeist, one might say, is soooo last year.
A study from Google Cloud conducted in the second quarter of 2025 found that more than half of financial institutions (53%) are already using AI agents in production. Just under half (49%) said their institutions will allocate 50% or more of their future AI budget to developing agentic AI.
Attracted by agentic AI’s ability to initiate and complete tasks on its own, banks are building agents to run processes throughout the enterprise to improve efficiency and address strategic priorities. What’s emerging are “digital employees” and “squads” responsible for everything from data management to fraud detection.
Agentic AI and generative AI offer fundamentally different capabilities. Gen AI relies on large language models (LLMs), while agentic AI leans heavily on LLMs but is not strictly dependent on them. Agentic AI plans, decides how to execute tasks, and carries them out, using reasoning and understanding derived from LLMs. It can act autonomously or with minimal human oversight (think of it as a doer), breaking goals into steps and self-correcting based on user and system input.
Gen AI, meanwhile, acts only when prompted. It creates text, video, images, code, and other content using patterns identified in large samples of external data. It depends more heavily on human input and oversight, making it more of a passive creator. Agentic AI might incorporate gen AI prompting into its task sequences, using the outputs to decide its next action.
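The loop described above (break a goal into steps, execute each one, self-correct on failure) can be sketched in a few lines. This is a minimal, hypothetical illustration, not any bank's or vendor's actual implementation; the `plan` and `execute` functions are stand-ins for what would in practice be LLM and tool calls.

```python
# Minimal sketch of an agentic loop: decompose a goal into steps,
# execute each step, and self-correct (retry) when a step fails.
# All names here are illustrative stand-ins, not a real framework.

def plan(goal):
    # Stand-in for an LLM call that breaks a goal into ordered steps.
    return [f"step {i} of '{goal}'" for i in range(1, 4)]

def execute(step):
    # Stand-in for a tool call; returns (success, result).
    return True, f"completed {step}"

def run_agent(goal, max_retries=2):
    log = []
    for step in plan(goal):
        for attempt in range(max_retries + 1):
            ok, result = execute(step)
            if ok:
                log.append(result)
                break
            # Self-correction point: a real agent would re-plan or
            # adjust its approach here before retrying the step.
    return log

print(run_agent("reconcile daily transactions"))
```

The key structural difference from a gen AI call is visible in the outer loop: the agent drives itself from step to step rather than waiting for a human prompt at each stage.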
Given agentic AI’s versatility, banks are using it for hyper-personalized financial guidance, just-in-time service marketing, and tailored advice. In contact centers, virtual assistants and chatbots handle complex customer interactions and route challenging calls. When customers are transferred among agents, AI can capture notes live and append them directly to the customer’s record, so the next agent or channel has the context needed to prepare.
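The handoff pattern above amounts to appending AI-generated call summaries to a shared customer record. A simple sketch, using hypothetical names (no real contact-center platform's API is implied):

```python
from dataclasses import dataclass, field

@dataclass
class CustomerRecord:
    # Hypothetical customer record shared across channels.
    customer_id: str
    notes: list = field(default_factory=list)

def append_call_note(record, summary, channel):
    # An AI-generated summary is appended live, so the next agent
    # or channel sees the context before the handoff completes.
    record.notes.append({"channel": channel, "summary": summary})

record = CustomerRecord("C-1042")
append_call_note(record, "Customer disputes a duplicate charge; identity verified.", "voice")
append_call_note(record, "Escalated to fraud team; case opened.", "chat")
print(record.notes[-1]["summary"])
```

The point is the shared record: each touchpoint writes to it, so no agent starts from zero.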
Increasingly, institutions are using agentic AI to monitor customer behavior, identify sophisticated fraud, and flag potential threats. Further research from Google Cloud shows that AI has already demonstrated the ability to detect two to four times as many confirmed suspicious activities.