When Money Moves Itself: Why Agent‑Readable Banks Still Need Human Guardians

AI agents are already reading invoices, routing approvals, and moving money in the background. Nikolay Denisenko, co-founder and CTO of Brighty, on how agent-native banking infrastructure works — and why auditability, not autonomy, is the harder design problem.


A few years ago, “going digital” in banking meant a cleaner mobile app and fewer branches. Now, a growing number of finance teams are discovering a different future: they barely log in at all. Instead, AI agents sit between them and their banks—reading invoices, moving money, routing approvals, and generating reports in the background.

From that vantage point, traditional interfaces start to look slow and strangely manual. “Operating a bank account through an agent is an order of magnitude faster and more convenient than any mobile app,” says Nikolay Denisenko, co-founder and CTO of Brighty, a European digital finance platform that recently launched AI‑operable banking infrastructure for corporate clients. The catch is that when money moves itself, new questions emerge: how much should be delegated, who is accountable when things go wrong, and which banks even show up in an AI‑mediated world.

Industry forecasts suggest this is not a thought experiment. Agentic AI is moving from proofs of concept to production in financial services, with research indicating that more than 40 per cent of finance teams are on track to use AI agents by 2026, driven by use cases in compliance, onboarding, fraud detection, and back‑office automation. Yet most firms are still feeling their way through the governance and operating‑model implications. Denisenko’s answers offer a window into how one agent‑native banking platform is thinking about human control, auditability, and the race to become “readable” by machines.

How Much Can We Safely Hand Over?

Ask Denisenko where the boundary lies between human and agent today, and he starts with a simple litmus test: if a company employs someone to receive invoices from a director and manually process payments, that workflow is already ready for delegation.

“Any company that today employs someone to receive invoices from a director and manually process payments can—and should—use an agent for that. That’s the floor,” he argues. The ceiling, in his view, is full automation of invoice issuance, approval routing, and the surrounding back‑office bureaucracy.


This aligns with external benchmarks suggesting that 40–60 per cent of transactional finance tasks—invoice capture, matching, routine payments, and reconciliations—are early candidates for agentic automation, provided strong controls are in place. But Denisenko is quick to draw a line between what agents can do technically and what they should be allowed to do organisationally.

“In practice, a significant portion of the transactional workflow is ready for delegation now. The critical constraint isn’t capability—it’s approval context. Agents must always surface the payment for user confirmation before execution,” he says. That framing echoes a broader industry consensus that CFOs and controllers will remain the final decision‑makers, even as they offload much of the mechanical work to AI.

Where Denisenko sees agents really earning their keep is not just in data entry, but in fund allocation and exception handling. “When an account is short, a good agent doesn’t just fail—it tells you where the money is and asks if you want to move it. That’s the real value. The human stays as the final trigger; the agent is the accelerator.”
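The shortfall pattern Denisenko describes can be sketched in a few lines. This is a minimal illustration, not Brighty's implementation: the account names, field names, and proposal shape are all hypothetical, and the key property is that the agent only ever returns a proposal flagged for human confirmation, never an executed payment.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    balance: float  # in account currency

def propose_payment(accounts: dict[str, Account], source: str, amount: float) -> dict:
    """Return a proposed action; nothing executes without human confirmation."""
    src = accounts[source]
    if src.balance >= amount:
        return {"action": "pay", "from": source, "amount": amount,
                "requires_confirmation": True}
    # Shortfall: instead of failing, locate funds and propose a top-up.
    shortfall = amount - src.balance
    candidates = [a.name for a in accounts.values()
                  if a.name != source and a.balance >= shortfall]
    return {"action": "top_up_then_pay", "from": source, "amount": amount,
            "shortfall": shortfall, "funding_candidates": candidates,
            "requires_confirmation": True}

accounts = {"ops": Account("ops", 800.0), "reserve": Account("reserve", 5000.0)}
proposal = propose_payment(accounts, "ops", 1200.0)
# The agent surfaces this proposal; the human remains the final trigger.
```

The design choice is that a shortfall produces a richer proposal rather than an error, which is exactly the "accelerator, not decider" division of labour described above.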

It is not enough for an agent to know what it is doing; it must also know under whose authority and under which rules it is operating.

Making Finance Legible to Machines

If agents are the accelerators, data is the fuel—and not just any data. Agentic AI relies on richly structured, always‑fresh context to make safe decisions without constantly pinging humans for clarification. For Denisenko, the most important design lesson has been to treat approval context as a first‑class citizen.

“The single most important lesson is not losing approval context across the agent chain. Every decision node needs to carry the full provenance: who initiated, what policy applied, what account state existed at the moment of action,” he says. In other words, it is not enough for an agent to know what it is doing; it must also know under whose authority and under which rules it is operating.

For payments in particular, Brighty elevates FX rate provenance and counterparty validation to primary fields in the context, rather than background checks that happen elsewhere. “For payments specifically, FX rate provenance and counterparty validation need to be first‑class fields in the context—not inferred, not fetched on‑demand.” Agents that have to pause and ask basic questions about which vendor to pay or whether there are sufficient funds in the right currency will simply get turned off by finance teams.
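What "first-class fields in the context" might look like can be sketched as a frozen record that travels with every decision node. The field names here are illustrative assumptions, not Brighty's schema; the point is that provenance (initiator, policy, FX rate source) and pre-resolved checks (counterparty validation, balance) are carried in the context rather than fetched on demand.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class PaymentContext:
    initiator: str               # who initiated (user or upstream agent)
    policy_id: str               # which approval policy applied
    counterparty_id: str         # counterparty, resolved up front
    counterparty_verified: bool  # pre-resolved flag, not inferred later
    fx_rate: float               # the rate actually applied
    fx_rate_source: str          # provenance of that rate
    account_balance: float       # account state at the moment of action
    captured_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Hypothetical values for illustration only.
ctx = PaymentContext(
    initiator="director@example.com", policy_id="invoices-under-10k",
    counterparty_id="vendor-0042", counterparty_verified=True,
    fx_rate=1.0832, fx_rate_source="ecb-daily-fix", account_balance=25_000.0)
```

Freezing the record means no downstream step can quietly mutate the context the decision was made under.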

That intuition tracks with what early adopters are discovering. Studies of agentic deployments in finance emphasise that pre‑resolved compliance flags, clear policy objects, and unified account state are what separate reliable agents from “expensive liabilities” that still require human babysitting. As Denisenko puts it: “Agents that have to stop and ask for clarification on counterparty identity or available balance are agents that will be abandoned. Structured, always‑fresh account state and pre‑resolved compliance flags are what separate reliable agents from expensive liabilities.”


Auditability: Designing for the Bad Day

Autonomy is only attractive in finance if you can replay it. When an AI agent pays the wrong vendor, mishandles FX, or violates a policy, regulators, auditors, and internal risk teams will all ask the same question: what exactly happened and why?

“Immutable decision logs with full context snapshots at every step—not just what the agent did, but what it knew, what rules applied, and what the account state was,” is how Denisenko describes Brighty’s approach. That design principle is starting to show up in broader industry guidance as well, with frameworks from consultancies and regulators placing real‑time auditing, action logging, and deterministic replay near the top of their agentic AI checklists.
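One common way to make such a log tamper-evident is hash chaining, where each entry commits to the previous one. This is a generic sketch of that technique, not a description of Brighty's system: each appended entry snapshots the action and its context, and `verify` detects any after-the-fact edit.

```python
import hashlib
import json

class DecisionLog:
    """Append-only log: each entry snapshots what the agent knew and did,
    and is hash-chained to the previous entry so tampering is detectable."""

    def __init__(self):
        self.entries = []

    def append(self, action: str, context: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"action": action, "context": context, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("action", "context", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != recomputed:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.append("validate_counterparty", {"counterparty": "vendor-0042", "ok": True})
log.append("execute_payment", {"amount": 1200.0, "balance_before": 25_000.0})
```

Because every entry carries its full context snapshot, a compliance team can replay the decision tree, not just read the final transaction record.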


Denisenko is unsentimental about accountability. “When a bot is the final decision‑maker, the responsibility sits entirely with the entity that granted it that authority. That’s not a legal hedge—it’s the correct framing. It means the user or operator who configured the autonomous payment flow owns the outcome.” In that world, forensic traceability is not a compliance tax but a product feature.

“Forensic traceability is therefore a product feature you build for your operators, not just a compliance checkbox. Every agent action in Brighty’s infrastructure is auditable by design so that compliance teams can reconstruct the full decision tree, not just the transaction record.” It is a perspective that matches the direction large audit and assurance firms are moving in as they embed agentic capabilities into their own platforms while explicitly preserving the “fundamental role of human judgement, scepticism and insight.”

“Banks and fintechs that aren’t easily discoverable and operable by agents will simply not appear in the decision surface. That’s not a prediction, that’s already happening.”

From Interfaces to Decision Surfaces

If back‑office autonomy is one half of the story, the other is distribution: who shows up when an AI agent goes shopping for financial products? Denisenko does not hedge on the emerging narrative that banks and lenders which are not easily “readable” by AI agents will disappear from the checkout and credit decision surface.

“Banking interfaces as we know them are already becoming obsolete,” he says. “From direct experience: operating a bank account through an agent is an order of magnitude faster and more convenient than any mobile app.”

He argues that the industry is “at the genuine beginning of a new era—the speed at which agents handle routine financial operations and generate reporting has no prior comparison.” Over the next decade, he believes, competition in payments and lending will be won or lost not on glossy app design but on API quality, data structure, and agent‑readiness.

External analysis supports that direction of travel. Commentators on “agentic commerce” predict that AI‑driven checkouts and embedded credit flows will increasingly query a landscape of bank and lender APIs directly, ranking them based on price, risk, and constraints, rather than relying on human consumers to assemble options manually. “Banks and fintechs that aren’t easily discoverable and operable by agents will simply not appear in the decision surface,” Denisenko says. “That’s not a prediction, that’s already happening.”

For incumbents, that implies a strategic shift from designing for human eyeballs to designing for machine consumers: clear schemas, consistent metadata, programmatic constraints, and robust uptime visible to autonomous clients.
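A machine consumer of that kind might rank products from published descriptors like the sketch below. Every field here is a hypothetical example of "clear schemas, consistent metadata, programmatic constraints" rather than any real bank's API; the point is that an agent can evaluate eligibility without a human reading a marketing page.

```python
# Hypothetical machine-readable descriptor a lender might publish so that
# shopping agents can discover, compare, and rank it programmatically.
descriptor = {
    "product": "working-capital-loan",
    "currency": "EUR",
    "apr_percent": 7.9,
    "max_amount": 250_000,
    "constraints": {"min_months_trading": 12,
                    "restricted_sectors": ["crypto"]},
    "uptime_90d_percent": 99.95,
    "schema_version": "1.0",
}

def eligible(d: dict, amount: int, months_trading: int, sector: str) -> bool:
    """Check a borrower against the descriptor's programmatic constraints."""
    c = d["constraints"]
    return (amount <= d["max_amount"]
            and months_trading >= c["min_months_trading"]
            and sector not in c["restricted_sectors"])

print(eligible(descriptor, 100_000, 24, "retail"))  # True
```

A lender whose constraints exist only in prose, or whose API omits them, simply drops out of the agent's ranked shortlist, which is the "decision surface" risk Denisenko describes.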

The New Operating Model: CISO as Chief Agent‑Watcher

Underneath the API layers and decision logs, agentic banking is also reshaping how organisations are structured. Many industry playbooks talk about cross‑functional “agent control rooms” where finance, risk, and technology teams monitor and intervene in autonomous workflows. At Brighty, Denisenko says, the heaviest new load has landed on the security function.

“AI fluency is a priority across the entire organisation—it’s part of our benefits package and something we actively push in day‑to‑day work,” he notes. But structurally, “the heaviest new load is falling on the CISO function. The CISO now runs internal training, reviews agent configurations, audits routing logic, and owns the oversight of what our agents are doing and why.”


That security‑first posture mirrors broader concerns. Surveys find that while executives are excited about the ROI potential of AI agents, they are under‑investing in responsible AI and lack adequate guardrails, with a large majority having already experienced at least one AI‑related incident in production. Positioning the CISO as the de facto “chief agent‑watcher” is one way to close that gap.

“That security‑first approach to agentic infrastructure isn’t optional—when you’re connecting agents to live financial operations and user accounts, compliance and data exposure are non‑negotiable constraints that shape every architectural decision from the start,” Denisenko says. It is a view echoed by both regulators and advisory firms, who emphasise that scope, authority, auditability, and human override must be designed into agentic workflows from day one.

A New Social Contract for Autonomous Money

Taken together, Denisenko’s responses sketch out a new social contract for autonomous money movement. Agents can shoulder most of the transactional grunt work; humans remain the final trigger and the designers of policy; banks become infrastructure that must be legible not just to people but to increasingly opinionated machine clients.

The technology to automate large swathes of finance operations is maturing quickly. The harder work—now under way in places like Brighty and across incumbent banks—is about encoding context, traceability, and accountability in ways that keep regulators comfortable and customers confident. As interfaces recede and agents take the foreground, the winners in this next era of banking may not be those with the flashiest apps, but those whose systems are easiest for both auditors and autonomous agents to read.

Anushka Pandit
Anushka is a Principal Correspondent at AI and Data Insider, with a knack for identifying what is shaping the world and presenting it compellingly to readers. She combines a background in Computer Science with expertise in media communications to shape contemporary tech journalism.
