
When AI Agents Act on Behalf of Your Customers: The New Layer of Responsibility in Finance

Automation in banking and fintech is no longer limited to internal processes or bots executing repetitive tasks. Increasingly, customers are beginning to delegate decisions and transactions to artificial intelligence agents capable of comparing offers, activating services, or moving money between institutions — all while the customer is offline.

This scenario, which only a few years ago seemed like science fiction, is now an emerging reality. And with it comes a challenge that no institution can ignore: delegated responsibility. If an agent can execute end-to-end transactions on behalf of a customer, risk is no longer centered exclusively on human identity. The critical question now becomes: who is accountable when something goes wrong?

The shift from KYC (Know Your Customer) to KYA (Know Your Agent) is not merely conceptual. It represents a structural change in how financial institutions must manage identity, authorization, and traceability in an increasingly automated world.

From KYC to KYA: The Natural Evolution of Identity Control

For decades, financial institutions have built their control systems around KYC: identifying the customer, verifying their identity, monitoring their activity, and maintaining evidence of every transaction. This approach worked because the account holder and the acting party were the same person.

In an environment where agents operate autonomously, that symmetry disappears. Institutions now interact with a non-human actor that holds delegated authority and can execute complex operations independently. KYA emerges as the natural evolution of KYC: it does not replace human identity verification but rather adds a layer of control over the agents acting on the customer’s behalf.

The purpose of KYA is to answer questions such as: which agent is acting, under what authority, within what operational limits, and how can all of this be demonstrated to an auditor or regulator?
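
To make these questions concrete, here is a minimal sketch of what a KYA record might look like, written in Python. Every type and field name below (AgentIdentity, Delegation, scopes, and so on) is an illustrative assumption, not a standard or a vendor API.

```python
# Illustrative KYA data model: which agent is acting, under what
# authority, within what limits, and with what evidence.
# All names and fields are assumptions for the sake of the sketch.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AgentIdentity:
    agent_id: str          # unique identifier of the acting agent
    vendor: str            # who builds and operates the agent
    software_version: str  # auditable build identifier

@dataclass
class Delegation:
    customer_id: str       # the verified human identity (the KYC side)
    agent: AgentIdentity   # the non-human actor (the KYA side)
    scopes: list[str]      # explicit permissions, e.g. "payments:initiate"
    per_tx_limit: float    # maximum amount per single transaction
    daily_limit: float     # cumulative cap across a day
    valid_until: datetime  # delegations should expire, not live forever
    evidence_ref: str      # pointer to the signed authorization evidence
```

Keeping the delegation record separate from the human identity record mirrors the point above: KYA adds a layer of control on top of KYC rather than replacing it.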

Autonomous Delegation and the Associated Risks

The integration of AI agents introduces risks that did not exist in the traditional model. A legitimate agent may operate within its assigned permissions and still execute actions that create undue exposure, introduce operational errors, or breach regulatory requirements.

This is not hypothetical. Recent studies on non-human identity management show that many organizations have limited visibility over service accounts, tokens, or API keys. If controlling these internal identities is already challenging, the risk multiplies when external autonomous agents operate on behalf of real customers, expanding the surface of responsibility.

The core issue lies in delegated authority: the account holder trusts the agent to make decisions on their behalf, yet the financial institution remains accountable to auditors and regulators for every action performed.

The risk surface becomes an ecosystem of interdependent actors, each capable of executing high-impact actions. Moreover, these risks extend beyond direct financial losses. The consequences may include:

  • Regulatory non-compliance, such as the lack of evidence regarding authorization or transaction traceability.
  • Exposure to internal and external fraud, where compromised agents can execute malicious transactions without triggering traditional alerts.
  • Reputational damage, if the institution cannot demonstrate to customers or regulators that adequate controls were exercised over every action.

In this context, the risk is no longer purely technological; it is a matter of accountability. Financial institutions must ensure that every action performed by an agent can be attributed, validated, and audited, regardless of the customer’s delegated authority.

Governance and Visibility: The Foundation of Trust

Managing autonomous agents requires going beyond conventional controls. Trust cannot be built solely on the assumption that an agent will act correctly; there must be a clear governance infrastructure capable of tracking, validating, and auditing every automated decision.

Among the critical elements are:

  • Robust human identity verification: The individual delegating authority must be fully authenticated through reliable and verified digital identity methods.
  • Authentication and recording of delegation: Each authorization must be documented with cryptographically verifiable evidence, ensuring that the intent and scope of the delegation can be demonstrated to auditors and regulators (a signing sketch follows this list).
  • Clear definition of operational limits: Agents must operate under explicit permissions, with restrictions on critical transactions and differentiated access policies based on risk, transaction type, and customer profile (limits, escalation, and audit logging are combined in the second sketch after this list).
  • Escalation mechanisms for critical operations: For high-impact actions — such as fund transfers, credential changes, or access to sensitive information — step-up mechanisms, additional validation, or mandatory human intervention must be implemented.
  • Comprehensive traceability and auditable evidence: Every agent interaction must be logged, including the specific agent, the user, the action performed, the policies applied, and any exceptions, enabling precise reconstruction of any event.

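To illustrate what "cryptographically verifiable evidence" of a delegation could look like, the sketch below signs a delegation payload and verifies it afterwards using Ed25519 via the widely used `cryptography` package. The payload schema and identifiers are assumptions for the example; in practice, key custody and the evidence format would be defined by the institution.

```python
# Sketch: signing a delegation record so its intent and scope can
# later be demonstrated to an auditor. Payload fields are illustrative.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In production the signing key would live in an HSM/KMS, never in memory.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

# Illustrative delegation payload: the intent and scope of the authorization.
delegation = {
    "customer_id": "cust-123",
    "agent_id": "agent-xyz",
    "scopes": ["payments:initiate"],
    "per_tx_limit": 500.00,
    "valid_until": "2026-01-01T00:00:00Z",
}
payload = json.dumps(delegation, sort_keys=True).encode()
signature = signing_key.sign(payload)

# Later, the institution (or an auditor) re-verifies the evidence.
try:
    verify_key.verify(signature, payload)
    print("delegation evidence verified")
except InvalidSignature:
    print("evidence does not match: possible tampering")
```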
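In the same hedged spirit, the second sketch combines operational limits, escalation, and traceability: a policy check that returns allow, step-up, or deny, plus a hash-chained audit log so any decision can be reconstructed later. Thresholds, action names, and the escalation rule are all hypothetical.

```python
# Sketch: evaluating an agent action against explicit limits, escalating
# high-impact operations to a human, and recording a tamper-evident
# audit trail. All thresholds and names are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

HIGH_IMPACT = {"transfer_external", "change_credentials", "export_data"}

def evaluate(action: str, amount: float, scopes: list[str],
             per_tx_limit: float) -> str:
    """Return 'deny', 'step_up' (human validation), or 'allow'."""
    if action not in scopes:
        return "deny"        # outside the delegated authority
    if action in HIGH_IMPACT or amount > per_tx_limit:
        return "step_up"     # mandatory human intervention
    return "allow"

audit_log: list[dict] = []

def record(agent_id: str, user_id: str, action: str, decision: str) -> None:
    """Append a hash-chained entry so tampering is detectable."""
    prev_hash = audit_log[-1]["hash"] if audit_log else "genesis"
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id, "user": user_id,
        "action": action, "decision": decision,
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(entry)

decision = evaluate("transfer_external", 250.0,
                    ["payments:initiate", "transfer_external"], 500.0)
record("agent-xyz", "cust-123", "transfer_external", decision)
print(decision)  # -> "step_up": external transfers always escalate
```

Chaining each entry's hash to the previous one means a single altered record breaks verification of everything after it, which is what makes the trail useful as audit evidence.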
Within this ecosystem, digital identity and biometrics play a strategic role. Facephi provides a comprehensive solution that unifies identity verification, fraud prevention, and compliance/AML capabilities, ensuring that every delegated action can be verified and audited. This approach delivers strong evidence for auditors and regulators while guaranteeing compliance with global standards for traceability, control, and oversight.

Strategic Preparedness for Autonomous Agents

Organizations that understand and properly manage this new layer of responsibility will transform a potential risk into a competitive advantage. The key lies in establishing a strong control framework that integrates:

  • Robust trust signals.
  • Clear authorization policies.
  • Escalation mechanisms for high-risk operations.
  • Auditable evidence of every relevant action.

Institutions that adopt these principles will be better positioned to comply with international regulations such as the EU AI Act, DORA, and the latest AML/CFT updates, while maintaining trust and operational resilience against fraud, errors, and critical events.