The Compliance Challenges of Agentic Commerce

Written by The OneID Team® | 17/02/26 18:09

The Identity Problem

In traditional commerce, the identity question is straightforward. A person visits a website, identifies themselves through a login or a payment credential, and completes a transaction. The merchant knows who the buyer is.

In agentic commerce, an AI agent sits between the buyer and the merchant. The merchant receives a transaction request from software, not a person. The fundamental question becomes: who authorised this action, and how can the merchant verify that?

This is not a theoretical concern. Payment networks are already publishing frameworks that emphasise mandates, scoped permissions, and cryptographic proof of intent. Visa has introduced its Trusted Agent Protocol, a standards-based framework for verifying agent identity and intent in real time. Mastercard has launched Agent Pay with Agentic Tokens that tie AI agents to individual users through tokenisation technology.

The challenge is that most digital commerce systems are still designed around behavioural inference: click patterns, device signals, session flows, and historical activity. These signals weaken significantly when agents act asynchronously, operate across multiple platforms, and transact without direct human interaction.

Delegation and Authority

When a person delegates purchasing authority to an AI agent, that delegation needs to be explicit, scoped, revocable, and auditable.

Explicit means the person consciously granted the agent permission to act. Scoped means the agent can only operate within defined limits, whether spending caps, merchant restrictions, product categories, or time windows. Revocable means the person can withdraw authority at any time. Auditable means there is a complete record of what was authorised, what was executed, and what the outcome was.
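The four properties above can be sketched as a data structure. This is a minimal illustration, not any particular protocol's schema; all field names and limits here are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DelegationMandate:
    """Illustrative standing authority granted by a person to an agent."""
    principal: str                 # explicit: who consciously granted authority
    agent_id: str
    spending_cap: float            # scoped: per-transaction limit
    allowed_merchants: frozenset   # scoped: merchant allowlist
    expires_at: datetime           # scoped: time window
    revoked: bool = False          # revocable: can be withdrawn at any time
    audit_log: list = field(default_factory=list)  # auditable: complete record

    def revoke(self) -> None:
        self.revoked = True

    def authorise(self, merchant: str, amount: float) -> bool:
        """Check a proposed purchase against the mandate and record the outcome."""
        now = datetime.now(timezone.utc)
        ok = (not self.revoked
              and now < self.expires_at
              and merchant in self.allowed_merchants
              and amount <= self.spending_cap)
        self.audit_log.append({
            "timestamp": now.isoformat(),
            "merchant": merchant,
            "amount": amount,
            "approved": ok,
        })
        return ok
```

Note that a purchase outside the allowlist or above the cap is refused but still logged, so the audit trail records what was attempted as well as what was executed.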

Current agentic commerce protocols handle some of this at the transaction level. The Agentic Commerce Protocol (ACP) uses Shared Payment Tokens scoped to specific merchants and amounts. UCP supports modular capabilities that merchants can choose to adopt. But the broader question of ongoing delegation governance, where a person grants standing authority to an agent across multiple contexts over time, requires identity infrastructure that sits beneath the commerce layer.

The Liability Question

When an AI agent makes a purchase that is later disputed, who is responsible?

If the agent ordered the wrong product because it misinterpreted the buyer's instructions, is that the fault of the buyer, the AI provider, or the merchant? If an agent was manipulated by a fraudulent listing, who bears the loss? If an agent exceeded its spending authority, can the buyer reverse the transaction?

Existing consumer protection regulations were built on the assumption that a human initiated the transaction. Error resolution and liability protections under regulations such as the UK Consumer Rights Act and payment standards like those from the Payment Systems Regulator assume a person made the purchasing decision. An AI agent does not fit neatly into these frameworks.

Legal analysis from firms including Crowe and Torys has noted that this accountability gap could result in liability shifting unpredictably between the user, the merchant, the platform, and the AI provider. Businesses entering agentic commerce need to define these boundaries explicitly in their terms, contracts, and operational procedures.

Regulatory Gaps

No comprehensive regulatory framework for agentic commerce exists anywhere in the world as of early 2026.

The EU AI Act, the most ambitious AI regulation to date, predates agentic commerce and contains no specific provisions for autonomous purchasing agents. Data protection laws like UK GDPR address how personal data is processed but do not contemplate scenarios where AI agents process data across multiple merchants in a single automated workflow.

Financial services regulation presents particular challenges. Anti-money laundering programmes and Know Your Customer requirements were designed around human identity. When an agent initiates a financial transaction, questions arise around beneficial ownership, transaction monitoring, and sanctions screening that existing frameworks do not cleanly resolve.

The industry is not waiting for regulators to catch up. The Linux Foundation established the Agentic AI Foundation in late 2025, backed by Anthropic, Google, Microsoft, OpenAI, and others, focused specifically on interoperability, identity, and payments infrastructure for autonomous commerce.

The Fraud Problem

Agentic commerce creates new opportunities for fraud. Visa reported a 450% increase in dark web posts mentioning "AI Agent" in the six months to early 2026 compared to the prior period.

The risks include agent impersonation, where bad actors create agents that mimic legitimate purchasing behaviour; credential harvesting through agent interactions; and the creation of fraudulent merchant networks designed to exploit autonomous purchasing flows. Traditional fraud detection systems were built to flag anomalies for human review, but agents can spin up thousands of targeted operations rapidly, adapting in real time to detection measures.

This is why compliance infrastructure, rather than behavioural monitoring alone, is critical to the agentic commerce ecosystem. Proving agent authority through verifiable credentials, binding identity to transactions cryptographically, and generating immutable audit trails provides a fundamentally stronger foundation than inferring legitimacy from behavioural patterns.
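Binding identity to a transaction cryptographically can be illustrated with a short sketch. For simplicity this uses a shared-secret MAC from the Python standard library as a stand-in for the asymmetric signatures (such as Ed25519 over a verifiable credential) that real agent-identity schemes would use:

```python
import hashlib
import hmac
import json

# Illustrative only: an HMAC stands in for the asymmetric signatures
# a production credential scheme would use.

def sign_transaction(agent_key: bytes, txn: dict) -> str:
    """Bind an agent's identity to a transaction by signing its canonical form."""
    canonical = json.dumps(txn, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(agent_key, canonical, hashlib.sha256).hexdigest()

def verify_transaction(agent_key: bytes, txn: dict, signature: str) -> bool:
    """A merchant or network re-derives the signature to prove agent authority."""
    expected = sign_transaction(agent_key, txn)
    return hmac.compare_digest(expected, signature)
```

Any tampering with the amount or merchant after signing changes the canonical form, so verification fails; this is what makes a signed audit trail tamper-evident rather than merely descriptive.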

Frequently Asked Questions

What are the compliance challenges for agentic commerce?

The main compliance challenges are: proving who authorised an AI agent to act (identity), defining what the agent is permitted to do (delegation), determining who is liable when transactions are disputed (liability), navigating regulations designed for humans rather than agents (regulatory gaps), and preventing new forms of fraud targeting autonomous purchasing flows.

Who is liable when an AI agent makes a wrong purchase?

Liability is currently unclear. Existing consumer protection laws assume a human initiated the transaction. The responsibility could fall on the buyer, the AI provider, the merchant, or the platform depending on the circumstances. Legal experts recommend that businesses define liability explicitly in their contracts and terms of service before entering agentic commerce.

Is there a regulation specifically for agentic commerce?

No. As of early 2026, no jurisdiction has enacted regulation specifically addressing agentic commerce. The EU AI Act predates the technology, and financial regulations do not contemplate autonomous AI agents. Industry-led standards from organisations including the Linux Foundation's Agentic AI Foundation are filling the gap while regulators develop frameworks.

What is Know Your Agent?

Know Your Agent is an emerging concept that extends traditional Know Your Customer (KYC) principles to AI agents. It involves verifying the identity of the agent, confirming the human or organisation that owns it, validating its authorisation to act, and maintaining an audit trail of its actions. Several compliance providers, including OneID, are developing infrastructure to support this requirement.
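The four Know Your Agent steps described above could be sketched as a simple verification pipeline. The record and registry structures here are hypothetical; real KYA infrastructure would resolve these checks against verifiable credentials rather than plain dictionaries:

```python
def know_your_agent(agent_record: dict, registry: dict) -> dict:
    """Run the four illustrative KYA checks: identity, ownership,
    authorisation, and audit trail. All field names are assumptions."""
    checks = {
        "identity_verified": agent_record.get("agent_id") in registry,
        "owner_confirmed": bool(agent_record.get("owner")),
        "authorisation_valid": agent_record.get("mandate", {}).get("active", False),
        "audit_trail_present": isinstance(agent_record.get("audit_log"), list),
    }
    checks["approved"] = all(checks.values())
    return checks
```

An agent failing any single check is rejected, mirroring how traditional KYC treats an incomplete customer file.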