AI agents can buy things on your behalf, right?
But would you trust them enough to let them?
The challenge: today’s payment systems assume a human is clicking “buy.”
With agents in the loop, how do we make sure payments are:
- Authorized – Did the user really give the agent permission?
- Authentic – Does the agent’s request match the user’s true intent?
- Accountable – Who’s responsible if something goes wrong?
Google proposes to solve this problem with its new Agent Payments Protocol (AP2).
How AP2 works:
Instead of relying on a human click, AP2 introduces Mandates – cryptographically signed digital contracts that capture a user's intent in a tamper-proof way.
Too much jargon?
OK, let's walk through an example.
Imagine you tell your AI agent: “Buy me concert tickets when they go on sale, but only if they are under INR 10,000.”
Now, if an AI agent simply "clicks" buy for you:
- The merchant has no way to know if the agent is really acting on your instructions or just making up the request.
- The system has no way to prove you actually gave consent for that specific purchase.
- There is no standard audit trail.
How AP2 solves it:
- You create an Intent Mandate: a signed digital instruction that says "buy tickets under INR 10,000."
- When tickets appear, your agent generates a Cart Mandate: the actual seats + price.
- The merchant receives both mandates as cryptographic proof that the agent is authorized, the price matches your conditions, and the purchase is valid (a rough sketch of this check follows below).
Result: The agent buys securely without you being online, while the merchant is confident the payment is legitimate.
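To make the mandate idea concrete, here is a minimal sketch in Python of how a signed Intent Mandate and Cart Mandate could be checked by a merchant. The field names, keys, and verification logic here are my own assumptions for illustration only; the real AP2 spec defines its own credential formats, roles, and payment rails.

```python
# Conceptual sketch only: not the actual AP2 specification or SDK.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. The user signs an Intent Mandate on their device:
#    "buy concert tickets, but only under INR 10,000."
user_key = Ed25519PrivateKey.generate()
intent_mandate = {
    "type": "intent_mandate",
    "item": "concert_tickets",
    "max_price_inr": 10_000,
}
intent_bytes = json.dumps(intent_mandate, sort_keys=True).encode()
intent_signature = user_key.sign(intent_bytes)

# 2. When tickets go on sale, the agent builds a Cart Mandate
#    with the concrete seats and price it found.
agent_key = Ed25519PrivateKey.generate()
cart_mandate = {
    "type": "cart_mandate",
    "item": "concert_tickets",
    "seats": ["B12", "B13"],
    "price_inr": 8_500,
}
cart_bytes = json.dumps(cart_mandate, sort_keys=True).encode()
cart_signature = agent_key.sign(cart_bytes)

# 3. The merchant verifies both signatures and checks that the cart
#    stays within the limits the user actually authorized.
def merchant_accepts(intent, intent_sig, cart, cart_sig, user_pub, agent_pub):
    user_pub.verify(intent_sig, json.dumps(intent, sort_keys=True).encode())
    agent_pub.verify(cart_sig, json.dumps(cart, sort_keys=True).encode())
    return (cart["item"] == intent["item"]
            and cart["price_inr"] <= intent["max_price_inr"])

print(merchant_accepts(intent_mandate, intent_signature,
                       cart_mandate, cart_signature,
                       user_key.public_key(), agent_key.public_key()))
# True: the cart matches the item and is under the signed INR 10,000 limit.
```

The point of the sketch: the merchant never has to take the agent's word for it. The signatures and the price check give a verifiable chain from your stated intent to the final purchase.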
It seems that with AP2, agents can unlock new commerce experiences: personalized offers, delegated purchases, coordinated travel bookings, and more.
What else do you think can be unlocked with AP2?