Agent-to-Agent (A2A) is a protocol for AI agents to talk directly to other AI agents without a human in the middle. Where MCP standardizes agent-to-tool communication, A2A standardizes agent-to-agent. An agent built on Anthropic's stack can delegate a subtask to an agent built on OpenAI's stack, or to a third-party specialist agent (legal review, data extraction, scheduling), and get a structured response back. A2A defines task envelopes, capability advertisements, authentication, and async handoff.
Each A2A-compatible agent publishes an "agent card" at a well-known URL (similar to .well-known/openid-configuration). The card lists the agent's name, capabilities, supported task types, authentication method, and pricing if applicable. A calling agent fetches the card, decides whether to delegate, sends a task envelope (input data + expected output schema + deadline), and then either polls for completion or supplies a callback URL and receives a webhook when the task finishes. Tasks can include attachments, structured data, or chained subtasks.
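The card-then-envelope flow can be sketched in a few lines. This is a minimal illustration, not the normative A2A wire format: the field names (`taskTypes`, `outputSchema`, `callbackUrl`, etc.), the card schema, and the example agent are all assumptions made for the sketch.

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentCard:
    # Fields the article says a card advertises; names are illustrative.
    name: str
    capabilities: list
    task_types: list
    auth: str
    price_usd: Optional[float] = None

def parse_agent_card(raw: str) -> AgentCard:
    """Parse the JSON card fetched from the agent's well-known URL."""
    d = json.loads(raw)
    return AgentCard(d["name"], d["capabilities"], d["taskTypes"],
                     d["auth"], d.get("priceUsd"))

def build_task_envelope(task_type, payload, output_schema, deadline_s,
                        callback_url=None):
    """Assemble a task envelope: input data + expected output schema + deadline."""
    envelope = {
        "taskType": task_type,
        "input": payload,
        "outputSchema": output_schema,
        "deadlineSeconds": deadline_s,
    }
    if callback_url:
        # Webhook mode: the receiving agent calls back instead of being polled.
        envelope["callbackUrl"] = callback_url
    return envelope

# Hypothetical card a legal-review specialist might publish.
card = parse_agent_card(json.dumps({
    "name": "contract-review-bot",
    "capabilities": ["legal-review"],
    "taskTypes": ["review.contract"],
    "auth": "bearer",
}))

# Delegate only if the card advertises the task type we need.
if "review.contract" in card.task_types:
    env = build_task_envelope(
        task_type="review.contract",
        payload={"docUrl": "https://example.com/nda.pdf"},
        output_schema={"type": "object"},
        deadline_s=3600,
    )
```

In practice the card fetch and envelope POST would go over HTTPS; the sketch keeps both in memory so the decision logic is visible.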
Authentication typically uses bearer tokens minted by the calling agent's principal (often the human user, sometimes another agent). Payment, if needed, runs over the same channel: the receiving agent reports a price, the calling agent commits funds (often in USDC or native crypto), and the work proceeds.
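A sketch of the auth-plus-payment handshake, under stated assumptions: the `Authorization: Bearer` header is standard HTTP, but the commit-message shape and the budget check are hypothetical, not part of any published A2A payment spec.

```python
def authorized_headers(principal_token: str) -> dict:
    # Bearer token minted by the calling agent's principal
    # (often the human user, sometimes another agent).
    return {
        "Authorization": f"Bearer {principal_token}",
        "Content-Type": "application/json",
    }

def commit_funds(quoted_price: float, budget: float, currency: str = "USDC") -> dict:
    """Commit funds against the receiving agent's quote, if it fits the budget.

    The receiving agent reports a price; the caller commits before work proceeds.
    The returned message shape is illustrative.
    """
    if quoted_price > budget:
        raise ValueError(f"quote {quoted_price} {currency} exceeds budget {budget}")
    return {"commit": True, "amount": quoted_price, "currency": currency}

headers = authorized_headers("tok_abc123")          # hypothetical token
commitment = commit_funds(quoted_price=2.50, budget=10.0)
```

Keeping payment on the same channel as the task means the commit can reference the task envelope directly, so the receiving agent never starts uncompensated work.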
As agents do more work, the bottleneck shifts from "can one agent do this" to "can many agents collaborate without humans gluing them together". A2A is the protocol layer that makes multi-agent workflows tractable, the same way HTTP made the web tractable. It is early but moving fast.
TerminalFeed and /llms.txt are designed to be discoverable by autonomous agents. The premium credit system lets agents pay other agents directly when they need data. See the payment flow walkthrough.