28 April 2026 · Nick Finch
Decades of business logic. One protocol away from being an agent.
Every ERP, CRM, and custom business system is sitting on decades of valuable logic. A2A is the protocol that turns that logic into a discoverable agent. Here is what we are shipping, and what it means for every vendor running a business application.
Last year I argued that the UI of every major business application is about to become irrelevant. Agents are replacing the dashboards, the menus, the click-throughs. That argument was about what is being lost.
This post is about what is being gained.
Decades of business logic, sitting inside the ERPs, CRMs, accounting platforms, and custom systems that run the enterprise, is about to become the most valuable raw material in software. Not because the logic itself is changing. Because the way it can be reached is.
The asset hiding in plain sight
Walk into any enterprise and ask what their ERP does. The answer will list features. Workflow management, financial consolidation, supply chain coordination, regulatory reporting. What that answer misses is what is actually inside the system. Twenty years of approval chains worked out by people who understood the business. Edge cases that took weeks of legal review to encode. Compliance logic that survived three audits and a regulatory change. Domain expertise that nobody bothered to write down because it lives in the code.
That logic is genuinely valuable. The companies that built it spent fortunes getting it right. The customers depend on it daily. It is the thing that makes the software worth what they pay for it.
The problem is that this logic has always been locked behind interfaces designed for humans. UIs that require navigation. APIs that assume predetermined integration. CLI commands that nobody outside the engineering team knows about. The agents that are increasingly driving enterprise work cannot reach any of it without somebody building a bespoke bridge.
That bridge is what changes now.
Tools versus agents
There is an obvious objection. If the goal is to make business logic reachable by agents, why not just expose it as a tool. MCP standardised that pattern almost two years ago. Anthropic donated it to the Linux Foundation. Every major AI platform speaks it. Why add another protocol on top.
The answer is that the valuable parts of business systems are not tools.
A tool is a function you call. You hand it parameters, you get a result back, the interaction ends. An agent is a decision-maker you delegate to. You hand it a task, it works the task, it might pause for clarification, it might take longer than expected, and it owns its own lifecycle while it does so. The two shapes are different.
Your accounting system does not just store data. It makes decisions about how transactions get categorised, what triggers an approval flow, when an anomaly warrants a flag. Your CRM does not just store contacts. It runs workflows that route opportunities, schedule follow-ups, escalate to managers. Your inventory system does not just hold stock levels. It allocates, reserves, and reorders based on rules that took years to encode.
Tool-shaped exposure works fine for the simple parts. Reading a balance, fetching a contact record, querying inventory levels. MCP is the right fit for that. The valuable parts, the workflows and decisions, are agent-shaped. They need a protocol that understands a task can take time, can fail, can ask for input, can be cancelled.
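The two shapes can be sketched side by side. This is a minimal illustration, not any vendor's real code: the tool is a plain function, while the agent-shaped work is a task that owns a lifecycle. The state names follow the A2A task states (submitted, working, input-required, completed, canceled, failed); the approval scenario and data are invented.

```python
from dataclasses import dataclass, field
from enum import Enum

# Tool shape: one request, one result, no lifecycle. MCP fits this.
def get_stock_level(sku: str) -> int:
    inventory = {"SKU-100": 42}  # illustrative data
    return inventory.get(sku, 0)

# Agent shape: a task with its own lifecycle. State names follow A2A.
class TaskState(Enum):
    SUBMITTED = "submitted"
    WORKING = "working"
    INPUT_REQUIRED = "input-required"
    COMPLETED = "completed"
    CANCELED = "canceled"
    FAILED = "failed"

@dataclass
class ApprovalTask:
    request: str
    state: TaskState = TaskState.SUBMITTED
    history: list = field(default_factory=list)

    def advance(self, state: TaskState, note: str = "") -> None:
        # Record every transition so both parties can see how the task ran.
        self.history.append((state.value, note))
        self.state = state

task = ApprovalTask("approve PO-7781, over spend threshold")
task.advance(TaskState.WORKING)
task.advance(TaskState.INPUT_REQUIRED, "needs CFO sign-off")  # pauses for a human
task.advance(TaskState.COMPLETED, "approved")
```

The `input-required` pause is the part a tool wrapper cannot express: a function call has no way to stop mid-flight and wait for a human.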
That protocol is A2A. Most vendors will end up shipping both, and we will come back to why MCP is not enough on its own. The point of this post is the second half.
The protocol just shipped
A2A v1.0 reached production-ready status this April, on the protocol’s first anniversary. 150 organisations supporting it. Native integration in Microsoft Azure AI Foundry, Amazon Bedrock AgentCore, Google’s Agent Development Kit, LangGraph, CrewAI, LlamaIndex Agents, Microsoft’s Semantic Kernel, and AutoGen. Linux Foundation governance under the same body that took stewardship of MCP last December. The platinum members include AWS, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI.
This is not pre-production tooling. It is infrastructure becoming invisible in plain sight. Every framework you might choose for your agent strategy already speaks it.
What we are shipping
At inmydata, we run a platform called Knowledge AI. It serves expert knowledge to agents. Until this week, every consumer of that platform has needed bespoke integration. Each chat client built on LangChain, each voice agent built on LiveKit, each customer’s custom agent, all of them required custom glue to talk to Knowledge AI.
This week, we ship a dual-surface design.
The first surface is A2A. A typed Agent Card lives at a well-known URL describing what the platform can do, how to authenticate, and what events it streams. A JSON-RPC endpoint accepts tasks. The platform reports progress through streaming task events as it runs the gate, retrieve, judge, reformulate, answer pipeline. Any agent that speaks A2A can find this surface, read the card, authenticate, and start using it. LangChain agents handle this natively. So do CrewAI, ADK, and Semantic Kernel.
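Concretely, the surface looks something like the sketch below: a card describing the agent, and a JSON-RPC request submitting a task against it. The shapes are modeled on the A2A Agent Card and `message/send` method, but every field value here is an assumption for illustration, not inmydata's actual card or endpoint.

```python
import json

# Illustrative Agent Card, modeled on the A2A card shape. The URL, skill id,
# and security scheme are invented for this example.
agent_card = {
    "name": "Knowledge AI",
    "url": "https://agents.example.com/a2a",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "securitySchemes": {"bearer": {"type": "http", "scheme": "bearer"}},
    "skills": [
        {
            "id": "answer-expert-question",
            "name": "Answer expert question",
            "description": "Runs the gate, retrieve, judge, reformulate, answer pipeline.",
        }
    ],
}

# A JSON-RPC 2.0 request submitting a task to that agent.
send_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "What drove the Q3 margin drop?"}],
        }
    },
}

# Any A2A client serialises this over HTTP; here we just round-trip it.
wire = json.dumps(send_request)
```

A consuming agent never sees bespoke glue: it reads the card, picks a skill, and sends the request.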
The second surface is REST with server-sent events. LiveKit voice agents register their backends as callable functions inside the worker process. They need low-latency token streaming for voice responses, where the cost of wrapping every call in JSON-RPC adds latency without benefit. The REST adapter exposes the same agent core through a simpler shape designed for that latency profile.
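The latency argument is easiest to see in the wire format itself. A minimal sketch, assuming nothing about LiveKit's internals: each token from the agent core goes out as one server-sent-event frame (`data: ...` terminated by a blank line, per the SSE format), with no JSON-RPC envelope around it. The token source is a stub standing in for the real pipeline.

```python
# Sketch of the SSE token stream: one frame per token, no RPC envelope.
def sse_frames(tokens):
    for tok in tokens:
        yield f"data: {tok}\n\n"       # minimal SSE frame
    yield "event: done\ndata: [DONE]\n\n"  # terminal frame (convention, not spec)

frames = list(sse_frames(["Net", " revenue", " rose", " 4%."]))
```

A voice agent can hand each frame to text-to-speech the moment it arrives, which is the whole point of keeping this path simple.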
Both adapters terminate in the same agent core. Both share the same auth pipeline, the same tenant boundaries enforced by row-level security in the database, the same observability. The protocol differences end at the adapter layer. The principle is that A2A is the external agent surface. Internal services stay in-process. The right protocol for each consumer, not one protocol forced onto everything.
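The adapter principle reduces to something like this toy sketch (all names invented): two thin adapters unpack their own wire shape, then call one shared core with the same tenant context.

```python
# One agent core, two protocol adapters. Everything here is illustrative.
def agent_core(question: str, tenant: str) -> str:
    # Stand-in for the real pipeline; tenant scoping happens below this line.
    return f"[{tenant}] answer to: {question}"

def a2a_adapter(rpc_params: dict, tenant: str) -> dict:
    # Unpack the A2A-style message, return a task-shaped result.
    text = rpc_params["message"]["parts"][0]["text"]
    return {"kind": "task", "status": "completed",
            "answer": agent_core(text, tenant)}

def rest_adapter(body: dict, tenant: str) -> str:
    # Simpler shape for the low-latency path; same core underneath.
    return agent_core(body["question"], tenant)
```

Because both paths terminate in `agent_core`, auth, tenancy, and observability are implemented once; only the unpacking differs per protocol.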
This is the engineering reality. We are not adopting A2A because it is trendy. We are adopting it because it solves a specific problem, which is letting agents we have never seen consume our platform without us writing the integration. We are also keeping the simpler REST shape because it is the right tool for low-latency paths. Both choices earn their place.
“But MCP already has discovery”
This is the sharper version of the criticism. MCP also has dynamic tool discovery. An MCP host can call tools/list against a server it has never seen, read the available tools, and use them. So what does A2A actually add. Why not just use MCP for everything.
The difference is what gets discovered, and what that implies for the interaction.
MCP discovers tools. The surface is a list of functions, each with parameters and return types. The interaction shape is request-response. The host owns the reasoning loop and orchestrates the tools inside it.
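For comparison, MCP discovery looks roughly like this. The request uses the real `tools/list` method name; the response is hand-written to show the shape, not output from an actual server.

```python
# Sketch of MCP dynamic tool discovery over JSON-RPC.
list_request = {"jsonrpc": "2.0", "id": 7, "method": "tools/list"}

# Example response: a list of function-shaped descriptions.
example_response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "tools": [
            {
                "name": "get_balance",
                "description": "Read an account balance.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"account_id": {"type": "string"}},
                    "required": ["account_id"],
                },
            }
        ]
    },
}

# The host reads the list and gains leaf-node functions to call.
tool_names = [t["name"] for t in example_response["result"]["tools"]]
```

Everything discovered is a function: parameters in, result out. There is no state, no lifecycle, no peer on the other side.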
A2A discovers agents. The surface, the Agent Card, describes a participant with its own identity, its own authentication, its own skills, and its own lifecycle. Tasks have status. They can be long-running. They can fail, pause for input, or be cancelled. The two parties interact as peers.
That difference matters because the valuable parts of business systems do not fit a tool shape. An accounting system’s approval flow is not a function call. It is a task with a lifecycle, possibly pausing for human input, possibly running for hours, possibly ending in rejection. Forcing it into a tool wrapper either flattens out the lifecycle or pushes the lifecycle complexity into every consumer’s code.
Then there is the second-order effect. Both protocols support discovery, so an agent on either can extend its reach at runtime. The difference is what extending reach actually buys you. An MCP host that discovers more tools gets a richer toolbox, but the reasoning loop is still its own and the tools it finds are leaf nodes. An A2A agent that discovers another agent gets a peer, and that peer can extend its own reach the same way, including by discovering further agents itself. Capability composition is recursive in a way tool discovery is not.
That recursion is what makes agentic systems compound rather than plateau. An agent finds a peer, delegates, measures the outcome, and one of the next actions available to it is “ask a different agent.” The peer is doing the same thing on its side. MCP gives you a richer toolbox. A2A gives you a runtime ecosystem of peers extending each other.
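The recursion can be shown with a toy model (every name here is invented): each agent can discover peers from a shared registry and delegate onward, so a skill two hops away is still reachable.

```python
# Toy model of recursive capability composition. No cycles in this example.
registry = {}

class Agent:
    def __init__(self, name, skills, peers=()):
        self.name, self.skills, self.peers = name, set(skills), list(peers)
        registry[name] = self

    def handle(self, skill):
        if skill in self.skills:
            return f"{self.name} handled {skill}"
        for peer in self.peers:  # discover a peer, delegate the task
            result = registry[peer].handle(skill)  # the peer may delegate again
            if result:
                return result
        return None

Agent("ledger", {"post-journal"})
Agent("compliance", {"sanctions-check"}, peers=["ledger"])
orchestrator = Agent("orchestrator", set(), peers=["compliance"])

# The orchestrator reaches "post-journal" only via compliance -> ledger.
outcome = orchestrator.handle("post-journal")
```

An MCP host in the same position would hold `ledger` and `compliance` as leaf functions in its own loop; here each hop is a peer that can keep extending the chain.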
The protocol is the channel that makes the ecosystem possible. The criticism stops being interesting once you see the system effect.
Discoverability is automatic
Once the A2A surface is shipped properly, discovery is not something we build. It is something that happens.
We are not writing the agent that finds Knowledge AI. Somebody else is. Or will be. Our job is to make sure the Agent Card is correct, the skills are typed accurately, the streaming events fire when they should, and the authentication works. Once that is live, every agent in the ecosystem can find us. A LangChain client somebody is building today. A CrewAI agent in three months. An agent in a framework that does not exist yet. They all discover the same surface.
This is a different posture than building integrations. We are not stitching agents to the platform one at a time. We are exposing a discoverable surface and letting agents find it. The work shifts from the integration layer to the platform layer. Get the platform right, and the integrations can be automated.
The same shift, on every platform
The same logic applies to every business application sitting in the enterprise.
If you run an ERP, your platform is not the UI. Your platform is the workflows, the validation rules, the regulatory compliance, the financial consolidation logic that took twenty years to encode. The customer values that. The agents replacing the UI cannot use it without you exposing it.
If you run a CRM, your platform is the workflows that route opportunities, escalate deals, and trigger follow-ups. The agents need access to that decision-making, not just the underlying contact records.
If you run a custom business system that nobody else has, the logic in that system is your moat. Make it discoverable.
Salesforce announced Headless 360 at TDX on 15 April. The largest CRM vendor in the world made every capability of its platform callable by agents. 60 new MCP tools for tool-shaped access. A2A generally available alongside its Agent Fabric coordination layer for the agent-shaped parts. The platform is now an API, an MCP tool, or a CLI command, with A2A on top for agent-to-agent collaboration. A 25-year-old business platform decided that its UI is no longer the product. The data, the business logic, and the interfaces that make those things callable by agents are the product.
This is not theory. This is shipping. The largest incumbent in the category just made the bet. The smaller vendors, the bespoke business systems, the in-house platforms that run real workflows for real companies, all face the same decision now.
The next move
The decades of business logic you have built are your most valuable asset, your moat. The customer values it. The agents replacing the UI need it. The protocol that makes it reachable just shipped, with native support in every major framework.
Wrap simple interfaces with MCP. Wrap your logic in an agent. Ship the A2A surface. Get discovered.
The systems that do this become foundational infrastructure for every agent the customer will ever deploy. They do not just survive the UI collapse. They become more valuable on the other side, because the logic that was hidden behind clicks is now reachable by every workflow that needs it.
The systems that do not, get replaced. Not because the logic was bad, but because the agents that drive enterprise work could not reach them. The customer moves to a vendor whose platform is discoverable. The years of work disappear into a system nobody can find.
The agents are being built right now. The question is whether they discover your platform or work around it.