2 April 2026 · Nick Finch
Agent identity is the easy part.
RSAC 2026 launched a wave of agent identity products. But the pattern that actually makes agents safe and useful already exists in coding agents. Here's how to apply it to enterprise systems.
Claude Code does not ask permission to read every file. It starts constrained, suggesting changes and waiting for approval. As you work with it, trust builds. You let it edit files. Then run commands. Eventually, for tasks you have seen it handle well, it operates autonomously. By the time it is refactoring your codebase without asking, you have already built confidence through dozens of interactions where it proved itself.
That escalation pattern, from passive to supervised to autonomous, is not a feature of Claude Code. It is the solution to a fundamental problem. How do you give an agent enough autonomy to be useful while maintaining enough control to be safe?
Every mature coding agent has converged on the same answer. Trust is not granted at deployment. It is earned incrementally. And that pattern applies far beyond code.
Yet enterprise teams deploying agents to databases, infrastructure, and knowledge systems are not building trust escalation. They are watching RSAC announcements and waiting for identity vendors to solve the problem for them.
The vendor narrative is incomplete
RSAC 2026 was dominated by agent identity. Okta announced a full agent identity platform with discovery, registry, and shadow agent detection. Microsoft shipped Entra Agent ID, treating agents as first-class identities with conditional access and governance. IBM, Auth0, and Yubico partnered on a Human-in-the-Loop authorisation framework requiring cryptographically verified human approval for high-stakes agent actions. 1Password, Token Security, and others followed with their own frameworks.
The message from every direction was the same. Agents need identity.
That is true. But it is not sufficient. A credentialed employee with full system access and no supervision is still a risk. Identity tells you who the agent is. It does not tell you how much to trust it. The Cloud Security Alliance found that 60% of organisations cannot terminate a misbehaving agent and 68% cannot distinguish agent activity from human activity. Those are real problems. But they are not solved by giving agents better badges. They are solved by building the architectural layers that make agents governable in practice, not just in policy.
Three layers, not one
Through building agentic systems at inmydata, we have landed on three layers that work together. Authentication, audit, and trust escalation. Each one serves a distinct operational purpose, and together they satisfy compliance requirements as a byproduct of good engineering rather than as a checkbox exercise.
Authentication. The agent has credentials. You know who it is and who authorised it. Access is scoped. In our systems, agents connect to MCP servers through a proper OAuth layer. The user’s identity flows through, and data security is enforced down to column level within inmydata itself. For autonomous agents, we create dedicated agent users with explicit permissions. Every agent has an identity. Every identity has boundaries.
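A minimal sketch of what a scoped agent identity can look like. The names and fields here are illustrative, not inmydata's actual implementation; the point is that every permission is explicit and checkable.

```python
from dataclasses import dataclass

# Illustrative sketch: a dedicated agent user with explicit, scoped permissions.
@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    authorised_by: str           # the human who created and owns this agent user
    allowed_tools: frozenset     # which tools the agent may call
    allowed_columns: frozenset   # column-level data scoping

    def can_call(self, tool: str) -> bool:
        return tool in self.allowed_tools

    def can_read(self, column: str) -> bool:
        return column in self.allowed_columns

reporting_agent = AgentIdentity(
    agent_id="agent-reporting-01",
    authorised_by="n.finch",
    allowed_tools=frozenset({"run_query"}),
    allowed_columns=frozenset({"region", "revenue"}),
)
```

The agent can run queries against the columns it is scoped to and nothing else: `reporting_agent.can_call("drop_table")` and `reporting_agent.can_read("salary")` both return False. Every identity has boundaries, and the boundaries are data, not policy prose.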
Audit. Complete logging of every action the agent takes. What tools it called. What data it accessed. What decisions it made. What information it returned. This is not optional and it is not an afterthought. The audit layer is what gives you the ability to understand what happened when something goes wrong, and it is what gives regulators the traceability they require. The EU AI Act reaches full enforcement on 2 August 2026. If you cannot trace an agent’s actions back to a human decision, you have a problem.
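The shape of the audit layer matters less than its completeness: one structured, append-only entry per action, capturing who, what, with which arguments, and what came back. A hypothetical sketch (field names are illustrative):

```python
import datetime
import json

audit_log = []  # in production this would be durable, append-only storage

def record_action(agent_id, tool, arguments, result_summary):
    """Append one structured audit entry for a single agent action."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,        # who acted (traceable to an authorising human)
        "tool": tool,                # what it called
        "arguments": arguments,      # what data it touched
        "result_summary": result_summary,  # what it returned
    }
    audit_log.append(json.dumps(entry))  # serialise so each entry is an immutable snapshot
    return entry

record_action("agent-reporting-01", "run_query",
              {"sql": "SELECT region, revenue FROM sales"}, "42 rows")
```

Because each entry is serialised at write time, the trail answers "what happened?" after the fact without depending on mutable application state.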
Trust escalation. This is the layer most enterprises are missing entirely. It is the staged progression from passive to autonomous, with human checkpoints at each stage. And it is the layer that actually makes agents useful, because without it you are stuck choosing between two bad options. Lock the agent down completely (safe but useless) or let it run freely (useful but reckless). Escalation gives you the middle ground.
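The middle ground can be enforced in code rather than convention. A minimal sketch of an escalation gate, with illustrative stage names (our real systems use more stages, as described below):

```python
from enum import IntEnum

class TrustLevel(IntEnum):
    PASSIVE = 0      # read-only: retrieve and answer, take no actions
    SUPERVISED = 1   # may suggest actions; a human approves each one
    AUTONOMOUS = 2   # may execute directly; everything is still audited

def execute(action, level, approve):
    """Run an action only if the agent's trust level permits it.

    `approve` is a callback to a human reviewer: consulted for every
    action at SUPERVISED, bypassed at AUTONOMOUS, irrelevant at PASSIVE.
    """
    if level == TrustLevel.PASSIVE:
        return f"refused: {action} (agent is read-only)"
    if level == TrustLevel.SUPERVISED and not approve(action):
        return f"rejected by reviewer: {action}"
    return f"executed: {action}"
```

The key property is that autonomy is a value you raise deliberately, per agent and per task, not a side effect of handing out credentials.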
What escalation looks like in practice
We are building expert systems for two engineering companies right now. One manages complex database infrastructure. The other manages physical engineering assets in the field. Both need agents that can eventually take autonomous action against production systems. That is a high-stakes environment where getting it wrong has real consequences.
Our approach is deliberately staged. The agent starts as an internal tool for the company’s own engineers. It retrieves information, answers questions, helps with diagnostics. Read-only. No actions.
When the engineers trust the agent’s judgment, it becomes customer-facing, but still read-only. Customers can ask it questions. It cannot touch anything.
Next, the agent starts suggesting actions. An engineer reviews every suggestion before anything is executed. The agent recommends. The human decides.
Then the agent suggests actions directly to customers, who can choose to execute them. The human is still in the loop, but the loop has widened.
Only after sustained evidence that the agent’s suggestions are consistently correct, well-scoped, and safe do we allow autonomous execution. And even then, everything is audited. The kill switch is always there.
Each stage generates evidence that informs the next. The audit trail from stage one tells you whether the agent is ready for stage two. Trust is not a configuration setting. It is accumulated proof.
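One way to make "accumulated proof" concrete is to derive promotion readiness directly from the audit trail. This is an illustrative heuristic, not a prescription; the thresholds would be set per use case and per stage:

```python
def ready_for_promotion(outcomes, min_actions=50, min_approval_rate=0.98):
    """Decide from audit history whether an agent may advance a stage.

    `outcomes` is a list of booleans drawn from the audit trail:
    True where a suggestion was approved and correct, False where it
    was rejected or wrong.
    """
    if len(outcomes) < min_actions:
        return False  # not enough evidence yet, regardless of quality
    approval_rate = sum(outcomes) / len(outcomes)
    return approval_rate >= min_approval_rate
```

A run of ten perfect suggestions is not enough evidence; a hundred actions at a 98% approval rate might be. Either way, the decision to widen the loop is made from recorded outcomes, not from optimism.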
The pattern is proven
This is not a novel framework. It is the same pattern every mature coding agent has converged on, and there is a good reason the industry landed here. The fundamental tension between usefulness and safety does not have a binary solution. You cannot solve it with identity alone. You cannot solve it with policy documents. You solve it by building a system where trust is earned incrementally and every stage is observable.
The vendors announcing identity products at RSAC are solving a real and necessary problem. Authentication matters. But authentication is one layer. The organisations treating it as the whole solution are the ones that will end up with credentialed agents they still cannot trust, govern, or safely scale.
Build it now
The organisations building all three layers today (authentication, audit, and trust escalation) are not just preparing for August. They are accumulating institutional knowledge about where the trust thresholds sit, which escalation stages matter for which use cases, and what the audit trail needs to capture. That knowledge compounds. Every agent they deploy teaches them something that makes the next deployment faster and more reliable.
The ones waiting for perfect vendor infrastructure or finalised regulatory guidance will start learning those lessons months later. The pattern is proven. The architecture is straightforward. Do not wait for someone to sell it to you. Build it.