24 April 2026 · Nick Finch
Your coding agent just got bought. Build like you might have to leave.
SpaceX just bought Cursor. The consolidation at the top is complete. Portability costs less than you think, and coding agents make the discipline that delivers it essentially free. Here is how we build at inmydata, and why this is the right way to work regardless of who owns your tool next.
On 21 April, SpaceX announced an agreement giving it the right to acquire Cursor: $60 billion if the option is exercised later this year, or $10 billion to formalise an existing partnership. The deal pre-empted a $2 billion funding round that Cursor’s parent was hours from closing at a $50 billion valuation. It sits on top of the $1.25 trillion SpaceX-xAI merger completed in February, the largest in history.
The deal matters less than what it completes. Eighteen months ago the AI coding agent category had a vibrant set of independent options. Today the strategic tier has four owners. GitHub Copilot belongs to Microsoft. Codex to OpenAI. Claude Code to Anthropic. Cursor, assuming the option gets exercised, to SpaceX. A healthy ecosystem of smaller tools still exists below them, and some are doing excellent work, but the options at the top are narrowing and will likely continue to narrow.
If you want to know how these owners behave when their interests diverge, we have the precedent. In June 2025, the moment OpenAI’s acquisition of Windsurf became public, Anthropic cut Windsurf’s first-party Claude access with less than five days’ notice. Jared Kaplan, Anthropic’s co-founder, said “it would be odd for us to be selling Claude to OpenAI.” That is the pattern. Model providers cut distribution access when a competing lab owns the coding tool.
Which makes the uncomfortable question harder to avoid. A tool that writes a meaningful share of your code, shipped by a vendor whose owner is now a strategic rival of the model provider you depended on, is a supplier decision, not a productivity decision. Most enterprises picked these tools on capability alone, back when the ownership question did not exist. It exists now.
The response is not to avoid these tools. It is to build as if you might need to swap them.
How we build at inmydata
We use Claude Code for more than 90% of our development. A little Copilot around the edges. We are not swapping, and we are happy with our choice. But we build as if we might need to, and the cost of doing so has collapsed to the point where it is now just good engineering regardless of supplier risk.
The discipline has three parts and they work together. Every module in our codebase carries a structured header. Purpose, Responsibilities, Consumers, Dependencies, Notes. Every exported function carries a docstring with summary, parameters, returns, and side effects. Specifications and implementation plans live as markdown files in the repo, checked in alongside the code, and they are the source of truth for what we built and why. None of this depends on a tool’s proprietary configuration format. All of it travels between agents without translation.
Here is a fragment of a module header from our RAG pipeline, trimmed to the fields that matter for this illustration.
"""
module: relevance_judge
Responsibilities
- Score candidate chunks for relevance to the user's question.
- Enforce the judge prompt contract defined in specification.md.
Consumers
- retrieval_pipeline.assemble_context
- evaluation.offline_scoring
Notes
- Scoring rubric and cut-off thresholds are documented in specification.md.
- Do not mutate chunk objects; return a new scored list.
"""
Two things to notice. First, a new engineer, or a new agent, reads that header and knows within ten seconds what this module is for, who depends on it, and where the design decisions live. Second, the Notes field references specification.md. That cross-reference is the discipline at work. The module is telling the agent where to look for the rules that bound this code.
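The function-level half of the discipline looks much the same. Here is a sketch, not code from our pipeline — the function name, the Chunk shape, and the placeholder scoring are all invented for illustration — but it shows the summary, parameters, returns, and side-effects fields of the docstring standard in place:

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Chunk:
    """A candidate retrieval chunk. Minimal illustrative shape."""
    text: str
    score: float = 0.0


def score_chunks(chunks: list[Chunk], question: str) -> list[Chunk]:
    """Score candidate chunks for relevance to the user's question.

    Parameters:
        chunks: Candidate chunks from the retrieval stage.
        question: The user's question, verbatim.

    Returns:
        A new list of chunks with scores populated, in the order received.

    Side effects:
        None. Input chunks are not mutated; scored copies are returned.
    """
    # Placeholder scoring for illustration only: the real rubric and
    # cut-off thresholds live in specification.md, per the module Notes.
    return [replace(c, score=min(1.0, len(question) / 100)) for c in chunks]
```

The side-effects field is what makes the "do not mutate" note in the module header enforceable: an agent reading the docstring knows to return copies rather than edit in place.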
This level of documentation is not maintained by hand. We codified the standard into a /document skill. When a new module is created, the agent applies the skill automatically. For legacy code we run the skill across existing files and retrofit the discipline. Then, and this is the part that matters, we ask the agent to update its own instructions file to honour the standard going forward. Different tools use different filenames. Claude Code reads CLAUDE.md, Copilot reads copilot-instructions.md, and the next tool will read something else. The content is portable. The filename is not.
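One low-tech way to handle the filename split is to keep a single canonical instructions file and mirror it into each tool's expected name. A minimal sketch — the canonical filename and the sync helper are our invention for illustration; only the tool-specific filenames come from the tools themselves:

```python
from pathlib import Path

# Single source of truth for team conventions (name is our choice).
CANONICAL = Path("agent-instructions.md")

# Tool-specific filenames. Claude Code reads CLAUDE.md; GitHub Copilot
# conventionally reads .github/copilot-instructions.md.
TOOL_FILES = [
    Path("CLAUDE.md"),
    Path(".github/copilot-instructions.md"),
]


def sync_instructions() -> list[Path]:
    """Copy the canonical instructions into each tool-specific filename.

    Returns the list of files written. The content is identical
    everywhere; only the filename changes per tool.
    """
    content = CANONICAL.read_text()
    written = []
    for target in TOOL_FILES:
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)
        written.append(target)
    return written
```

Adding support for the next tool is one line in the list, which is the point: the conventions are written once and the per-vendor plumbing is trivial.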
The agent enforces its own constraints. That is the interesting loop. The discipline does not depend on human vigilance, because the tool that writes the code is also the tool that reads the rules, and the rules live where it reads them.
The maintenance burden flip
The prevailing narrative says agentic coding generates a maintenance burden. Lots of code, lots of tests, lots of documentation, all moving faster than any human team can keep up with. The conclusion people reach is that quality has to suffer, and that documentation of this depth would drown the team.
The reverse is true.
Pre-agents, documentation to this standard was uneconomic. No team could justify the overhead of hand-writing structured headers on every module, docstrings on every function, and canonical specifications for every feature. Agents change the economics entirely. They generate the documentation. They maintain it when the code shifts. They read it on the way in to the next change, which means the agent arrives already loaded with context and does not burn tokens reconstructing it from scratch. The upfront cost is real but modest. The downstream saving is larger, because the most expensive thing an agent can do is rediscover what it already knew last session.
And the codebase ends up cleaner than a human team would ever have produced, because the standard actually gets applied everywhere. Not the usual mix of well-documented modules, half-documented modules, and modules with a stale one-line comment from 2019. Better code as a side effect. The objection does not just fail; it gets the direction wrong. This is how you get to a more maintainable codebase than you would ever have written manually.
Portability is the payoff
Because the discipline lives in the code and in the specs, not in a tool-specific configuration, any capable agent can pick up the work. The instructions file is itself a portable artefact. A new agent reads it, opens a module, understands the purpose and consumers from the header, finds the design decisions in the cross-referenced specs, and starts work.
The onboarding cost for a new tool is measured in minutes. Point it at the repo, rename the instructions file if the new tool expects a different name, and go. No rewrites of proprietary rules files. No re-encoding of team conventions into a new vendor’s configuration format. The conventions are in the code.
That is what portability actually looks like. Not a procurement rubric or a second-source policy, but an engineering property of the codebase.
This is how you should build anyway
The SpaceX-Cursor deal is one reason to work this way. It is not the main reason.
The main reason is that this is how you get better code, cleaner architecture, and faster onboarding, regardless of what happens to your vendor. If the Cursor deal had never happened, this would still be the right way to build. The consolidation reinforces the argument. It turns good practice into competitive necessity.
Organisations that adopt this discipline get resilience and quality in the same move. Organisations that do not are betting that their current tool, and the model provider it depends on, will still be aligned in three years. Given what we have just watched happen twice in ten months, that is not a bet most CTOs should be making.