30 April 2026 · Nick Finch

Stop scheduling knowledge capture. Let your agents decide what to capture.

Interloom just raised $16.5 million to capture tacit knowledge before experts leave. The right problem at the right time. But there is a second half nobody is shipping yet: the agent that tells you what knowledge it needs, the moment it needs it.

AI · Agentic AI · RAG · Knowledge Capture · MCP

In March, Interloom raised $16.5 million in venture funding to capture tacit knowledge for AI agents. They are not alone. KNOA, an AI agent that runs structured interviews and produces documents, launched on Product Hunt in January. Squirro is selling a speech-to-graph pipeline for exit interviews. Narratize is shipping a Knowledge Capture Agent into the aerospace sector. Toloka launched Tendem as an MCP server connecting agents to a network of 10,000 verified domain experts.

The problem these companies are solving is real, and it matters. Most of what experts know is not written down. When they leave, retire, or move on, the knowledge leaves with them. Capturing that knowledge before it walks out the door is exactly the right thing to do.

But there is a second half to this story that nobody seems to be shipping yet. And the second half is where the system genuinely starts to compound.

The half that gets shipped

Every product I have just listed solves the same shape of problem in roughly the same way. A human decides what knowledge matters. A senior engineer is about to retire, a strategic review needs context, a project kickoff needs background. An interview gets scheduled. The AI runs the conversation. A document gets produced. The knowledge is preserved.

This is good work. Pre-emptive capture is the right defence against attrition risk. Without it, organisations lose decades of expertise overnight when the people who built the systems leave. Interloom’s framing of corporate memory is accurate. The problem is genuine and the market is real.

But pre-emptive capture has a limit. Every interview reflects a guess about what knowledge will matter. The expert talks about what they think is important. The interviewer covers what someone guessed to ask about. And the resulting document captures whatever survived that double filter of human judgement.

What it does not capture is whatever nobody thought to ask about.

The gap that appears in production

If you run an agentic system in production, you will discover something interesting. The questions users actually ask are not the questions you predicted they would ask.

We see this every day at inmydata. We run an agentic RAG platform called Knowledge AI that serves expert knowledge to agents across customer deployments. The retrieval pipeline runs the same way every time. Hybrid search combining vector and lexical retrieval. RRF fusion. A quality threshold. A secondary classifier for grey-zone results. We wrote about the architecture in Stop tuning your prompts.
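For readers who want to see the shape of that fusion step, here is a minimal sketch in Python. It is not our production code; the toy rankings, the threshold values, and the grey-zone floor are all illustrative.

```python
from collections import defaultdict

RRF_K = 60                 # common damping constant for reciprocal rank fusion
QUALITY_THRESHOLD = 0.03   # illustrative values only; the real gate is tuned per deployment
GREY_ZONE_FLOOR = 0.015

def rrf_fuse(vector_hits, lexical_hits, k=RRF_K):
    """Combine two ranked lists of chunk ids with reciprocal rank fusion."""
    scores = defaultdict(float)
    for ranking in (vector_hits, lexical_hits):
        for rank, chunk_id in enumerate(ranking, start=1):
            scores[chunk_id] += 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy rankings standing in for the vector and lexical retrievers.
fused = rrf_fuse(["chunk-a", "chunk-b", "chunk-c"], ["chunk-b", "chunk-d"])

confident = [c for c, s in fused if s >= QUALITY_THRESHOLD]
grey_zone = [c for c, s in fused if GREY_ZONE_FLOOR <= s < QUALITY_THRESHOLD]
# Confident chunks ground the answer; grey-zone chunks go to the secondary
# classifier; nothing in either bucket is the honest "I don't know" path.
```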

What that pipeline produces, when it works correctly, is either a confident answer grounded in retrieved context, or an honest “I don’t know” when nothing relevant is found. That second outcome is not a failure mode. It is a signal.

When the retrieval returns nothing on a subject the user has asked about, that is deterministic evidence of a knowledge gap. Not a heuristic. Not a guess. A specific question came in, the system looked for relevant context, and there was none. The gap is real and the topic is known.
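A gap event can be as small as a dataclass. This is a sketch rather than our schema; KnowledgeGap and answer_or_flag are hypothetical names for the idea.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class KnowledgeGap:
    """Deterministic record of a question the corpus could not answer."""
    question: str
    subject: str          # the topic the retrieval searched on
    detected_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def answer_or_flag(question: str, subject: str, retrieved_chunks: list) -> KnowledgeGap | None:
    """Return None on the normal path; return a gap event when retrieval found nothing."""
    if retrieved_chunks:
        return None       # answer grounded in the retrieved context
    # The user still gets the honest "I don't know"; the gap event goes to the
    # capture pipeline instead of being thrown away.
    return KnowledgeGap(question=question, subject=subject)
```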

That signal is exactly what most systems throw away. The user gets the apology, the agent moves on, and the next user with the same question gets the same apology. The system never closes the loop.

The third MCP server

We described our architecture in MCP won. Now build something with it. Three MCP servers. The first serves operational data from ERPs and CRMs. The second serves expert knowledge from our agentic RAG platform. The third does something different. It triggers knowledge generation.

The third server is the one that catches the gap signal. When the RAG retrieval returns no chunks on a subject, the third server fires. It does not retrieve anything. It requests something.

In our interview platform, customers register their domain experts and tag each one against the subjects they know about. When a gap is detected, the third server looks up which expert covers that subject and the agent reaches out automatically to schedule an interview. No human triage step. The signal came from production. The handoff is automatic.
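A stripped-down version of that third server might look like the following, written against the FastMCP helper in the official MCP Python SDK. The registry, the tool name, and the invite helper are placeholders, not our actual implementation.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("knowledge-generation")

# Illustrative registry; in the real platform customers tag experts against subjects.
EXPERT_REGISTRY = {
    "warehouse returns process": {"name": "Dana", "email": "dana@example.com"},
}

def send_interview_invite(email: str, subject: str, question: str) -> None:
    # Placeholder: the real system emails an invite link to the interview app.
    print(f"Invite sent to {email} for a session on {subject!r}")

@mcp.tool()
def request_knowledge(subject: str, question: str) -> str:
    """Fired when retrieval returns no chunks for a subject: request capture, don't retrieve."""
    expert = EXPERT_REGISTRY.get(subject)
    if expert is None:
        return f"No expert registered for '{subject}'; gap logged for later triage."
    send_interview_invite(expert["email"], subject, question)
    return f"Interview requested with {expert['name']} covering '{subject}'."

if __name__ == "__main__":
    mcp.run()
```

The important design choice is that the tool does not return knowledge. It returns an acknowledgement that capture has been kicked off.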

The expert receives an email with an invite link. They click it, the interview app opens, and they have a spoken conversation with a voice agent. The agent has a directed brief covering the gap, but it also follows up on interesting points the expert raises and drills into detail where the conversation gets useful. The feedback we get from experts is that it feels natural and enjoyable, more like a thoughtful chat than a form-filling exercise.

When the conversation ends, the platform processes the transcript automatically. A chunking agent with a deliberately constrained prompt produces a structured narrative organised by subject. The constraint is explicit. The agent must stick to exactly what the expert said. No filling gaps. No inference. Every claim must trace back to the raw transcript.
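To make the constraint concrete, here is a sketch of the kind of rule the prompt encodes, plus a post-generation check that enforces traceability. The wording and the field names are illustrative, not the production prompt.

```python
# Illustrative wording only; the production prompt is longer and more specific.
CHUNKING_PROMPT = (
    "Reorganise the transcript into knowledge chunks grouped by subject. "
    "Use only statements the expert actually made: no inference, no gap-filling. "
    "Each chunk must include a verbatim supporting quote from the transcript."
)

def grounded_chunks(chunks: list[dict], transcript: str) -> list[dict]:
    """Enforce the traceability rule after generation: drop any chunk whose
    supporting quote does not appear verbatim in the raw transcript."""
    return [c for c in chunks if c.get("quote") and c["quote"] in transcript]
```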

The expert then receives a second email saying the narrative is ready for review. They open the UI and see each chunk derived from the conversation. Crucially, clicking on a chunk surfaces the underlying section of the transcript it came from. They are not approving a polished summary against memory. They are checking each chunk against the source. When they approve, the chunks pass through the same quality gate that all knowledge entering our RAG platform passes through, regardless of source. An agent assesses each chunk, catches duplication automatically where it can, and flags genuine conflicts to a human for resolution.
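A rough sketch of the data shape that makes that review possible: each chunk carries the offsets of the transcript span it came from, and the gate resolves to one of a small set of outcomes. Names and fields here are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class GateOutcome(Enum):
    ACCEPT = "accept"
    DUPLICATE = "duplicate"            # handled automatically
    CONFLICT = "needs_human_review"    # flagged to a person for resolution

@dataclass
class ReviewableChunk:
    chunk_id: str
    subject: str
    text: str
    transcript_id: str
    source_start: int    # character offsets into the raw transcript, so the
    source_end: int      # reviewer sees exactly what the expert said
    approved_by_expert: bool = False

def quality_gate(chunk: ReviewableChunk, existing_texts: set) -> GateOutcome:
    """Crude stand-in for the assessing agent: exact-match dedup here;
    semantic dedup and conflict detection happen in the real pipeline."""
    if chunk.text in existing_texts:
        return GateOutcome.DUPLICATE
    return GateOutcome.ACCEPT
```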

Then the chunks are vectorised and indexed. Entity extraction runs to identify relationships between concepts in the new content, which feeds the entity-relationship search alongside the vector search. The original recording, the full transcript, and the derived chunks are retained permanently. If a user later flags an answer as wrong, the system traces the chunk back through the narrative to the source statement. Every piece of knowledge in the corpus is auditable to its origin.
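That audit trail is just a walk across three stores that are never deleted. A minimal sketch, with hypothetical field names:

```python
def trace_to_origin(chunk: dict, narratives: dict, transcripts: dict) -> dict:
    """Walk a flagged chunk back to its source: chunk -> narrative section -> transcript span.
    All three stores are retained permanently, so the walk always ends at a real statement."""
    narrative = narratives[chunk["narrative_id"]]
    transcript = transcripts[narrative["transcript_id"]]
    return {
        "chunk_text": chunk["text"],
        "narrative_section": narrative["sections"][chunk["section_index"]],
        "source_statement": transcript["text"][chunk["source_start"]:chunk["source_end"]],
        "recording_uri": transcript["recording_uri"],
    }
```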

The next user who asks a similar question gets an answer.

The loop is the point

Each step in that flow is engineered, but the value comes from the loop closing. User asks question. Gap detected. Expert interviewed. Knowledge captured, approved, indexed. Next user gets an answer. The system gets smarter from real usage, not from external review cycles or pre-emptive guesswork.

This is what closed-loop agentic systems are supposed to look like. The industry has been talking about closed loops for two years. What has actually shipped, in most cases, is one of three things. Memory of past conversations. Write-back of model-generated outputs to the corpus. Generic feedback frameworks where humans review answers and retrain models. None of these are closed loops in the meaningful sense. The first is just memory. The second is risky because the corpus expands from what the model said, not from what was actually true. The third has no automation in the trigger.

The fourth pattern, demand-driven capture from real production gaps, fed by the customer’s own domain experts, written back to the corpus as living knowledge, is what closes the loop properly. The trigger is automatic, the source is authoritative, and the result is permanent. Every cycle makes the system measurably better at the things users actually need.

This is also where the compound returns live. A batch-captured corpus is a fixed asset. It depreciates as the world changes around it. A demand-driven corpus grows where the demand is, in the shape of the demand. The cost of asking the agent a question it cannot yet answer drops with every cycle, because the next user with that question gets an answer instead of an apology. The system adapts to its users rather than waiting for users to adapt to it.

Schedule what you must, capture what you find

The argument is not that pre-emptive capture is wrong. Interloom and the batch-capture vendors are solving a real problem and you should still schedule capture sessions for the knowledge you can identify as critical. Senior engineers leaving, projects winding down, strategic transitions, all of these warrant pre-emptive interviews and the resulting documents are valuable.

But scheduled capture handles only the knowledge you know you need. Demand-driven capture handles the knowledge you did not know you needed, the gaps that only appear when real users ask real questions in production. The first is a defence against attrition. The second is the engine of adaptation.

Every agentic system that compounds in capability does it through closed loops. The trigger from real usage. The action that brings new information into the system. The result that improves future behaviour. Without the loop, you have a corpus that depreciates. With the loop, you have a system that grows where the work is being done.

Build for the loop. The agents that drive your system already know what they need. Let them ask.

Turn expertise into infrastructure

We build living knowledge bases that capture your experts' knowledge and get smarter through use.

Explore Knowledge AI