
Ten Minutes on a Sofa: The Real News Story


Watching the news last night, a segment came on that I had zero interest in. So I did what everyone does: I pulled out my phone and started scrolling. Google suggested an article about ACE, a technique that lets AI agents build their own 'playbook' of strategies by learning from their successes and mistakes, continuously improving without retraining.

I skimmed it. Looked interesting. Potentially relevant to the agentic AI projects we've been working on.

So I uploaded the document to the Claude mobile app with a simple prompt: "Could we use this approach to improve any of the agentic projects we've been working on? Check previous conversations on Claude Code."

The response was detailed and enthusiastic. (Isn't it always?)

Claude laid out specific examples where the technique would help. Take our voice agent for querying sales data. Users constantly ask questions like "How do sales this year compare to last year?" The problem? Language models don't account for incomplete years. They compare year-to-date figures against full prior years, leading to wildly misleading conclusions about sales dropping significantly. Claude suggested ACE would quickly address this by adding learned context: "Sales data for the current year is incomplete. When comparing to previous years, use year-to-date figures for both periods."
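To make the idea concrete, here's a minimal sketch of how that kind of learned "playbook" might work. This is not the ACE paper's implementation or the code Claude produced; the `Playbook` class and its methods are hypothetical, illustrating the core pattern of accumulating plain-text strategies and injecting them into prompts so behaviour improves without retraining:

```python
# Hypothetical sketch of an ACE-style playbook: the agent records strategies
# distilled from past successes and mistakes as plain-text rules, then
# prepends them to every prompt as learned context.
class Playbook:
    def __init__(self):
        self.rules: list[str] = []

    def learn(self, rule: str) -> None:
        """Record a strategy learned from a past success or mistake."""
        if rule not in self.rules:  # avoid duplicate entries
            self.rules.append(rule)

    def build_prompt(self, user_question: str) -> str:
        """Inject the learned rules as context ahead of the user's question."""
        context = "\n".join(f"- {r}" for r in self.rules)
        return f"Learned strategies:\n{context}\n\nQuestion: {user_question}"


playbook = Playbook()
playbook.learn(
    "Sales data for the current year is incomplete. When comparing to "
    "previous years, use year-to-date figures for both periods."
)
prompt = playbook.build_prompt("How do sales this year compare to last year?")
```

In a real system, the `learn` step would be driven by the agent reflecting on its own transcripts, and the rule store would be persisted and curated rather than kept in memory; the point here is only the shape of the loop: mistakes become rules, and rules become context.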

The response included sample implementations for our projects.

So while the news spiralled deeper into the story I had no interest in, I asked Claude to pull the project code from GitHub and make the changes. By the time the segment ended and the news moved on to something more interesting, the changes had been pushed to a branch on GitHub, ready for testing.

I spent a couple of hours this morning testing it. Just committed the changes back to the main branch. It'll be in production tomorrow.

Let that sink in for a moment. Ten minutes. Sitting on my sofa, half-watching TV, I implemented a cutting-edge AI improvement technique across multiple production systems. Something that, just a year ago, would have required weeks of research, careful implementation, extensive testing, and multiple rounds of debugging.

The truly exciting part? This pace of change isn't slowing down; it's accelerating. In a recent interview, Benjamin Mann from Anthropic suggested that within two to three years, we'll have models that are 1000x more intelligent than today's versions.

One thousand times more capable. Not 10% better. Not twice as good. A thousand times.

If today's models can autonomously implement complex algorithmic improvements while I watch TV, imagine what 1000x more powerful means. Problems that seem impossibly complex today will become solvable. Ideas we can barely conceive of will become buildable. The gap between imagination and implementation is collapsing at breathtaking speed.

This is genuinely thrilling. But with this extraordinary capability comes extraordinary responsibility. Those of us building with AI, and the companies creating these models, must keep safety at the absolute forefront of everything we do. We need thoughtful development, rigorous testing, and a constant awareness of potential consequences. The same power that can solve humanity's greatest challenges can also cause tremendous harm if deployed carelessly.

And yet, there I was last night, watching the news obsess over the same tired political squabbles and manufactured controversies. What's happening with AI right now is the real news story. This is the transformation that will define our generation. We're witnessing the most profound expansion of human capability since the Industrial Revolution, unfolding in real time, in ten-minute increments, on sofas around the world.

The future isn't something to fear; it's something to build responsibly. And for the first time in history, the tools to build it are available to anyone with curiosity, an internet connection, and a commitment to using these capabilities wisely.

Pretty exciting time to be alive. Let's make sure we get it right.

Let’s Talk About Your AI Goals

Got questions or ideas? Our experts are here to help. Book a quick call and see how inmydata can turn your AI plans into real results — fast, secure, and with no pressure.