Coding with Agents: My Experiences with Vibe Coding
Coding assistants are evolving at remarkable speed. Where we once had simple text completion or suggested snippets in an IDE, we now see autonomous coding agents that can be assigned a task, generate pull requests, and work independently for hours. Over the past couple of months I have experimented with several of these agents across a range of projects. So, what is it actually like to work in this new way, and what might it mean for the future of coding?
Testing the Top Coding Agents
I tried a number of different assistants; here are my reflections on my three favourites.
replit.com: A Cloud IDE with Built-in Agents
This is a pure online IDE with agents built in. It spins up its own environments on cloud VMs and works entirely within them. The interface is intuitive, and navigating the file system feels natural. It is particularly strong at deploying complete architectures, since it controls the entire environment, and it integrates well with GitHub. Hosting is included, so deployment is a single click. Surprisingly, projects built in its Linux environment were easy to run locally on my PC. A key advantage is that it works on its own copy of the source code, so my development machine is not cluttered with dozens of small projects. I only pull code locally when I need to.
GitHub Copilot: Seamless Integration with Your IDE
This integrates seamlessly with IDEs such as Visual Studio and Visual Studio Code, and its GitHub integration is obviously excellent. You can assign tasks directly to Copilot and leave it to work. Setting it a bug-fix task before finishing for the evening, then returning the next morning to find a pull request waiting, felt like a genuine milestone.
Claude Code: A Command Line Powerhouse
This is a command line agent that is surprisingly straightforward to use. Avoid the add-ins for VS Code; running it in the terminal is far better. It has strong GitHub integration, and the terminal-first approach seems to give it extraordinary versatility. I have even seen people use it to automate tasks beyond coding, which makes sense after a little experimentation. You quickly stop missing the IDE integrations that other agents provide. Claude Code is limited to Anthropic models, but since these currently outperform most others at coding tasks, that does not feel restrictive.
On balance, I found myself using Claude Code most often. Sonnet 4.5 consistently felt like the best model, and it performed particularly well when used with Claude Code. The combination of a leading model and an agent built by the same team produced a tool that felt the most capable overall. Its ability to explain, reason and build autonomously seemed stronger than the alternatives, even when they were running the same underlying model.
How Coding Agents Changed My Workflow
The biggest impact was that they made me more ambitious. Suddenly it felt possible to turn almost any idea into working software within minutes. For simple ideas, that really is the case. My first attempt was an SEO analysis tool. Built in Replit, it was live on the web in about 20 minutes. Where I expected UI refinements to be a stumbling block, the agent handled them with ease.
More complex projects, however, highlight the current limitations. For example, we have long been working towards delivering an AI-driven dashboard builder for our data platform. Our original approach was to break the task into many tightly defined steps such as selecting data, choosing visualisations, and formatting numbers. This method works, but our recent experiments with agentic workflows showed that highly constrained designs prevent you from benefiting as models improve. A better approach is to remove as much scaffolding as possible and lean into the model’s intelligence. Even if it falls short now, the models will improve, and the solution will naturally become state of the art.
So I tried building an AI dashboard designer with minimal scaffolding. I made rapid progress at first, but soon ran into a problem that caused the agent to loop, misdiagnose issues, and make sweeping changes that were not required. We did eventually overcome it, and I drew some key lessons.
Lessons Learned When Working with Coding Agents
Do not be afraid to start again. This applies on two levels.
First, check in code regularly. As soon as the agent begins to struggle or veer off track, revert to the last working checkpoint and restart with a clean context. Endless back and forth rarely works. Reset and begin again.
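The checkpoint-and-revert loop above is easy to run with plain git. A minimal sketch in a throwaway repository (file names and commit messages are illustrative):

```shell
set -e
# Throwaway repo to demonstrate the checkpoint workflow (assumes git is installed).
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "demo@example.com" && git config user.name "demo"

# 1. Commit as soon as the agent reaches a working state.
echo "working version" > app.py
git add -A && git commit -qm "checkpoint: app working"

# 2. Later, the agent veers off track with sweeping, unwanted changes.
echo "sweeping rewrite" > app.py

# 3. Revert to the last working checkpoint and restart with a clean context.
git reset --hard -q HEAD
cat app.py   # back to "working version"
```

In practice you would also run `git clean -fd` to remove untracked files the agent created, and `git reset --hard <commit>` to jump back past more than one bad step.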
Second, remember that you are the system architect, and the agents are your junior developers. It is your responsibility to design the overall architecture. You will not always get it right. Sometimes the only way to learn what works is to try. With coding agents readily available, it is easy to build prototypes. Start small, discard what does not work, and refine. Once you are confident in the end-to-end design, rebuild from scratch with a clear plan.
The Future of Coding with AI Agents
The future of coding is hotly debated. Some say learning to code is pointless or that software developers will soon be obsolete. I strongly disagree. What we are seeing is the next stage in the evolution of programming languages. Similar claims were made when compiled languages such as C and Pascal replaced assembly, and again as successive generations brought us closer to natural language.
Each leap has made coding more accessible, broadened what is possible, and increased the number of people able to participate. Coding agents are simply the next step, but on a much larger scale.