🤖 AI Coding Agents and the Primacy of Fast Feedback Loops

June 28, 2025

Something shifted in my development workflow last month. I’m no longer writing code directly—instead, I’m orchestrating up to three AI coding agents working in parallel. It’s like going from playing a solo instrument to conducting a small orchestra. The less micro-managing each agent needs, the more I can accomplish simultaneously. (I’m eyeing a future where I can reliably manage 4+ agents, but that’s a story for another post.)

This transformation got me thinking about feedback loops—those fundamental mechanisms that have quietly driven every major advancement in software development. As it turns out, they’ve never been more critical than they are right now with AI agents.

The Evergreen Nature of Feedback Loops

If you trace the lineage of breakthrough developments in our field, you’ll find feedback loops at the heart of each one:

The Agile movement didn’t revolutionize software development because of stand-up meetings or sprint planning. It succeeded because it optimized for faster customer feedback by shipping in smaller iterations. The entire philosophy boils down to: get feedback sooner.

Typed programming languages give developers immediate feedback on program correctness. That red squiggly line under your code? That’s a feedback loop preventing a runtime error hours or days later.
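
To make that concrete, here's a minimal sketch using Python type hints; any typed language gives the same kind of early signal:

```python
def total_cents(prices: list[float]) -> int:
    """Sum a list of prices and return the total in integer cents."""
    return round(sum(prices) * 100)

print(total_cents([19.99, 5.01]))  # OK: prints 2500

# A type checker (mypy, pyright, or your editor's red squiggly) rejects the
# next call at check time, long before any runtime failure:
# total_cents("19.99")  # error: incompatible type "str"; expected "list[float]"
```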

Extreme Programming (XP), particularly its emphasis on unit testing, emerged to give developers rapid feedback on whether their changes work correctly. Behavior-Driven Development (BDD) extended this to ensure we’re building the right thing, not just building the thing right.

Modern deployment practices—observability, blue-green deployments, feature flags, testing in production—all exist to get high-fidelity feedback quickly from real production environments. CI/CD pipelines? Just automated feedback loops on steroids.

The Speed Hierarchy

Not all feedback loops are created equal. Speed matters:

  • A fast compiler beats a slow compiler
  • An interpreter that catches errors at parse time beats one that waits until runtime
  • Linters and type checks beat unit tests
  • Unit tests beat slow integration tests
  • Integration tests beat manual QA
  • Manual QA beats user bug reports

The pattern is clear: faster feedback means faster iteration, which means faster delivery of working software.

How Developers Know They’re Done

Good developers have always had strategies for knowing when their work is complete:

  • Some practice TDD or BDD, letting tests define “done”
  • Others rely on comprehensive observability to measure effectiveness in production
  • The best combine multiple approaches, creating layered feedback mechanisms

What they don’t do is YOLO changes into production and wait for users to report bugs. Making users your error detection system isn’t a strategy—it’s an abdication of responsibility.

Enter AI Coding Agents

Here’s where things get interesting. Today’s AI models are mind-blowing in their capabilities, but they still make tons of basic mistakes. Their superpower isn’t perfection—it’s the speed at which they can make and correct those mistakes.

Give an AI agent a solid feedback loop that tells it when it’s done, and it will eventually solve the problem. It might take more iterations than a human developer, but it completes those iterations in seconds, not hours.

A typical AI agent coding session looks like this:

*generates code* → "Oops, doesn't compile"
*regenerates* → "Still doesn't compile"
*tries again* → "Now it breaks these tests"
*iterates* → "Different tests failing"
...
*finally* → "All tests passing! Done!"
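
In automated form, that session is just a loop: run the feedback mechanism, hand the failures back to the model, and repeat until green. Here's a minimal sketch in Python, assuming a pytest suite as the feedback loop; `propose_change` is a hypothetical stand-in for whatever model or agent API you use:

```python
import subprocess

def propose_change(task: str, feedback: str) -> None:
    """Hypothetical stand-in: ask your model or agent API to edit the code."""
    raise NotImplementedError("wire this to your model or agent API")

def run_checks() -> tuple[bool, str]:
    """Run the project's feedback loop (here, a pytest suite) and capture output."""
    result = subprocess.run(
        ["pytest", "-x", "-q"],          # fail fast to keep the loop tight
        capture_output=True, text=True, timeout=300,
    )
    return result.returncode == 0, result.stdout + result.stderr

def agent_loop(task: str, max_iterations: int = 10) -> bool:
    """Generate, check, and feed failures back until the tests say 'done'."""
    feedback = ""
    for _ in range(max_iterations):
        propose_change(task, feedback)   # ask the model for the next patch
        passed, output = run_checks()
        if passed:
            return True                  # the feedback loop declares us done
        feedback = output                # failures become the next prompt
    return False                         # out of budget: escalate to a human
```

Notice that every pass through this loop pays the full cost of `run_checks`. That's why the latency of your test suite, not the model, sets your iteration speed.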

This is the key insight: Your development cycles with AI agents are dominated by the latency of your feedback loops. The agent’s mistakes don’t matter nearly as much as how quickly it can detect and correct them.

The Challenge of AI Confidence

AI agents have an interesting quirk—they often think they’re done before they actually are. I’ve lost count of the times an agent has confidently declared “The implementation is complete!” only to have the tests reveal multiple gaps in the requirements.

This means you need to be explicit and stern about what constitutes “done.” Tell them exactly which tests to run, what output to expect, and how to verify their work is complete. Think of it as programming the programmer.
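
What does "explicit and stern" look like in practice? Here's a hypothetical sketch of done-criteria you might paste into the task prompt or a project instructions file; the commands and test names are placeholders, so substitute your own:

```text
Definition of done for this task:
1. `ruff check .` reports zero errors.
2. `pytest tests/test_billing.py -q` passes; it should finish in under 60 seconds.
3. The full suite, `pytest -q`, passes; expect roughly 5 minutes.
Do not declare the task complete until all three commands succeed.
If any command runs longer than 10 minutes, assume it is hung and report back.
```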

Enabling Low-Touch Agents

Automating the ability to verify that code meets the definition of done is paramount for running what I call “low-touch” agents—agents that can work asynchronously without constant supervision. This is what enables managing multiple agents in parallel.

If you don’t yet have a test that tells the AI agent when its job is done, creating that test should be the agent’s first task. It’s like TDD, but the test doesn’t need to be a permanent fixture in your test suite—it just needs to exist as a clear success criterion.
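
As a sketch, suppose the task is a hypothetical `slugify` helper. Before the agent writes any implementation, it writes a test like this, and "done" means the test passes (the module and function names here are illustrative):

```python
# test_done_criterion.py: a throwaway test that defines "done" for one task.
from slugify_util import slugify  # hypothetical module the agent is asked to build

def test_slugify_defines_done():
    # The agent iterates until every assertion passes; the file can be
    # deleted afterward if it doesn't belong in the permanent suite.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere  ") == "spaces-everywhere"
    assert slugify("UPPER and lower") == "upper-and-lower"
```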

The Path Forward

Fast feedback loops have never been more important than they are right now, in the era of AI coding agents. The same mechanisms that make human developers productive are what enable AI agents to complete tasks successfully without close supervision.

Here’s what software teams should be doing right now:

  1. Always tell AI agents how to verify their work. Be explicit: How do they know they’re done? How do they detect regressions? What tests should pass? Which tests are fast and which are slow? How long should a test run take before the agent should assume something is broken? Without this information, they’re coding blind.
  2. Optimize your feedback loops aggressively. Teams still using dynamically typed languages should seriously consider migration. The type system feedback alone will dramatically improve AI agent productivity.
  3. Automate your PR feedback. If you’re leaving the same stylistic comments on pull requests repeatedly, encode them as linter rules (see the sketch after this list). AI agents can follow rules perfectly—they just need the rules to exist.
  4. Invest in faster test suites. That 30-minute integration test suite? It’s now a bottleneck for every AI agent iteration. Speed matters more than ever.
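
On point 3, the encoding step can be as small as a config entry. A hypothetical sketch, assuming a Python codebase linted with Ruff; the rule selection is illustrative, so use whatever linter your stack already has:

```toml
# pyproject.toml
[tool.ruff.lint]
# Recurring review comments, encoded as rules the agent can run locally:
select = [
    "E",   # pycodestyle errors (layout, whitespace)
    "F",   # pyflakes (unused imports, undefined names)
    "I",   # isort (import ordering)
]
```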

The teams that thrive in the AI-augmented development era won’t be those with the smartest engineers or the best AI models. They’ll be the ones with the fastest, most comprehensive feedback loops. Because when your agents can iterate at machine speed, the only limiting factor is how quickly they can determine that they’re on the right track.

Fast feedback loops aren’t just a nice-to-have anymore. They’re the foundation that makes AI coding agents actually productive rather than just impressive demos. And that difference? That’s what separates teams shipping at startup speed from those still wondering why their vibe coding experiments aren’t delivering results.



Written by Johnathan Gilday, a software developer based in Red Bank, NJ, USA.

© 2025, Johnathan Gilday