AI Coding Agents: Notes on the Verification Loop


Fast generation is useful. Blind acceptance is expensive.

I now assume AI output is a draft until proven otherwise.

Minimal loop I rely on

  1. Generate patch
  2. Ask "what can break?"
  3. Test contracts + edge paths
  4. Add observability where failure would hurt
  5. Merge only when behavior is explainable
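Steps 2 and 3 of the loop can be sketched concretely. Everything below is hypothetical: `chunk` stands in for any AI-generated patch, and the edge paths listed are the kind step 2 ("what can break?") should surface.

```python
# Hypothetical example: treating an AI-generated helper as a draft.
# `chunk` is a stand-in for any generated patch, not a real library function.

def chunk(items, size):
    """Split `items` into lists of at most `size` elements."""
    if size <= 0:
        raise ValueError("size must be positive")  # edge path drafts often omit
    return [items[i:i + size] for i in range(0, len(items), size)]

# Step 2: "what can break?" -> empty input, size > len(items), non-positive size.
# Step 3: test the contract and the edge paths, not just the happy path.
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
assert chunk([], 3) == []               # empty input
assert chunk([1], 10) == [[1]]          # size larger than input
try:
    chunk([1], 0)
except ValueError:
    pass                                # invalid size is rejected, not silent
```

The point is not this particular helper; it is that each numbered step maps to something you can actually run before merging.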

No loop, no trust.

What usually slips through

  • happy-path-only logic
  • subtle boundary/contract mismatch
  • thin error handling
  • retries/timeouts not thought through
  • "looks right" code that is hard to operate
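The retries/timeouts item is the one I see most. A minimal sketch of the handling that generated code tends to skip, with `flaky` as a hypothetical stand-in for any transiently failing call:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(); retry transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                   # surface the failure, don't swallow it
            time.sleep(base_delay * 2 ** attempt)

# Hypothetical flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

assert with_retries(flaky) == "ok"
assert calls["n"] == 3                  # two retries actually happened
```

Generated code usually gets the first call right and the third retry wrong; this is exactly the "thin error handling" failure mode in the list above.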

PR gate that keeps quality up

Before merge, the author should answer:

  1. What invariant must hold?
  2. Which test checks it?
  3. What is still untested?
  4. How will production tell us it is broken?

If these are unclear, the patch is not done.
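Questions 1 and 2 are easiest to answer when the invariant is written down as an executable check. A hypothetical example, with `dedupe_keep_order` standing in for any patch under review:

```python
# Question 1: what invariant must hold?
#   Every input element appears in the output exactly once,
#   and first-occurrence order is preserved.
# Question 2: which test checks it? This one.

def dedupe_keep_order(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_invariant(items):
    out = dedupe_keep_order(items)
    assert set(out) == set(items)        # nothing lost, nothing invented
    assert len(out) == len(set(items))   # each element exactly once
    for a, b in zip(out, out[1:]):       # first-occurrence order preserved
        assert items.index(a) < items.index(b)

check_invariant([3, 1, 3, 2, 1])
check_invariant([])
check_invariant(["a"])
```

Question 3 then becomes concrete ("unhashable elements are untested") and question 4 points at the metric or log line that would catch a regression in production.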

Sticky takeaway

Use AI coding agents inside a verification system, not as a replacement for one.


Friendly Copyright & Sharing Reminder by Tushar Mohan.

Short excerpts may be quoted for commentary, reviews, or academic purposes; please do not republish or remix substantial portions without written permission.

Unless someone else is credited, all articles, code snippets, images, and other material on this site are © Tushar Mohan, 2026.