Agentic coding has compressed the familiar pacing of software development: you no longer get those routine stretches of just wiring things together to catch your breath. Writing code by hand (trad coding, as Twitter likes to call it) forced me to respect the natural ebb and flow of problem solving and settle into a steadier, more sustainable rhythm.

Doing tasks manually builds the context you need for later decisions, because you have time to process everything along the way and construct a mental model of the project's structure. Working with LLMs means you cold start more often than not, since the code just appears in front of you (like relying on the tattoos from Memento to understand what's going on).

Managing agentic workflows turns programming into a string of variable psychological rewards (dare I say, gacha or slot machine mechanics?) followed by cognitive fatigue, instead of the deep, focused intellectual labor that drew many developers into this industry in the first place.

Changing How We Write Code

LLMs now generate orders of magnitude more code than any single human could ever properly debug or reason through. You find yourself signing off on an overwhelming amount of raw code just to keep up with the output that's expected these days.

You get there, inevitably and reluctantly, by ceding operational control to the model. That isn't wrong, but we have to be honest with ourselves: we're mostly just trusting the raw power of the tool to get the job done.

It is honestly surprising how well this works most of the time, until you hit the edge cases where it falls apart completely. That leaves you in a strange limbo: you keep using it for the productivity boost, but you can never actually trust it to run fully unsupervised.

The Illusion of Infinite Scale

The fatigue really scales once you find yourself juggling several agents at once, checking their work and fixing their missteps just to keep the development loop moving forward. The fundamental problem is that the work drains you through constant judgment and relentless oversight.

This process demands more attention, more context switching, and far more decisions per hour. Making constant architectural, big-picture decisions while overseeing the work of a cracked junior dev is fundamentally harder than executing standard programming tasks yourself.

All of these continuous choices demand more mental endurance: pausing to review generated output and decide on the next step takes time and energy, and it breaks the natural rhythm and momentum you build when doing the implementation yourself.

Navigating Decision Fatigue

You quickly hit the limit of how much a single human can feasibly keep track of with modern LLMs. Decision fatigue is, in my opinion, the next invisible friction point for developers.

I assume there will always be a pretty hard upper bound on what I can achieve as long as I have to stay in the loop and verify things, at least until LLMs surpass humans in every respect, including review and verification.

You might get only four or five intensely focused hours before your brain is fully cooked, compared to eight to ten normal productive hours. Some of my friends are already burnt out from this cycle. They rarely say it out loud, but I can tell by watching how they approach their current projects.

Throttling the Agentic Loop

A lot of ambitious engineers think they just need to spin up more agents and ship more code to outwork everyone else, but MOAR is probably not the answer.

We have to acknowledge that while these automated systems can comfortably run around the clock, their human supervisors can't sustain the cognitive load. Trying to is an almost guaranteed path to burnout.

Better review and verification loops sound like the obvious answer, but who actually builds them: you, or the LLM? Would you really trust a verification system built by an LLM when you already don't fully trust the code it just generated for you?

That immediately raises the next issue: how do you verify that the verification system itself works, and how do you guarantee it targets the right parts of your system and actually tests them correctly?

I honestly don't know the answer to any of that, but I am just thankful a lot of very smart people are working on this.