How I'm Coding These Days
To say things have changed a lot over the last year in software engineering would be like saying “Minneapolis gets chilly in the winter.” Technically true. Not even close to capturing the full intensity of the experience.
Twelve months ago (or ~5,000 AI years), I’d sit down to work on a feature and start the same way I always had: open an IDE and write something like:
from x.foo import y
Today I sit down (or, even crazier, pull out my phone) and fire up Claude Code. And I start by saying something like:
“Okay, we’re working on feature X. I’m going to paste some requirements. Before we write code, I want you to figure out the blast radius, ask me questions, and propose a plan.”
That’s the shift for me: I’m still responsible for the code I ship, but the interface is now conversation-first. Less “type code until it compiles.” More “co-design, plan, then ship in tight loops.”
So… how did I get here? And perhaps more importantly: how am I navigating this new world of agent-enabled, “AI-native” coding without turning the code I ship into a treehouse of horrors?
The short version of the journey
Prior to late 2024, I’d dabbled with AI coding here and there. I had access to some of the early stuff (including GitHub Copilot in its early days), and for a long time my verdict was: cool party trick, mildly annoying coworker.
The "assistance" was intrusive and jarring. I’d start writing a function and suddenly I’d have 10 lines of suggestions I didn’t ask for, half of which was wrong, but in a way that was almost right, which is the worst kind of wrong. Every now and again it nailed it, and I’d have a “holy sh*t” moment. But, not often enough to justify changing my whole workflow.
Then in January of this year, I started a greenfield project that leaned heavily on AI as a feature, and I decided to lean back into AI coding. That’s when things started to click.
As winter turned into spring, I spent time with Cursor and then finally got deep into Claude Code. My timing with Claude Code lined up with the release of Sonnet 4, and suddenly the “AI coding” experience stopped feeling like a novelty and started feeling like an actual superpower.
Not because it wrote perfect code (it doesn’t). But because it could:
- understand enough of the codebase to be useful,
- help me reason about changes,
- and get me 70% of the way to a plan fast.
As the year went on, I was able to engender some enthusiasm around AI engineering across the eng team. As people adopted Claude Code, and as the models kept improving, Honor embraced "AI-native" engineering as a first-class workflow. Claude Code has become fully baked into how we build.
That’s not to say there aren’t tradeoffs or dangers. There are. But my current hypothesis is:
We might become worse coders and better engineers.
Coding trivia matters less. Architecture, systems thinking, product judgment, and clear communication matter more. The job is shifting under our feet. I don’t fully know where it’s headed. I just know pretending it’s not happening is a great way to get run over.
How I use Claude Code
Here’s the workflow I keep coming back to. It’s not fancy. It’s just consistent.
My default loop (the thing I actually do)
- Co-design / plan (explicitly “no code yet”)
- Pick the smallest slice that moves the feature forward
- Implement that slice (touch 1–2 files if possible)
- Run the tests / lint / typecheck (or at least something; see the sketch below this list)
- Read the code + diffs
- Claude review + human review, then repeat
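For that “run the tests / lint / typecheck” step, “at least something” usually means a tiny script that runs whatever checks the repo already has. Here’s a minimal sketch, assuming a Python project with a hypothetical pytest / ruff / mypy stack; swap in whatever your repo actually uses:

```python
# Hypothetical check runner: run tests, lint, and typecheck, stopping at the
# first failure so the problem gets fixed before the next slice of work.
import subprocess
import sys

CHECKS = [
    ["pytest", "-q"],        # tests
    ["ruff", "check", "."],  # lint
    ["mypy", "."],           # typecheck
]

for cmd in CHECKS:
    print(f"$ {' '.join(cmd)}")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)
```

The specific tools don’t matter. What matters is that nothing the agent writes moves forward without some mechanical check passing first.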
Lots of planning (and I say that out loud)
I almost always start in a co-design phase. I’m spending way more time up front scoping, understanding blast radius, exploring how things currently work, and forming a plan.
I’ll literally tell Claude: “We’re not coding yet.”
Then I ask it to:
- ask me questions,
- push back on my assumptions,
- propose options with tradeoffs,
- and write a file-by-file plan (plus a narrative of how the changes flow).
That plan is the difference between “agent as power tool” and “agent as chaos goblin.”
Work in small, manageable chunks
One of the easiest ways to get burned is to turn an agent loose and have it edit 15–20 files in one go. I’m not even talking about PR size. I’m talking about the change-set per iteration.
If we have a solid plan, we can work through it a file or two at a time. That gives me space to:
- review diffs,
- ask questions,
- adjust the approach,
- and actually understand what’s changing.
If I let it go wild across a dozen files, my brain turns into a browser with 47 tabs open running on an underpowered machine (if you know you know).
Maybe you can hold 20 changes in your head at once. If so, kudos. I cannot. So I don’t pretend.
I use dictation
This was an awkward transition. I felt like an idiot at first.
But I bought SuperWhisper, committed to two weeks, and now I can’t imagine not doing it. I can type pretty fast. I can talk really fast. And more importantly: I say more helpful context than I’d ever bother typing.
Did my wife roll her eyes so hard she briefly saw the back of her skull when she heard me talking to Claude? Almost certainly.
Am I faster and better because of it? Probably.
I read a lot of code (because I still ship it)
Ultimately, I’m responsible for what goes out the door. Obvious, but worth saying.
Unfortunately, I have found myself reviewing a PR and thinking: “The person who pushed this does not know what’s in it.” That is not a good place to be.
So I read the code. I walk through it. I ask Claude to explain decisions. I check that we followed the plan. I’m trying to avoid “write-only code”—the kind of code that’s technically correct but so hard to understand that the only safe change is adding another layer on top.
Claude reviews every PR
Recently, at Honor, I set up a GitHub Action that reviews every PR in all our repos when it’s opened. I was already doing this locally, and the feedback was usually solid. Yes, it throws false positives sometimes. I’ll take “annoying but cautious” over “LGTM! ship it” any day.
The review gives me a starting point and frees up my attention for the things I still do better: architecture, scaling implications, and human readability.
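This isn’t our exact setup, but if you want the flavor of the local version, here’s a minimal sketch using the Anthropic Python SDK; the model name, diff range, and prompt are placeholders:

```python
# Hypothetical local PR review: feed the branch diff to Claude and print its notes.
import subprocess

import anthropic  # assumes `pip install anthropic` and ANTHROPIC_API_KEY in the env

# Diff of the current branch against main (adjust the range to your workflow).
diff = subprocess.run(
    ["git", "diff", "main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder: use whatever current model you prefer
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": "Review this diff. Flag bugs, risky changes, and anything "
                   "that breaks the surrounding code's conventions:\n\n" + diff,
    }],
)
print(response.content[0].text)
```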
Git worktrees (only when it’s actually worth it)
If a scope of work can be split into discrete chunks, or I want multiple approaches to choose from, I’ll spin up a few worktrees and have agents work independently. Then I’ll open PRs back into the main feature branch.
The key is: do this for a reason.
Doing it just so you can brag you have three agents coding at once is not worth it. In the worst case, you’ll buy yourself a pile of gnarly merge conflicts and a weird sense of accomplishment about your new job as “merge conflict wizard.”
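When it is justified, the mechanics are simple: one worktree per independent chunk, each on its own branch off the shared feature branch. A rough sketch (the branch and chunk names here are made up):

```python
# Hypothetical setup: one git worktree per independently mergeable chunk,
# so each agent works in its own checkout on its own branch.
import subprocess

FEATURE_BRANCH = "feature-x"            # the shared feature branch
CHUNKS = ["api-layer", "db-migration"]  # discrete, independently mergeable slices

for chunk in CHUNKS:
    branch = f"{FEATURE_BRANCH}-{chunk}"
    path = f"../worktrees/{branch}"
    # `git worktree add -b <branch> <path> <start-point>` creates the branch and
    # checks it out in a separate directory alongside the main checkout.
    subprocess.run(
        ["git", "worktree", "add", "-b", branch, path, FEATURE_BRANCH],
        check=True,
    )
```

Each directory gets its own agent session, and each branch comes back as a PR into the feature branch.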
Bottom line
Things have changed quickly and will keep changing quickly. What works well today may be laughably out of date tomorrow. All I know for sure is that the role of software engineer is changing. Where we end up... 🤷🏻