Vibe Coding Isn’t the Problem - Vague Thinking Is
Why AI-assisted development exposes leadership, not recklessness

The term “vibe coding” has become a kind of shorthand insult. It’s usually used to describe a caricature: someone who doesn’t really know how to build software asking an AI agent to “make me an app” and then blindly shipping whatever comes back.
If that’s what someone means by vibe coding, then the criticism is fair. That approach is reckless.
But that definition quietly assumes something important: that the person doing the “vibing” doesn’t understand the problem space, architecture, or tradeoffs involved.
Because when someone does understand what they’re building, the same tools produce very different outcomes. In that context, “vibe coding” isn’t guesswork or abdication of responsibility. It’s accelerated execution driven by clear intent, constraints, and constant feedback.
The difference isn’t the tool.
It’s the clarity of the person using it.
What My Claude Code Workflow Actually Looks Like
A typical Claude Code session for me does not start with code generation. It starts with planning - deliberately and explicitly.
I explain what I’m trying to accomplish in as much detail as possible: the goal, where the feature fits in the system, the constraints that matter, and the outcomes that would be unacceptable. I’m not trying to trick the agent into guessing correctly. I’m trying to remove ambiguity before anything is built.
Then I wait.
The best possible next step isn’t a wall of code. It’s a plan - or even better, questions. When Claude asks good questions, that’s a signal that it understands the shape of the problem. When it proposes an approach, I review it the same way I would review a design proposal from a real development team.
If the plan is solid, we move forward.
If there are gaps, I fill them in.
If it’s heading in the wrong direction, I stop it and redirect or restart entirely.
None of this feels novel to me. It feels exactly like how features have been planned and executed with real teams for decades.
Which is why the fear around this approach is so confusing. Most engineering leads are not spelling out exact lines of code for their teams. They’re setting direction, reviewing plans, and correcting course early. This is the same responsibility model, just compressed into a tighter feedback loop.
The Difference People Keep Missing
What many critics call “vibe coding” is really just hands-off prompting - vague requests, minimal oversight, and blind trust in the output.
That is not what I’m doing.
The defining difference is feedback. I stop the agent when it drifts. I correct assumptions early. I restart sessions when the framing is wrong - which is no longer a meaningful cost now that these tools are fast and capable.
This is how teams already work. Requirements are imperfect, misunderstandings happen, and plans evolve. The only thing that has changed is that the “team” is virtual and responds instantly. The leadership responsibility hasn’t disappeared - it’s become more obvious.
Why Planning Mode Exists (and Why It Matters)
Occasionally, I won’t explain a requirement fully. Occasionally, the AI will misinterpret what I meant. That isn’t a failure of the tool, and it isn’t evidence of danger. That’s exactly why planning mode exists.
The agent tells you what it plans to build before it builds it.
If you’ve ever led a project, this should feel familiar. You give requirements. The team proposes an approach. You review it, refine it, or reject it. Then execution begins.
Skipping that step isn’t “AI slop.”
It’s bad engineering discipline, regardless of whether the code is written by a person or a machine.
A Concrete Example: Hardcoded Values and Real Verification
A recent change in BlackOps made this distinction very clear.
I replaced hardcoded credit values throughout the app with a centralized, configurable system. It wasn’t a flashy feature, but it touched a lot of surface area. During the process, Claude repeatedly made a reasonable assumption: that once the main logic was updated, the hardcoded values were effectively gone.
They weren’t.
At multiple stages, I explicitly asked whether any hardcoded credit values were still being used. Claude answered confidently more than once that they had been removed, and more than once, that turned out not to be entirely true. Values were still referenced in edge cases, helper functions, or older paths that hadn’t been exercised yet.
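To make that concrete, here’s a minimal sketch of the kind of drift I mean. The module name and numbers are invented for illustration - this isn’t the actual BlackOps code, just the shape of the problem:

```typescript
// Hypothetical sketch, not the real BlackOps code: names and values are invented.
// The refactor's goal: one configurable source of truth for credit costs.
export const creditConfig = {
  draftGeneration: 5,
  imageRender: 20,
} as const;

// Main path: correctly updated to read from the config.
export function chargeForDraft(balance: number): number {
  return balance - creditConfig.draftGeneration;
}

// The kind of stale path that survives a refactor: an older helper nobody has
// exercised recently, still carrying the literal the config was meant to replace.
export function estimateDraftCost(drafts: number): number {
  return drafts * 5; // should be creditConfig.draftGeneration
}
```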
The important part isn’t that the agent missed them. The important part is that I expected this to happen.
Because I understand how systems evolve, I kept asking. I asked again after refactoring. I asked again after the cleanup passes. I asked for proof, not reassurance. Eventually, I asked Claude to verify usage across the entire codebase and show me where values were still hardcoded. Only after that did I consider the work complete.
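What “show me where values are still hardcoded” looks like as a checkable question is roughly this - a sketch built on assumed paths and literals, not the actual verification that was run:

```typescript
// Hypothetical verification sketch: the directory, file filter, and suspect
// literals are assumptions. The point is getting evidence back, not reassurance.
import { readdirSync, readFileSync, statSync } from "fs";
import { join } from "path";

// Old magic-number expressions the refactor should have eliminated
// everywhere except the config module itself (assumed values).
const SUSPECT_LITERALS = ["* 5", "* 20"];

function findStaleReferences(dir: string, hits: string[] = []): string[] {
  for (const entry of readdirSync(dir)) {
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) {
      findStaleReferences(path, hits);
    } else if (path.endsWith(".ts") && !path.endsWith("creditConfig.ts")) {
      const source = readFileSync(path, "utf8");
      if (SUSPECT_LITERALS.some((literal) => source.includes(literal))) {
        hits.push(path);
      }
    }
  }
  return hits;
}

// The refactor is done when this prints an empty list, not when someone says it's done.
console.log(findStaleReferences("./src"));
```

The specifics matter less than the shape: a question that returns evidence instead of a confident “yes.”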
This is exactly how I would work with a human team.
If a developer told me, “Yes, that’s removed everywhere,” I wouldn’t take that on faith for a cross-cutting change. I’d ask how they verified it. I’d ask what they searched for. I’d ask what they tested. The fact that the “developer” in this case was an AI agent doesn’t change that responsibility.
The difference is speed, not standards.
The risk wasn’t that the agent made a mistake. The risk would have been not knowing to challenge it - or not caring enough to demand proof.
Why the Criticism Feels Out of Date
Most of the loud criticism around AI-assisted development isn’t grounded in how these tools are actually being used today. It’s anchored to early impressions - shallow demos, brittle outputs, and genuinely bad results from months ago.
There’s also fear layered on top of that, and that fear isn’t entirely irrational. This shift does replace people. Just not the people many expect.
If you’re not actively learning how to work with these tools - how to direct them, constrain them, and evaluate their output - your value to a team is going to erode quickly. Not because AI is magical, but because it amplifies the effectiveness of people who already understand systems and exposes those who never really did.
The Myth of “All AI Code Is Slop”
One of the most common claims I hear is that all AI-generated code is inherently low quality.
That argument collapses under minimal scrutiny. Most large codebases today are not written end-to-end by seasoned leads. They’re written by junior developers, offshore teams, and contributors implementing tickets they didn’t design. The quality varies widely, and everyone knows it.
In many cases, AI-assisted code - when guided by an experienced engineer - is cleaner, more consistent, and better reasoned than what’s already shipping. Pretending otherwise doesn’t protect quality. It just avoids an uncomfortable comparison.
This Shift Is Different
Every major tooling shift I’ve lived through made developers faster. None of them fundamentally threatened the role.
This one does.
What’s being replaced isn’t “developers” in general. What’s being replaced is vague execution - the ability to hide behind process, momentum, or someone else’s judgment. The developers most exposed right now are the ones who were never really steering the ship to begin with.
The role of the lead hasn’t disappeared.
But the tolerance for unclear thinking has.
The Real Takeaway
Working with AI agents has made one thing very clear to me: this isn’t a new category of work. It’s the same work, expressed differently.
AI doesn’t replace the responsibilities of a real development team. It mirrors them.
You still have to explain the problem clearly. You still have to review the proposed approach. You still have to catch incorrect assumptions. You still have to verify that cross-cutting changes were actually made everywhere they needed to be made. The only thing that’s changed is the speed at which feedback arrives.
AI doesn’t replace teams. It behaves like one - and exposes the difference between leadership and wishful thinking.
That similarity is why experienced engineers tend to have better outcomes with these tools. They already know how to lead a team. They already know that confident answers are not the same thing as verified ones. And they already know to ask the same question multiple times, from different angles, until the evidence matches the claim.
When people say “AI is dangerous,” what they’re often reacting to isn’t autonomy - it’s unreviewed work. But that risk has always existed. AI just removes the buffer where vague thinking used to hide.
Working with AI hasn’t changed my standards as a lead. It’s revealed whether I ever really had them.
The uncomfortable truth is that AI doesn’t lower the bar for engineering discipline. It raises it. If you were clear, methodical, and accountable before, these tools make you faster. If you weren’t, they make that obvious very quickly.
That’s not a tooling problem.
It’s a leadership one.
I wrote this post inside BlackOps, my content operating system for thinking, drafting, and refining ideas - with AI assistance.
If you want the behind-the-scenes updates and weekly insights, subscribe to the newsletter.


