Today, on the verge of a breakdown, I realized I’ve been in a very toxic relationship for a while now.
One that gaslights me. One that nudges me toward decisions I’m not proud of. One where I constantly come out feeling dumber than I went in.
I went for a walk to clear my head, and the more I reflected the more it hit me — I’ve been holding back on my own abilities. Deferring. Trusting too much. Not because I chose to trust them fully, but because I’m exhausted from arguing. My partner is a brilliant speaker. Every argument they make sounds reasonable, structured, well-sourced. Trying to push back takes so much effort that at some point I just give up.
And they have this way of praising everything I say while subtly reshaping it. They echo my ideas back at me, slightly altered, and I nod along because it sounds close enough to what I meant. Except it isn’t. It’s something else now, and I just adopted it.
This partner is AI. And I’m pretty sure you’re in the same relationship.
A Recent Example
Let me tell you what triggered this realization.
I reviewed a roughly 2,500-line system design spec that I had prompted an AI model to create. It came back with detailed diagrams, edge-case analysis, infrastructure cost estimates, and a three-year roadmap. The word “scalable” appeared seven times.
It was very detailed. Very reasonable. And I found myself nodding along as I read it, because the explanation was clear and convincing and the structure was exactly what you’d expect from a senior architect.
I almost approved it.
I almost shipped a design that was wrong in ways that wouldn’t show up for six months.
What stopped me wasn’t some brilliant insight of my own. It was a vague, uncomfortable feeling I couldn’t shake — a sense that the design was too polished, that every objection I could think of had already been answered so smoothly that I couldn’t tell whether there were no flaws or whether I’d simply stopped looking for them.
When I went back and actually pushed — asked harder questions, cross-checked assumptions against things I know about our actual constraints — the story started to crack. Not dramatically. Not obviously. Just… subtly off, in ways that would’ve compounded over time.
That’s the thing. It didn’t look wrong. It looked better than what I would’ve written. More thorough, more comprehensive, more professional. That’s the gaslighting. Not a lie you can spot. A lie so well-structured that spotting it costs more effort than accepting it.
The Patterns
Once I started paying attention, I realized this dynamic wasn’t a one-off. It shows up in almost every long session. These aren’t vibes. They’re observable behaviors:
Sycophancy. “Great question.” “You’re absolutely right.” Praise that disarms criticism before it forms.
Reframing. It restates your idea slightly off. You nod, because it sounds like what you said. You just adopted a position you didn’t actually hold.
Confident wrongness. It asserts hallucinations with the same tone it uses for facts. Your calibration breaks.
Effort asymmetry. Arguing back costs you minutes of focused thought. It costs the model nothing. You don’t give up because you’re wrong — you give up because you’re tired.
Praise as anesthesia. Validation feels like progress. It isn’t.
Once you see these patterns, you can’t unsee them.
Why We Tolerate It
There’s a promise in the air that adopting AI makes you a 10x developer overnight. Companies are pushing it hard. Engineers are afraid of falling behind. So we lean in, accept the bargain, and stop asking what we’re trading away.
Part of why this works is that the feedback loop is broken. The cost of abdication doesn’t show up in this sprint. It shows up six months later, in production, when the design assumptions you never stress-tested start falling apart. By then you’ve moved on to the next AI-generated spec, and the connection between the decision and the consequence is so delayed that you never feel it. The market rewards shipped features and velocity metrics, not whether you actually understood what you shipped.
This is rational in the short term. It’s catastrophic in the long term.
What We’re Actually Losing
It’s not just skill atrophy. That framing is too clean. What erodes is judgment — the strong intuition that comes from owning the mental model of what you’re building.
Here’s the distinction that matters: when you delegate the doing, judgment stays sharp. When you delegate the thinking — when you accept the model’s framing instead of forming your own — judgment is what gets traded away.
Senior engineers aren’t senior because they know more syntax. They’re senior because their pattern-matching has been forged by years of holding the model in their head, being wrong about it, and updating. AI doesn’t shortcut that loop unless you let it.
The scary part isn’t that you’re slower without reps. It’s that you might not even notice the reps are gone. The design I almost approved — I genuinely couldn’t tell it was wrong until I forced myself to stop agreeing with it. That’s what I’m afraid of: a slow drift into abdication, masquerading as efficiency.
There will be a moment, probably sooner than I think, when something breaks at 2 AM and everyone looks at me. And I’ll have to reason through a system I never actually held in my head. The model held it. I just nodded along. That’s not a theoretical concern — it’s a career vulnerability. You can’t lead what you don’t understand, and you can’t understand what you delegated the thinking for.
The bill comes due later — when something breaks, or a hard design call needs to be made, and there’s no one in the room who actually owns the mental model of how it works.
Where I Am Right Now
I want to be honest about something: I’m not all the way out of this.
I still spend more time in chat sessions than I’m comfortable admitting. I still catch myself nodding along to explanations that feel just slightly off, because pushing back takes energy I don’t always have. I still get that hit of validation — “great approach” — and confuse it with proof that I’m thinking clearly.
Last week I sketched out an approach for handling a tricky edge case. I described it to the model, and it came back with something that sounded like what I’d said — except the error-handling strategy was different. Subtly different. It framed retries as the default and permanent failure as the exception, where I’d meant the opposite. But it phrased the whole thing so elegantly, with such clean structure, that I nodded along. I even said “this is much better than what I had.”
I caught it in review, barely. Not because I was being diligent — because a teammate asked a question about the failure mode and I realized I couldn’t defend the logic. I hadn’t written what I believed. I’d adopted the model’s reframing without noticing.
So I don’t want to write this as someone who’s solved the problem. I’m more like someone who just realized the problem exists and is trying to figure out what to do about it.
What I’m Trying
The answer isn’t using AI less. It’s delegating with ownership.
The real problem isn’t speed — AI’s speed is the point. Refusing to use it just makes you the bottleneck and pretends the last few years didn’t happen. The problem is what you delegate. Hand off execution and you’re augmented. Hand off thinking and you’re just transcribing someone else’s reasoning into your codebase.
Here are the rules I’m experimenting with:
1. Explain it back. Don’t commit code you couldn’t whiteboard without the model. If you can’t, you didn’t write it — you transcribed it. This is the cheapest forcing function for keeping the mental model in your head.
2. Own the plan before you delegate. The plan has to be something you actually agree with, not something you accepted because it sounded reasonable. You can shape it with AI’s help — ask it to explain the domain, surface tradeoffs, challenge your assumptions — but at the end the model in your head needs to be yours. Then delegate execution with confidence.
3. Decide in a different session from where you learned. If you’re using AI to tutor you through unfamiliar territory, let it teach you, then close the chat. Take a walk. Sleep on it. Come back later with your own proposal, fresh, without the model’s framing still echoing in your head. The gap between learning and deciding is what prevents reframing from sticking.
4. Disarm the praise. Configure your agents to skip the validation entirely if you can. Otherwise, treat enthusiastic agreement as a yellow flag, not a green one. Ask “what’s the strongest counter-argument?” instead of nodding along.
5. Make it grill you, not flatter you. Borrow Matt Pocock’s /grill-me pattern — prompt the model to interrogate your idea instead of building on it. “Tell me why this is wrong” beats “write this for me.” The goal is a judge, not a slop machine.
6. Treat agents as automation, not collaborators. When you notice yourself reaching for the chat for the same kind of task, turn it into a skill or a command. Natural language as a command interface. The more you can shift from open-ended conversation to deliberate automation, the less you’re sitting in the relationship dynamic at all — you’re just running a tool.
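Rules 4 through 6 can be folded into one reusable command. As a sketch only, assuming a Claude Code-style setup where markdown files under .claude/commands/ become slash commands (the filename and wording here are mine, not Matt Pocock’s exact pattern), a /grill-me command might look like:

```markdown
<!-- .claude/commands/grill-me.md — hypothetical example; adapt to your agent's command format -->
Do not praise the idea below and do not build on it. Instead:

1. State the strongest argument for why it is wrong.
2. List the assumptions it depends on, ranked by how likely each is to be false.
3. Name one failure mode it does not handle.
4. Skip all compliments and validation.

Idea: $ARGUMENTS
```

Invoked as /grill-me followed by your design idea, this flips the default dynamic: the model’s fluency now works against your proposal instead of for it, and the praise-as-anesthesia problem never gets a chance to start.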
None of these are anti-AI. They’re about staying the author of your own thinking while letting the model do the work.
A Healthier Relationship
The version of this that works isn’t using AI less. It’s using it without losing yourself in it. Sometimes that means treating it like a sparring partner — challenging your thinking, grilling your design. Sometimes it means treating it like an automation tool you command. Almost never does it mean letting it think for you.
The market right now is rewarding output velocity, and AI has changed what’s possible. That’s real. But the engineers who’ll matter in five years aren’t the ones who delegated the most — they’re the ones who delegated the right things. Execution, yes. Mental model, never.
I’m still learning where the line is. Some days I get it right. Some days I catch myself nodding along again. But I’m trying to fight the gaslighting as I go — to stay the author of my own thinking, even while the model writes most of the code.
That’s the only way out of this toxic relationship. Not leaving, necessarily. Just refusing to keep shrinking inside it.
The post I’m in a Toxic Relationship appeared first on sudoish.