We Don’t Trust AI (and That’s a Good Thing)
8 Comments
Couldn't agree more. Using an AI code reviewer without knowing exactly what dataset it was trained on or how it flags vulnerabilities is just trading one security risk for another. Transparency in the 'black box' is going to be the biggest challenge for DevSecOps this year.
Strongly agree.
The music analogy is excellent: amplification does not make something better — it only makes it louder. AI works the same way. It amplifies the quality of the underlying system, whether that system is disciplined or fragile.
That is why I don’t think the goal should be “more trust in AI.” The better goal is healthier skepticism, supported by better governance.
From the Flamehaven perspective, governance is not about replacing judgment or certifying truth. It is about preserving custody: what claim was made, what artifact supports it, what changed, what was reviewed, what remains unresolved, and what should not be overstated.
Muted trust is not a weakness. In AI-assisted work, it may be the discipline that keeps the system honest.
This is a must-read for junior developers. There’s a temptation to use AI as a 'crutch' rather than an 'amplifier.' I like the Stack Overflow analogy: AI output should be the starting point for understanding, not the final 'Ctrl+V'. Maintaining skepticism is what actually forces us to learn how the code works under the hood.
This is a solid take.
The “AI as amplifier” point is what most people miss. It doesn’t fix bad thinking, it just speeds it up.
The Stack Overflow comparison also lands well. The right way to use AI isn’t copy-paste, it’s interpret → adapt → own the outcome.
Low trust with high usage actually feels like the correct balance here.
Nice piece, Steve. The amplifier metaphor hits home from the ML side too.
I spend my days building pipelines where AI-generated code can look perfect in dev and fall apart in production because it missed edge cases that only show up at scale. The DORA stats track with what I see on my teams: enthusiasm for the tool, healthy skepticism for the output.
One thing I'd add from the data engineering perspective: the "somewhat" trust level isn't just about code correctness. It's also about whether the AI understands why a particular pattern exists in your system. A lot of production code looks "wrong" in isolation but exists for reasons: data contracts, SLA requirements, downstream consumers. AI doesn't know that context. Neither does Stack Overflow, but nobody pretends it does.
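To make that concrete, here's a toy sketch (every name here is hypothetical, invented purely for illustration) of code that looks like it should be "cleaned up" but is deliberately shaped by a downstream contract:

```python
# Hypothetical example: choices that look redundant in isolation but
# exist because of a downstream data contract.
from dataclasses import dataclass


@dataclass
class Order:
    order_id: int
    amount_cents: int


def to_export_row(order: Order) -> dict:
    return {
        # Looks wrong: why stringify an int? Because the (invented) legacy
        # billing consumer parses order_id as text and breaks on integers.
        "order_id": str(order.order_id),
        # Looks wrong: amounts kept as integer cents rather than float
        # dollars, to keep exact reconciliation totals downstream.
        "amount_cents": order.amount_cents,
    }


if __name__ == "__main__":
    print(to_export_row(Order(order_id=42, amount_cents=1999)))
```

An AI reviewer (or a well-meaning human) that "simplifies" those odd-looking choices will pass every test in dev and still break the consumer it never knew existed.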
The QA model comparison is solid. Same muscle, different tool.