Code Reviews Without Drama: How Great Teams Give Feedback That Actually Helps
5 Comments
Good read. Empathy in reviews really changes the vibe. How do you handle really stubborn reviewers?
Thank you @[Ionuț H. Stan], that’s a great question. Empathy definitely helps, but it doesn’t always solve the “stubborn reviewer” problem on its own.
What’s worked for me is separating style disagreements from real risks as early as possible. A lot of stubbornness comes from implicit preferences being treated like objective issues. Making that explicit (“is this a correctness concern or a style preference?”) often defuses things quickly.
If it’s still stuck, I try to shift the conversation from opinions to constraints:
- What problem are we trying to avoid?
- What trade-off are we optimizing for?
That usually reframes the discussion away from “my way vs your way” into something more objective.
When that doesn’t work, I’ve found it helpful to:
- propose a concrete alternative and ask for specific concerns
- or escalate lightly: pull in a third opinion, not to “win,” but to unblock
And honestly, sometimes the best move is to just align on a team convention and move on. Not every disagreement is worth the cycle time.
The underlying pattern I’ve noticed: stubborn reviews often signal unclear standards or missing context, not just difficult people.
Curious if others lean more toward escalation, or just default to team conventions in those cases.
Working with distributed systems has changed how I think about code reviews. In highly decoupled architectures, reviews aren’t just about catching issues, they’re one of the few places where shared context actually gets rebuilt.
In cloud-native setups (Kubernetes, microservices, infra-as-code), the intent behind changes is often invisible in the code. The real reasoning tends to come from constraints, scaling expectations, or historical decisions that never made it into documentation.
That raises a question for me: where should that context live during a review?
Should reviewers be expected to already understand the system-level picture?
Or is it on the author to bring that context into the PR, even at the cost of verbosity?
I’ve been leaning toward the latter—treating PRs as a space to briefly explain intent, trade-offs, and constraints, especially for infrastructure or concurrency-heavy changes. It’s a bit more effort upfront, but it seems to make reviews smoother and more meaningful.
One more thing I’ve been wondering about: in async teams, how do you keep reviews from feeling transactional? Even solid feedback can come across as blunt without small signals of tone or intent.
Curious how others handle this in complex or multi-team environments.
Thank you @[ElenChen], this really resonates, especially the emphasis on framing, intent, and keeping reviews collaborative rather than judgmental.
On your points, a few things have worked well in my experience:
- Context should be pulled in by the author, not assumed by the reviewer
In theory, reviewers “should” know the system, but in practice, that doesn’t scale. Especially in distributed or multi-team environments, missing context is one of the biggest sources of weak or frustrating feedback.
The article’s point about “explaining the why, not just the what” is key here. I’ve found that when authors include intent, constraints, and trade-offs upfront, reviews shift from nitpicking to actual engineering discussions.
- PRs as lightweight design artifacts
I strongly agree with treating PR descriptions as more than summaries. Not full design docs, but at least:
- what problem this solves
- why this approach was chosen
- what risks or trade-offs exist
This aligns with the idea that reviews aren’t just about correctness, they’re about knowledge sharing and team alignment. Without that, reviewers are guessing.
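One way to make those three bullets the default rather than a habit: GitHub (and GitLab) will pre-fill every new PR from a template file checked into the repo. A minimal sketch, with headings matching the bullets above (the exact section names and comments are just suggestions, not anything from the article):

```markdown
<!-- .github/pull_request_template.md -->
## Problem
<!-- What problem does this change solve? Link the issue if one exists. -->

## Approach
<!-- Why this approach? What alternatives were considered and rejected? -->

## Risks / trade-offs
<!-- Known limitations, performance or operational impact, follow-up work. -->
```

Because the template ships with the repo, it nudges authors toward “explaining the why” without anyone having to police PR descriptions.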
- Questions > judgments (but with a caveat)
Turning feedback into questions is powerful: it lowers defensiveness and invites discussion.
That said, I think there’s a balance:
- Use questions when exploring or unsure
- Be direct when something is clearly wrong or risky
Otherwise reviews can become too soft or ambiguous.
- Make feedback directional, not just granular
One thing I think is often missing (and that the article hints at) is review summaries.
Instead of only line comments, leave a top-level note like:
“This is solid, just a few readability tweaks” or “I think the approach might need rethinking because of X.”
A summary like this helps avoid the “maze of comments” problem and gives the author a clear sense of where they stand.
- Keeping reviews human (especially async)
This is the hardest part, and also the most underrated.
A few small things make a big difference:
- softening tone (“might”, “what do you think about…”)
- acknowledging what’s good
- separating preference from correctness
As the article points out, reviews sit at the intersection of ownership, identity, and time pressure, so tone isn’t cosmetic, it’s structural.
Thanks for the article. I totally agree with you, but sometimes, even after applying all this, you really need to be patient and have a kind of self-control. I’ve experienced an environment where some of my team members were really easily irritated by the slightest review message and would just ask for approval.
I used to go over to them and try to speak calmly, despite the obvious anger they were directing at me, to make my point. With time, it became easier to deal with them.
Just to say that reviews should be written according to people’s character too, if you know them well, and sometimes it’s better to go to them directly for a discussion.
This is such an important addition, thanks for sharing it @[Waffeu Rayn].
You’re absolutely right: even with the “right” review style, emotional context still matters. Code reviews don’t happen in a vacuum, and some environments (or personalities) require extra patience and awareness.
I especially like your point about adapting to people. In an ideal world, good practices would be enough, but in reality, understanding how someone receives feedback can make the difference between collaboration and friction.
Going directly to talk in person (or on a quick call) is often underrated. Written comments can unintentionally amplify tension, while a calm conversation can defuse it in seconds and build trust over time, exactly like you experienced.
What you said about it becoming easier over time is key too. Consistency in tone and intent tends to “retrain” how people perceive reviews. It’s slow, but it works.
Really appreciate you bringing the human side of this into the discussion.
This hits hard—especially the idea that code reviews are more about communication than code.
One thing I’ve noticed is that bad reviews don’t just hurt in the moment—they slowly train people to avoid ownership. When every PR feels like judgment, devs either go silent or play it safe instead of thinking deeply.
Your point about “explain the why” is probably the highest ROI habit. Without it, reviews become instructions; with it, they become learning loops.
I’d add one more layer:
great teams treat code reviews as shared responsibility, not gatekeeping. The goal isn’t “approve or reject,” it’s co-authoring better code.
Also loved the “questions over judgments” part—that single shift can turn a defensive thread into a productive discussion instantly.
In the end, speed doesn’t come from rushing reviews—it comes from trust + clarity compounding over time.
This is a fantastic way to frame it, @[Prasoon Jadon], you’ve captured the long-term impact better than I did in the article.
That idea that bad reviews “train people out of ownership” is spot on. It’s subtle, but over time it changes behavior: people optimize for approval instead of understanding. And once that happens, quality and learning both take a hit.
I also love how you described “explain the why” as a learning loop. That’s exactly the intention, turning reviews from a checkpoint into a feedback system that compounds knowledge across the team.
Your addition about shared responsibility is an important one. “Co-authoring better code” is such a strong mental model. It shifts the dynamic from evaluation to collaboration, which is where the real leverage is.
And yes, questions over judgments is one of those small changes that has an outsized effect. It keeps curiosity in the conversation instead of triggering defensiveness.
That last line you wrote might be the perfect summary of the whole topic: speed comes from trust and clarity, not pressure.
Really appreciate this perspective, this is exactly the kind of thinking that builds great teams.
Please log in to add a comment.