How to Recognize Which Algorithm a Coding Interview Problem Needs

Originally published at www.codeintuition.io · 6 min read

I'd solved around 300 problems by the time my third senior screen wrapped up. The medium-difficulty problem I'd just bombed was one I could explain back to myself in 90 seconds once the call ended. The constraints were familiar. The technique was one anyone with my problem count had implemented before. What wouldn't fire, in the moment, was the connection between the constraints and the technique.

The instinct, when that happens, is to grind more problems. That instinct cost me about two months. The thing that actually moved the needle was something else.

What I figured out

  • Volume practice was building memory of specific problems. It wasn't building the skill that recognises which technique applies to a problem you haven't seen before.
  • The default LeetCode loop trains around recognition, not at it. Reading the tag before attempting skips the part of the work that interviews actually test.
  • The fix is small and unglamorous. Read the problem for two or three observable features before any code. Name the technique. Then code.
  • Conditions matter as much as content. The LeetCode UI is friendlier than any phone screen, and it takes conscious effort to practise under conditions that resemble the real thing.

Quick disclosure before I get into this: I built Codeintuition, a structured learning platform for coding interviews. The story above is the version of this realisation that keeps showing up across engineers prepping for senior screens, almost verbatim. The closing link goes to the longer version on my own blog.

Where the approach was breaking

For most of the prep arc, every stuck problem got handled the same way. Attempt for fifteen or twenty minutes, hit a wall, glance at the tag, read a solution, move on. The progression chart said the weekly count was climbing. The interview chart said the bar wasn't moving.

Tracing what was actually being learned per problem made the picture clearer. The implementation got learned each time. The shape of the solution got learned. What didn't get learned was the answer to a question nobody was asking: why is this problem the kind of problem that takes this technique? The tag and the solution skipped that step every single time.

In a real interview, the tag is gone. So the thing that came for free in practice (which technique to use) becomes the thing the candidate has to figure out from scratch under a clock. The freeze makes sense in retrospect. Nothing in the practice loop had trained for it.

What started working

The shift was small. Before opening the editor, the rule was: spend 30 to 60 seconds reading just the problem statement, trying to name the technique and the reasons. No code, no guessing. The output of the pass is a sentence: "this is a [technique] problem because [feature one], [feature two], [feature three]." If you can't write that sentence, you don't start coding. Go back to the constraints with a checklist of features for each technique and try again.

Concretely:

  • Before: open the problem, attempt for 20 minutes, glance at the tag if stuck. After: read the constraints first, name the technique in 60 seconds, only then attempt.
  • Before: read solutions to learn the implementation. After: read solutions only after attempting, and look for which features in the statement point at the technique.
  • Before: practise one technique in batches (15 sliding window problems back-to-back). After: mix techniques in each session so the constraints have to be read before knowing what to use.
  • Before: practise with the title and difficulty visible. After: cover the title with a sticky note, ignore the difficulty tag.

The first time, this protocol feels slow. The second week, it feels slow but useful. By the fourth week, the features start firing on their own as the problem statement is read.

What that looked like on the next problem

The problem that finally landed for me was Longest Increasing Subsequence. The O(n^2) version was familiar. The patience-sorting O(n log n) version was familiar too. What wasn't trustworthy was the read of when LIS was the right tool versus when something else was.

The session with the new approach started with the constraints. Three features fell out:

  1. A single sequence, not two. That ruled out edit-distance-family techniques right away.
  2. The optimisation target was the length of an ordered subsequence. Order of the original sequence had to be preserved.
  3. The transition between elements was a comparison, not a sum or a count. "Strictly greater than the previous kept element" is a binary check.

Once those three features were written down, the technique was obvious. The simpler O(n^2) version was enough because the problem didn't require the faster one:

def lis(nums):
    if not nums:
        return 0
    # dp[i] = length of the longest increasing subsequence ending at index i
    dp = [1] * len(nums)
    for i in range(len(nums)):
        for j in range(i):
            # Extend any earlier subsequence whose last kept element is smaller
            if nums[j] < nums[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)
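A quick sanity check on a familiar input:

lis([10, 9, 2, 5, 3, 7, 101, 18])  # 4; one longest chain is 2, 3, 7, 101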

The implementation wasn't new. The change was that the technique got reached for because the features matched, not because the exact problem had been memorised.

A second problem the same week

The same week brought Russian Doll Envelopes, which usually trips people up because it doesn't look like an LIS problem on the surface. The problem asks for the maximum number of nested envelopes given pairs of width and height.

Reading for features: a single sequence (after sorting), a length to maximise, a strict comparison between consecutive kept elements (one envelope must fit inside the next). That's LIS, run on the heights after sorting by width with a tiebreak rule. The reframe took two minutes once the read was for features instead of surface details.
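A minimal sketch of that reframe, assuming the envelopes arrive as (width, height) pairs; the function name is mine, and it uses the patience-sorting O(n log n) LIS mentioned earlier rather than the O(n^2) version:

from bisect import bisect_left

def max_envelopes(envelopes):
    # Sort by width ascending; break ties by height descending so two
    # envelopes of equal width can never both end up in the same chain.
    envelopes.sort(key=lambda e: (e[0], -e[1]))
    # LIS on the heights via patience sorting: tails[k] holds the smallest
    # possible last height of an increasing chain of length k + 1.
    tails = []
    for _, height in envelopes:
        i = bisect_left(tails, height)
        if i == len(tails):
            tails.append(height)
        else:
            tails[i] = height
    return len(tails)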

That was the moment the recognition drill paid for itself. Two surface-different problems, same three features, same technique. Far transfer.

Why this works (the science to dig into later)

The learning sciences have language for this. Transfer of learning splits into near transfer (a problem that resembles one already practised) and far transfer (a problem that doesn't resemble anything practised, but whose structure can be read).

Volume practice mostly trains near transfer. Whether far transfer is reliably teachable is a debate the research hasn't fully settled. What the evidence does support is the specific intervention this protocol is built around. Explicit instruction in when and why a method applies, not just how to execute it, produces more transfer than practice alone.

In retrospect, the "how" was everywhere on the internet. The "when" and "why" were the parts no resource in the practice loop bothered to teach explicitly.

What to try if you're in the same spot

If problem 200 feels harder than problem 50, the bottleneck probably isn't volume. It's a recognition skill that volume alone doesn't build for everyone. A few things worth trying:

  1. For each technique you've practised, write down two or three observable features that signal it applies. Sliding window: a contiguous range, a condition maintained across the window, an optimisation on length (a sketch of how those features show up in code follows this list). Two pointers: a sorted array, a search for a pair or triple, pointers moving in from the ends. Monotonic stack: a per-element answer, a search in one direction for the next or previous element, a greater-than/less-than comparison. Keep the checklists short.
  2. On every new problem, run the 60 second feature pass before any code. Write a sentence: "this is a [technique] problem because [features]." If you can't write the sentence, don't code yet.
  3. Mix techniques across sessions. Three problems from three different techniques will train recognition in a way that fifteen problems from one technique won't.
  4. Cover the title and the difficulty before reading the constraints. Many problem names give away the technique, and the assistance is exactly the kind of help that disappears in an interview.
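To make item one concrete, here is a minimal sketch of what the sliding window features look like once named. The problem is a made-up stand-in (longest subarray of non-negative numbers with sum at most a limit), not the worked example from the blog, and the function and parameter names are placeholders:

def longest_subarray_within(nums, limit):
    # Contiguous range: a window bounded by two indices, left and right.
    # Condition across the window: the running sum stays <= limit
    # (assumes non-negative values, so shrinking the window always helps).
    # Optimisation on length: track the widest valid window seen so far.
    best = 0
    window_sum = 0
    left = 0
    for right, value in enumerate(nums):
        window_sum += value
        while window_sum > limit:
            window_sum -= nums[left]
            left += 1
        best = max(best, right - left + 1)
    return best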

This is a practice protocol, not a product. It runs against any problem bank.

Where it's still hard

The recognition drill doesn't fix everything. There's a separate axis prep tends to skip, which is the conditions the practice runs under. The default LeetCode loop has the tag, the difficulty, the discussion thread, and no clock. None of those are present in a real interview. Even with recognition trained, the first time the tag disappears the read gets harder. So the last few weeks of any prep cycle, run every practice session under interview-shaped conditions: clock running, tag covered, discussion section closed.

That's the version of the lesson worth keeping. It wasn't the problem count. It was what was happening on each problem, and what nobody had ever trained on purpose.

I originally posted this on my own blog, with the feature checklists for six more techniques and a longer worked example for variable sliding window.

What's a problem that finally made a technique click for you, where the explanations you'd read before hadn't quite landed?
