A practical guide to identifying common algorithm patterns for efficient problem-solving.


The Developer's Field Guide to Algorithm Patterns (and How to Stop Guessing)

Have you ever faced a coding problem on platforms like LeetCode or HackerRank and felt paralyzed? You read the prompt, understand what's being asked, but your mind goes blank. "Where do I even start?" you ask yourself. "Is this a loop? Two loops? Recursion?"

This feeling is incredibly common. We often try to solve algorithm problems as if each one were a unique, uncharted island. But the truth is, the vast majority of problems fit into a limited number of fundamental patterns.

What if, instead of tackling a problem from scratch, you could recognize its "family" and already have a good idea of the right strategy and tools to solve it? This article is exactly that: a field guide for identifying the most common algorithm patterns based on keywords and the problem's structure. Let's learn to stop guessing and start recognizing.

The Lens We'll Use: A Quick Review of Complexity (Big O)

Before we start pattern hunting, we need to adjust our lens. The goal of choosing the right pattern is almost always to find an efficient solution. To measure this efficiency, we use Big O Notation.

The objective isn't to measure the exact time in seconds, but rather to understand how the execution time grows as the size of the input (n) increases. If you double the size of the list, does the runtime double? Does it quadruple? Or does it barely change?

Remember the two golden rules of Big O simplification:

  1. Drop Constant Factors: A loop that iterates through n items and one that iterates through n/2 items have the same linear growth. Both are O(n).
  2. Keep Only the Dominant Term: An O(n²) operation followed by an O(n) operation is simplified to O(n²), because the quadratic term dominates the growth for large n.
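For example (a toy sketch just to make rule 2 concrete), a function that runs a nested loop followed by a single loop is still O(n²):

    def count_pairs_then_items(items):
        pairs = 0
        for a in items:           # O(n²): the nested loop dominates
            for b in items:
                pairs += 1
        total = 0
        for a in items:           # O(n): dropped when we simplify
            total += 1
        return pairs, total       # O(n² + n) simplifies to O(n²)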

With that in mind, let's explore the most common patterns.

The Field Guide: Identifying 8 Essential Algorithm Patterns

Sliding Window
  • When to Think About It: The problem involves a linear data structure (array, list, string) and asks for a contiguous sub-section (subarray or substring) that is optimal in some way (longest, shortest, max sum, etc.).
  • Keywords / Telltale Signs: "contiguous subarray," "substring," "longest/shortest length," "maximum/minimum sum," "contains k distinct elements."
  • Typical Problem: "Given an array of integers, find the maximum sum of any contiguous subarray of size k."
  • Typical Complexity: O(n).
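  • Sketch (a minimal illustration of the typical problem above; it assumes the array has at least k elements): keep a running window sum, adding the element that enters and subtracting the one that leaves.

    def max_sum_subarray_of_size_k(nums, k):
        window_sum = sum(nums[:k])                 # sum of the first window
        best = window_sum
        for i in range(k, len(nums)):
            window_sum += nums[i] - nums[i - k]    # slide the window by one position
            best = max(best, window_sum)
        return best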
Two Pointers
  • When to Think About It: The problem involves a sorted array (or one that can be sorted) and asks to find a pair or a triplet of elements that satisfy a condition. Usually, one pointer starts at the beginning and the other at the end, and they move toward each other.
  • Keywords / Telltale Signs: "sorted array," "find a pair/triplet," "target sum," "remove duplicates," "palindrome."
  • Typical Problem: "In a sorted array, find two numbers that add up to a value X."
  • Typical Complexity: O(n).
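  • Sketch (a minimal illustration of the typical problem; returns the indices of one valid pair, or None): start at both ends and move the pointer that brings the sum closer to the target.

    def pair_with_target_sum(sorted_nums, target):
        left, right = 0, len(sorted_nums) - 1
        while left < right:
            current = sorted_nums[left] + sorted_nums[right]
            if current == target:
                return left, right        # found a pair
            if current < target:
                left += 1                 # need a larger sum
            else:
                right -= 1                # need a smaller sum
        return None                       # no pair adds up to the target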
Backtracking
  • When to Think About It: The problem asks to find all possible solutions or a single solution that satisfies complex constraints by building the answer step-by-step. If a choice leads to a dead end, you "go back" (backtrack) and try another. It's a structured Depth-First Search (DFS).
  • Keywords / Telltale Signs: "find all combinations," "permutations," "subsets," "all possible paths," "puzzle," "Sudoku," "maze."
  • Typical Problem: "Given a set of numbers, find all unique subsets."
  • Typical Complexity: Exponential, such as O(2ⁿ) or O(n!).
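  • Sketch (a minimal illustration; it assumes the input numbers are distinct, otherwise duplicates would need extra handling): build each subset step by step, then undo the last choice and try the next one.

    def all_subsets(nums):
        result, current = [], []

        def backtrack(start):
            result.append(list(current))      # record the subset built so far
            for i in range(start, len(nums)):
                current.append(nums[i])       # choose
                backtrack(i + 1)              # explore
                current.pop()                 # un-choose (backtrack)

        backtrack(0)
        return result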
Dynamic Programming (DP)
  • When to Think About It: It's an optimization problem (find the maximum/minimum) or a counting problem (count the number of ways) where the solution to a larger problem can be built from the solutions of smaller, overlapping subproblems.
  • Keywords / Telltale Signs: "number of ways to...", "minimum/maximum cost path," "maximum/minimum value you can get," "maximum profit." If you find yourself solving the same subproblem multiple times in a recursive solution, it's a sign to use DP.
  • Typical Problem: "Knapsack Problem: given items with weights and values, what is the maximum value that can fit in a knapsack of capacity W?"
  • Typical Complexity: Polynomial, such as O(n²) or O(n*m).
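  • Sketch (a minimal illustration of the 0/1 knapsack; weights and values are assumed to be parallel lists and capacity a non-negative integer): dp[c] stores the best value achievable with capacity c.

    def knapsack_max_value(weights, values, capacity):
        dp = [0] * (capacity + 1)
        for w, v in zip(weights, values):
            # Go through capacities from high to low so each item is used at most once
            for c in range(capacity, w - 1, -1):
                dp[c] = max(dp[c], dp[c - w] + v)
        return dp[capacity]                   # O(n * W) time, O(W) space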
Breadth-First Search (BFS)
  • When to Think About It: The problem involves graphs or matrices and asks for the shortest path from one point to another, assuming all steps have the same cost. BFS explores the neighborhood in "layers."
  • Keywords / Telltale Signs: "shortest path," "fewest number of steps," "levels of a tree," "unweighted graph," "maze."
  • Typical Problem: "In a matrix with walls, find the shortest path from the top-left corner to the bottom-right."
  • Typical Complexity: O(V + E) (V = vertices, E = edges).
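  • Sketch (a minimal illustration; it assumes the grid uses 0 for free cells and 1 for walls, and counts the path length in cells): explore the grid layer by layer with a queue.

    from collections import deque

    def shortest_path_in_grid(grid):
        rows, cols = len(grid), len(grid[0])
        if grid[0][0] == 1 or grid[rows - 1][cols - 1] == 1:
            return -1                         # start or goal is a wall
        queue = deque([(0, 0, 1)])            # (row, col, path length so far)
        visited = {(0, 0)}
        while queue:
            r, c, dist = queue.popleft()
            if (r, c) == (rows - 1, cols - 1):
                return dist
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in visited:
                    visited.add((nr, nc))
                    queue.append((nr, nc, dist + 1))
        return -1                             # no path exists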
Depth-First Search (DFS)
  • When to Think About It: The problem involves graphs or matrices and asks to explore paths to their end, check if a path exists, or find connected components. It does not guarantee the shortest path. (Backtracking is a type of DFS).
  • Keywords / Telltale Signs: "find a path (any path)," "check if a path exists," "connected components," "cycles in a graph," "count islands."
  • Typical Problem: "Given a matrix of 1s (land) and 0s (water), count the number of islands."
  • Typical Complexity: O(V + E).
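  • Sketch (a minimal illustration of the typical problem; it modifies the grid in place and relies on recursion, so very large grids may hit Python's recursion limit): flood-fill every piece of land you find and count how many times you start a fill.

    def count_islands(grid):
        rows, cols = len(grid), len(grid[0])

        def sink(r, c):
            if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] != 1:
                return                        # off the grid, water, or already visited
            grid[r][c] = 0                    # mark this cell as visited
            sink(r + 1, c); sink(r - 1, c); sink(r, c + 1); sink(r, c - 1)

        islands = 0
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] == 1:
                    islands += 1
                    sink(r, c)                # sink the whole island before moving on
        return islands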
Heap (Priority Queue)
  • When to Think About It: The problem asks to find or keep track of the "top K" elements (the K largest or K smallest) from a collection of data, especially if the collection is very large or a stream.
  • Keywords / Telltale Signs: "k-th largest/smallest element," "top k," "median of a stream," "schedule tasks" (based on priority).
  • Typical Problem: "Find the 10 most liked tweets from a real-time feed."
  • Typical Complexity: O(n log k) to find the top K elements.
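  • Sketch (a minimal illustration with Python's heapq; it assumes k ≥ 1 and that "stream" is any iterable of comparable values): keep a min-heap of size k, so the smallest of the current top k is always at the root and can be replaced cheaply.

    import heapq

    def top_k_largest(stream, k):
        heap = []
        for value in stream:
            if len(heap) < k:
                heapq.heappush(heap, value)
            elif value > heap[0]:
                heapq.heapreplace(heap, value)    # drop the smallest, keep the newcomer
        return sorted(heap, reverse=True)         # O(n log k) overall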
Binary Search
  • When to Think About It: The problem involves finding an item in a sorted array. More advanced uses include optimization problems where you can "guess" an answer and check if it's valid, and the search space for answers is monotonic.
  • Keywords / Telltale Signs: "sorted array," "find the index of," "the smallest/largest value X such that condition P(X) is true."
  • Typical Problem: "Find the first number in a sorted array that is greater than or equal to Y."
  • Typical Complexity: O(log n).
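  • Sketch (a minimal illustration of the "boundary" variant from the typical problem; Python's bisect.bisect_left does the same job): search the half-open range [low, high) for the first index whose value is ≥ y.

    def first_at_least(sorted_nums, y):
        low, high = 0, len(sorted_nums)
        while low < high:
            mid = (low + high) // 2
            if sorted_nums[mid] >= y:
                high = mid                    # mid could be the answer; keep it in range
            else:
                low = mid + 1                 # everything up to mid is too small
        return low                            # == len(sorted_nums) if no such element exists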

Appendix: How to Identify Complexity in Your Code (The Quick Guide)

To help connect theory to practice, here are the most common code patterns and their complexities:

  • O(1) – Constant Time: The code takes the same amount of time regardless of the input size. There are no loops that depend on n.

    def get_first_element(my_list):
        return my_list[0] # Accessing an index is O(1)
    
  • O(log n) – Logarithmic Time: The algorithm cuts the problem in half with each iteration. Think Binary Search.

    The loop in a Binary Search throws away half the list with each step.
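    A minimal sketch of that loop (assuming a sorted list):

    def binary_search(sorted_list, target):
        low, high = 0, len(sorted_list) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_list[mid] == target:
                return mid
            if sorted_list[mid] < target:
                low = mid + 1      # discard the left half
            else:
                high = mid - 1     # discard the right half
        return -1                  # not found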
    
  • O(n) – Linear Time: A single loop that iterates over the input of size n.

    def find_max(my_list):
        max_val = my_list[0]
        for number in my_list: # The loop goes through each of the 'n' elements
            if number > max_val:
                max_val = number
        return max_val
    
  • O(n log n) – "n-log-n" Time: Usually associated with efficient sorting algorithms (Merge Sort, Quick Sort). Python's built-in sorted() function is a good example.
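    A minimal sketch (the work happens inside sorted(), not in our own code):

    def sort_numbers(my_list):
        return sorted(my_list)  # Timsort under the hood: O(n log n)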

  • O(n²) – Quadratic Time: A nested loop where both loops depend on the input size n.

    def find_pairs(my_list):
        pairs = []
        for i in my_list: # Outer loop (n)
            for j in my_list: # Inner loop (n)
                pairs.append((i, j))
        return pairs # Total: n * n = n²
    
  • O(2ⁿ) – Exponential Time: Often found in a recursive function that makes two (or more) recursive calls for each input, as in the naive Fibonacci below.

    def recursive_fibonacci(n):
        if n <= 1: return n
        # Each call branches into two more calls, roughly doubling the work at each level
        return recursive_fibonacci(n-1) + recursive_fibonacci(n-2)
    

Conclusion: Stop Memorizing, Start Recognizing

Mastering algorithms isn't about memorizing hundreds of specific solutions. It's about building a mental toolkit of patterns. When you encounter a new problem, instead of searching for a pre-made answer, try to identify its characteristics: is it about a contiguous subarray? Think Sliding Window. Does it ask for the shortest path? Think BFS. Does it need all possible combinations? Think Backtracking.

This field guide is your starting point. The more you practice with this pattern-recognition mindset, the faster and more intuitive your problem-solving process will become.


Great guide, Matheus, really clear on algorithm patterns. Which pattern do you find beginners grasp the quickest?

What I find most powerful about this field guide is the shift from memorizing solutions to recognizing problem patterns—almost like building a mental map of algorithms. How do you personally train that recognition skill so it becomes second nature during coding interviews or competitive programming?

Great question, Andrew! It made me reflect on my own learning path.
I think beginners grasp the concept of Two Pointers and Binary Search very quickly because the real-world analogies are so strong (two people walking towards each other, searching a phone book).
However, the first patterns people often implement are based on structures like Stacks and Queues. That was my experience in college. It wasn't until I read the book "Grokking Algorithms" (which I highly recommend) that patterns like Binary Search really clicked for me as practical problem-solving tools.
So, I'd say Two Pointers/Binary Search for the conceptual "aha!" moment, but patterns involving Stacks/Queues for the first hands-on implementation.

That's a fantastic question, Akmal, and you've perfectly captured the goal with the "mental map of algorithms" analogy!
For me, it boils down to a simple, consistent loop:

Daily Practice: I use the LeetCode daily challenges to force myself to solve at least one problem a day. Consistency is key.

Reinforce by Articulating: This is the most crucial part. After I solve it, I always write up and post my solution in the discussion forum. This act forces me to name the pattern and explain why it works. It's the difference between "getting it right" and "truly understanding it."

Active Keyword Scanning: While reading a problem, I'm actively looking for those keywords and signals from the "field guide." "Contiguous subarray"? My brain immediately flags "Sliding Window." "Sorted array" + "find a pair"? That's a "Two Pointers" alert.

It's that cycle of Practice -> Articulate -> Associate that, over time, turns pattern recognition into a reflex.
