When we write algorithms, one of the key things to think about is how much memory our solution needs. You might have heard terms like “in-place” and “extra space” — but what do they really mean?
Let’s break it down simply.
In-Place Algorithms
An in-place algorithm changes the input data structure directly without using significant extra memory.
- You can think of it as modifying things where they already are.
- Typically, it uses only a few extra variables → O(1) space complexity.
Example (conceptual pseudocode):
lastNonZero = 0
for i from 0 to n-1:
    if arr[i] != 0:
        swap arr[i] with arr[lastNonZero]
        lastNonZero = lastNonZero + 1
Here, we aren’t creating any new array, just rearranging existing elements.
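To make that concrete, here is a minimal runnable sketch in Python using the classic two-pointer technique (the function name move_zeros_in_place is my own, chosen for illustration):

def move_zeros_in_place(arr):
    # last_non_zero marks the slot where the next non-zero element belongs.
    last_non_zero = 0
    for i in range(len(arr)):
        if arr[i] != 0:
            # Swap the non-zero element forward; zeros drift to the right.
            arr[i], arr[last_non_zero] = arr[last_non_zero], arr[i]
            last_non_zero += 1
    return arr

print(move_zeros_in_place([0, 1, 0, 3, 12]))  # [1, 3, 12, 0, 0]

Only the two index variables are extra, so the space cost stays O(1) no matter how large the input is.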
Real-world analogy:
Imagine rearranging books on a shelf by swapping their positions one pair at a time, without pulling them all onto a second shelf. That’s in-place.
Extra Space Algorithms
An extra space algorithm creates a new data structure (like another array, list, or hash map) to store results.
This usually means O(n) or more space, depending on how much new memory we allocate.
Example (conceptual pseudocode):
create newArr
for each element in arr:
    if element != 0:
        append element to newArr
append zeros to newArr until it matches the length of arr
Here, we’re using a new array to store the result. This is often easier to write and clearer to read, but it consumes more memory.
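A minimal Python sketch of the same idea (again, the function name is my own, for illustration):

def move_zeros_extra_space(arr):
    # Build a brand-new list holding the non-zero elements in order.
    new_arr = [x for x in arr if x != 0]
    # Pad with the zeros we skipped so the lengths match.
    new_arr.extend([0] * (len(arr) - len(new_arr)))
    return new_arr

print(move_zeros_extra_space([0, 1, 0, 3, 12]))  # [1, 3, 12, 0, 0]

The new list grows with the input, so the space cost is O(n).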
Which One Should You Use?
It depends!
- If memory is tight (like in embedded systems or large-scale data), in-place is better.
- If clarity or simplicity matters more and you can afford extra memory, extra space is fine.
There’s often a trade-off between space and code readability, and understanding this balance helps you write efficient code at scale.
In my Medium article, I’ve explained this concept using a real coding problem, “Move all zeros to the end of an array”, step by step with visuals and a dry run. Read it here on Medium.