Have you ever zoomed into a photo on your phone, shrunk an image before sending it on WhatsApp, or stretched a design to make it fit a screen?
That simple action is powered by scaling, and in machine learning it's one of the most important transformations we use.
So what is scaling?
At its core, scaling is just about resizing:
Zoom in → make things larger
Zoom out → make things smaller
Stretch → grow in one direction, but not the other
In everyday apps, it’s what allows smooth resizing of images, videos, and even charts. But in ML, the same idea goes much deeper.
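In maths terms, all three moves are just multiplication by a scaling matrix: equal factors zoom uniformly, unequal factors stretch one direction. A minimal NumPy sketch (the point and factors are made up for illustration):

```python
import numpy as np

# A 2D scaling matrix: multiplies x by sx and y by sy.
def scaling_matrix(sx, sy):
    return np.array([[sx, 0.0],
                     [0.0, sy]])

point = np.array([2.0, 3.0])

zoom_in  = scaling_matrix(2.0, 2.0) @ point   # uniform zoom in  -> [4. 6.]
zoom_out = scaling_matrix(0.5, 0.5) @ point   # uniform zoom out -> [1.  1.5]
stretch  = scaling_matrix(3.0, 1.0) @ point   # stretch x only   -> [6. 3.]
```

Same matrix, different factors: that one idea covers zooming, shrinking, and stretching.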
Scaling in Machine Learning:
Images – Before we feed photos into a model, we often resize them to a standard size (say 224×224 pixels). Without this, the model wouldn’t know how to handle inputs of different dimensions.
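In practice you'd do this resizing with a library call (Pillow, OpenCV, torchvision, etc.), but here's a toy nearest-neighbor version on a made-up 4×4 "photo" to show the idea of forcing every input to one standard size:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbor resize of an (H, W) array -- a minimal stand-in
    for the resizing step a real pipeline does with a library call."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h   # which source row each output row samples
    cols = np.arange(out_w) * w // out_w   # which source column each output column samples
    return img[rows[:, None], cols]

img = np.arange(16).reshape(4, 4)   # a tiny 4x4 "photo"
fixed = resize_nearest(img, 2, 2)   # shrink to the standard size
print(fixed.shape)                  # (2, 2)
```

Whatever size the original image is, the model always sees the same fixed shape.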
Data Augmentation – Imagine training an AI to recognize cats. If we only give it one image size, it’ll struggle when it sees a cat zoomed in or zoomed out. By scaling images during training, we make the AI more robust.
Feature Normalization – Think of a dataset with age (values like 18–70) and income (values like 30,000–1,000,000). Without scaling, the model might treat income as “more important” just because the numbers are bigger. Scaling puts everything on a level playing field.
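One common way to do this is standardization: per column, subtract the mean and divide by the standard deviation (scikit-learn's `StandardScaler` does the same thing). A sketch on a made-up three-row dataset:

```python
import numpy as np

# Toy dataset: columns are [age, income] -- wildly different ranges.
X = np.array([[18.0,    30_000.0],
              [35.0,   120_000.0],
              [70.0, 1_000_000.0]])

# Standardize each column: subtract its mean, divide by its std.
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_scaled.mean(axis=0))  # ~[0, 0]: both features now live on the same scale
print(X_scaled.std(axis=0))   # ~[1, 1]
```

After this, a one-unit change in scaled age and scaled income carry comparable weight, so neither feature dominates just because of its raw magnitude.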
Why should you care?
Because scaling is one of those hidden building blocks of ML that quietly powers almost everything:
- The photos you zoom in and out of.
- The recommender systems learning from balanced data.
- The neural networks that need well-scaled inputs to actually converge.
Next time you resize a photo or pinch-zoom on your phone, remember: you’re applying the same concept that sits at the core of how ML works.
If you found this useful, drop a like, share with a friend, or follow me for more practical breakdowns of Maths for Machine Learning.