A few days ago, I was listening to an episode of The Diary of a CEO featuring Roman Yampolskiy. He spoke candidly about his fears about AI, fears that many of us in tech have quietly carried:
- There are no clear guardrails around how AI is being developed.
- The world is rushing head-first into adoption without enough reflection.
- The risks (bias, privacy loss, misuse) are as real as the benefits.
As I listened, I remembered something. Back in my final year of undergrad, about 5–6 years ago, I wrote a paper on the dangers of AI. Out of curiosity, I dug it up. To my surprise, many of the concerns I outlined back then are exactly what global leaders and researchers are debating today.
What I Predicted Back Then
In my paper, I highlighted several key risks:
- Job Displacement
Automation has the potential to eliminate millions of jobs: not just manual labor, but knowledge work too. What happens to society when machines can outperform humans at both?
- Privacy Erosion and Surveillance
AI systems thrive on data. But unchecked data collection means surveillance, targeted manipulation, and the erosion of personal privacy at a scale we’ve never seen before.
- Bias and Discrimination
Machines learn from us. And since our data reflects human bias, discrimination can get baked into algorithms, with AI amplifying, not eliminating, inequalities.
- Autonomy Beyond Human Control
The scariest risk I wrote about was AI systems evolving toward autonomy: making decisions, rewriting rules, or acting in ways humans can’t easily reverse.
At the time, these points felt like thought experiments. Today, they’re showing up in headlines, boardrooms, and government hearings.
The Pace of Change
What struck me is not just that these risks are real, but how fast they’ve become real.
Five years ago, “AI” for most people meant chatbots or movie-like robots. Today, we’re talking about:
- AI replacing customer service agents, coders, and creative roles
- Generative AI producing content at a speed humans can’t match
- Governments scrambling to regulate something they don’t fully understand
The excitement hasn’t gone away. But the urgency of control and responsibility is finally catching up.
Why This Matters
As builders, engineers, and entrepreneurs, we’re often focused on what we can make. But the harder question is:
“Should we build this? And if we do, how do we prevent harm?”
It’s not enough to just celebrate breakthroughs. The more powerful our tools become, the greater the responsibility to anticipate consequences.
Frameworks. Guardrails. Ethical design. Transparency. These can’t be afterthoughts anymore; they need to be core features of AI itself.
Final Thought
Writing that paper in my final year felt like academic curiosity. Today, it feels like a personal reminder: technology always moves faster than society’s ability to absorb it.
The question we face isn’t whether AI will transform the world. It’s whether we can shape that transformation responsibly, before it shapes us in ways we can’t undo.
Do you think we’re moving fast enough with AI regulation, or are we already too late?