The Dangers of AI: What I Wrote 5 Years Ago Is Becoming Reality


A few days ago, I was listening to an episode of Diary of a CEO featuring Roman Yampolskiy. He spoke candidly about his fears of AI, fears that many of us in tech have quietly carried:

  • There are no clear guardrails around how AI is being developed.
  • The world is rushing head-first into adoption without enough
    reflection.
  • The risks (bias, privacy loss, misuse) are as real as the benefits.

As I listened, I remembered something. Back in my final year of undergrad, about 5–6 years ago, I wrote a paper on the dangers of AI. Out of curiosity, I dug it up. To my surprise, many of the concerns I outlined back then are exactly what global leaders and researchers are debating today.

What I Predicted Back Then

In my paper, I highlighted several key risks:

  • Job Displacement

Automation has the potential to eliminate millions of jobs, not just manual labor, but knowledge work too. What happens to society when machines can outperform humans at both?

  • Privacy Invasion

AI systems thrive on data. But unchecked data collection means surveillance, targeted manipulation, and the erosion of personal privacy at a scale we’ve never seen before.

  • Algorithmic Bias

Machines learn from us. And since our data reflects human bias, discrimination can get baked into algorithms, with AI amplifying, not eliminating, inequalities.

  • Autonomy Beyond Human Control

The scariest risk I wrote about was AI systems evolving toward autonomy, making decisions, rewriting rules, or acting in ways humans can’t easily reverse.

At the time, these points felt like thought experiments. Today, they’re showing up in headlines, boardrooms, and government hearings.

The Pace of Change

What struck me is not just that these risks are real, but how fast they’ve become real.

5 years ago, “AI” for most people meant chatbots or movie-like robots. Today, we’re talking about:

  • AI replacing customer service, coders, and creative roles
  • Generative AI building content at a speed humans can’t match
  • Governments scrambling to regulate something they don’t fully
    understand

The excitement hasn’t gone away. But the urgency of control, of responsibility, is finally catching up.

Why This Matters

As builders, engineers, and entrepreneurs, we’re often focused on what we can make. But the harder question is:

“Should we build this? And if we do, how do we prevent harm?”

It’s not enough to just celebrate breakthroughs. The more powerful our tools become, the greater the responsibility to anticipate consequences.

Frameworks. Guardrails. Ethical design. Transparency. These can’t be afterthoughts anymore; they need to be core features of AI itself.

Final Thought

Writing that paper in my final year felt like academic curiosity. Today, it feels like a personal reminder: technology always moves faster than society’s ability to absorb it.

The question we face isn’t whether AI will transform the world. It’s whether we can shape that transformation responsibly, before it shapes us in ways we can’t undo.

Do you think we’re moving fast enough with AI regulation, or are we already too late?


