AI is making phishing attacks personalised and smarter

Originally published at www.linkedin.com

For example:

Traditional phishing: “Dear Customer, your bank account is locked. Click here to reset.”

Targeted phishing: “Hi Nikhilesh, this is Rahul from Winsaga Edutech’s partner team. Here’s the updated contract you requested yesterday; please review and sign.” (actually a malware link)

This isn’t random spam.

Hackers are now using AI to:

Scan your digital footprint (LinkedIn posts, company websites, even old resumes)

Craft emails that sound like they were written for you

Bypass the usual “red flag” checks we rely on
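Those “red flag” checks are typically simple heuristics. As a minimal sketch of what such a check looks like (all names, domains, and thresholds here are hypothetical, chosen only to mirror the example above):

```python
import difflib
from email.utils import parseaddr

# Hypothetical allow-list of domains the organisation trusts.
TRUSTED_DOMAINS = {"winsagaedutech.com"}

def red_flags(from_header: str, link_domains: list[str]) -> list[str]:
    """Return classic phishing red flags found in an email's sender and links."""
    flags = []
    display_name, address = parseaddr(from_header)
    sender_domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""

    # Red flag 1: display name invokes a trusted org, but the actual
    # sending domain is not on the trusted list.
    claimed = display_name.lower().replace(" ", "")
    if any(d.split(".")[0] in claimed for d in TRUSTED_DOMAINS) \
            and sender_domain not in TRUSTED_DOMAINS:
        flags.append(f"display name impersonates a trusted org ({sender_domain})")

    # Red flag 2: linked domains that are near-lookalikes of a trusted
    # domain (e.g. a dropped letter or swapped TLD).
    for link in link_domains:
        link = link.lower()
        if link in TRUSTED_DOMAINS:
            continue
        for trusted in TRUSTED_DOMAINS:
            if difflib.SequenceMatcher(None, link, trusted).ratio() > 0.8:
                flags.append(f"lookalike link domain: {link} vs {trusted}")
    return flags
```

The point of the post is precisely that AI-written emails are crafted to sail past heuristics like these: a genuinely registered lookalike domain, a plausible sender, and fluent context-aware text leave little for string matching to catch.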

And it doesn’t stop there.

A North Korea–linked group, Famous Chollima, reportedly used AI-based voice and video tools to pass job interviews, hiding their true identity while gaining insider access.

Attacks are no longer “spray and pray.”

They’re tailored to you, your role, and your context, making them harder to spot and easier to fall for.

The question is: are your defences ready for that level of sophistication and personalisation?
