AI is making phishing attacks personalised and smarter

Originally published at www.linkedin.com

For example:

Traditional phishing: “Dear Customer, your bank account is locked. Click here to reset.”

Targeted phishing: “Hi Nikhilesh, this is Rahul from Winsaga Edutech’s partner team. Here’s the updated contract you requested yesterday; please review and sign.” (The link actually delivers malware.)

This isn’t random spam.

Hackers are now using AI to:

Scan your digital footprint (LinkedIn posts, company websites, even old resumes)

Craft emails that sound like they were written for you

Bypass the usual “red flag” checks we rely on

And it doesn’t stop there.

A North Korea–linked group, Famous Chollima, reportedly used AI-based voice and video tools to pass job interviews, hiding their true identities while gaining insider access.

Attacks are no longer “spray and pray.”

They’re tailored to you, your role, and your context, making them harder to spot and easier to fall for.

The question is: are your defences ready for that level of sophistication and personalisation?
