Your best developer could be a security risk, and AI is making threats harder to detect.

The Hidden Threat in Your Development Team: How AI is Changing Insider Risk Forever

At Black Hat 2025, I sat down with Lynsey Wolf, a human behavioral scientist at DTEX Systems, to discuss a topic that's reshaping cybersecurity in ways most developers haven't considered: insider risk management. What I learned challenged everything I thought I knew about workplace security and revealed how artificial intelligence is creating both unprecedented threats and innovative solutions.

The Evolution of Insider Threats

Traditional security has always focused on external threats: building higher walls and stronger firewalls. But DTEX takes a fundamentally different approach by recognizing that the most significant risks often come from within. "Every single issue, whether it be a breach, a compromise, whatever, it all comes down to a human inside," Wolf explained. "It all comes down to an insider."

This isn't just about malicious actors. Wolf categorizes insiders into three main groups: highly technical malicious users; somewhat malicious actors who want to cause harm but lack sophistication; and the non-malicious majority, who make dangerous mistakes out of convenience or negligence.

The Remote Worker Challenge

One of the most concerning trends Wolf highlighted is the difficulty in detecting sophisticated threat actors who present as high-performing remote employees. These individuals often appear to be model workers, completing tasks efficiently and maintaining low profiles, making them nearly impossible to identify through traditional security measures.

The challenge intensifies when these actors eventually turn malicious or when their true intentions are discovered. Red flags security teams should monitor include unusual session patterns (such as logins at odd hours that run for extended periods, which can suggest account sharing), accessing personal financial sites on company computers, and using non-approved AI tools during work hours.
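
Wolf didn't share DTEX's detection logic, but the session-pattern red flag is easy to picture in code. Here's a minimal sketch, assuming you have per-user login timestamps and session lengths; the thresholds and data shape are my own illustrative choices, not DTEX's method:

```python
from datetime import datetime

# Hypothetical session log: (user, login time, session length in hours).
SESSIONS = [
    ("dev_a", datetime(2025, 8, 4, 9, 15), 8.0),
    ("dev_a", datetime(2025, 8, 5, 2, 40), 11.5),   # 02:40 login, unusually long session
    ("dev_b", datetime(2025, 8, 4, 10, 5), 7.5),
]

NORMAL_HOURS = range(7, 20)      # 07:00-19:59 treated as normal (assumed policy)
MAX_SESSION_HOURS = 10.0         # assumed cutoff for an "extended" session

def flag_unusual_sessions(sessions):
    """Flag off-hours logins combined with extended session length, the
    pattern described above as a possible sign of account sharing."""
    flagged = []
    for user, start, hours in sessions:
        if start.hour not in NORMAL_HOURS and hours > MAX_SESSION_HOURS:
            flagged.append((user, start.isoformat(), hours))
    return flagged

print(flag_unusual_sessions(SESSIONS))   # [('dev_a', '2025-08-05T02:40:00', 11.5)]
```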

AI: The Double-Edged Sword

Artificial intelligence is revolutionizing insider risk in two critical ways. First, it's making threat actors more sophisticated. "We don't need someone that's super technical anymore," Wolf explained. "They just go ask AI to do the technical work." This democratization of advanced attack capabilities means that previously rare, highly technical threats are becoming more common.

Second, AI is helping organizations detect and respond to these threats more effectively. DTEX's new Risk-Adaptive DLP solution uses behavioral analytics and AI-driven digital fingerprints to classify data based on user behavior and file attributes, not just content analysis.
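
DTEX didn't walk me through the internals, but the general idea of letting file attributes and user behavior, rather than content alone, drive a decision can be sketched roughly like this. Every rule, field name, and threshold below is a hypothetical illustration, not the actual product logic:

```python
# Toy attribute-plus-behavior decision: infer a file's sensitivity from its
# attributes, then let the user's current risk score decide how a transfer
# of that file is handled. All rules and thresholds are assumptions.
SENSITIVE_EXTENSIONS = {".py", ".java", ".sql", ".pem", ".key"}

def classify_file(filename: str, owning_team: str) -> str:
    ext = filename[filename.rfind("."):].lower() if "." in filename else ""
    if ext in SENSITIVE_EXTENSIONS or owning_team in {"engineering", "finance"}:
        return "sensitive"
    return "general"

def transfer_decision(file_class: str, user_risk_score: float) -> str:
    if file_class == "sensitive" and user_risk_score >= 0.7:
        return "block_and_alert"
    if file_class == "sensitive":
        return "log_and_fingerprint"
    return "allow"

label = classify_file("billing_export.sql", "finance")
print(label, transfer_decision(label, 0.82))   # sensitive block_and_alert
```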

What Developers Need to Know

For development teams, several behavioral indicators should trigger security alerts (a rough scoring sketch follows the list):

Job hunting patterns: While everyone looks for jobs at some point, active interviewing combined with privileged access creates a risk profile worth monitoring.

Personal stress factors: Divorce, financial problems, or family medical issues can increase vulnerability to social engineering or create motivation for malicious behavior.

Research patterns: A developer researching "how to archive SharePoint data" immediately after being placed on a performance improvement plan represents a clear escalation pathway.

AI tool usage: Using non-approved AI tools, especially for code development, creates both intellectual property and security risks.
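
None of these signals is damning on its own, which is why risk tools weight and combine them rather than alerting on any single behavior. Here's a toy version of that idea; the indicator weights and alert threshold are made-up numbers for illustration, not a published scoring model:

```python
# Illustrative weights for the indicators above; the values and the alert
# threshold are assumptions, not DTEX's model.
INDICATOR_WEIGHTS = {
    "job_hunting_with_privileged_access": 0.30,
    "personal_stress_factors": 0.20,
    "exfiltration_research_after_pip": 0.40,   # e.g. "how to archive SharePoint data"
    "unapproved_ai_tool_usage": 0.25,
}
ALERT_THRESHOLD = 0.5

def risk_score(observed: set) -> float:
    """Sum the weights of the observed indicators, capped at 1.0."""
    return min(sum(INDICATOR_WEIGHTS.get(i, 0.0) for i in observed), 1.0)

observed = {"job_hunting_with_privileged_access", "unapproved_ai_tool_usage"}
score = risk_score(observed)
print(round(score, 2), "alert" if score >= ALERT_THRESHOLD else "monitor")   # 0.55 alert
```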

The Technology Behind the Solution

DTEX's approach represents a significant shift toward proactive, behavior-based security. Their Risk-Adaptive Framework continuously learns from workforce behavior and adjusts policies automatically in real time. The system can identify sensitive data across all file formats (videos, source code, and images), not just text documents.

Most importantly, the technology focuses on user intent, not just actions. By analyzing behavioral patterns that precede incidents, organizations can intervene before data exfiltration occurs rather than responding after the damage is done.
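
Neither Wolf nor DTEX described how intent is inferred, but one simple way to think about "intervening before exfiltration" is tracking whether a user's events follow an escalation sequence and acting one step before the damaging one. The event names and sequence below are assumptions for illustration:

```python
# Toy escalation tracker: events are checked against an assumed sequence that
# often precedes exfiltration; intervention happens before the final step.
ESCALATION_SEQUENCE = ["researched_archiving", "bulk_repo_download", "personal_cloud_upload"]

def escalation_stage(events):
    """Return how many steps of the assumed sequence appear, in order, in a
    user's chronological event stream."""
    stage = 0
    for event in events:
        if stage < len(ESCALATION_SEQUENCE) and event == ESCALATION_SEQUENCE[stage]:
            stage += 1
    return stage

events = ["normal_commit", "researched_archiving", "bulk_repo_download"]
stage = escalation_stage(events)
if stage == len(ESCALATION_SEQUENCE) - 1:
    print("intervene: next expected step is", ESCALATION_SEQUENCE[stage])
# intervene: next expected step is personal_cloud_upload
```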

Implementation Challenges and Solutions

Rolling out insider risk management requires coordination across multiple departments: HR, legal, physical security, and cybersecurity teams. "There's no single team that can address it," Wolf emphasized. This cross-functional approach ensures investigations are legally compliant, technically sound, and aligned with business objectives.

The key is balancing robust protection with employee privacy through proportionate interventions based on risk levels rather than blanket surveillance.

Looking Forward

As remote work continues and AI capabilities expand, the insider threat landscape will only become more complex. Organizations that adopt proactive, behavior-based security measures now will be better positioned to protect their intellectual property and maintain operational security.

For developers, this means understanding that your coding practices, tool choices, and even personal circumstances contribute to your organization's overall security posture. The future of cybersecurity isn't just about writing secure code—it's about working in ways that support comprehensive risk management.

The message is clear: in an era where AI makes everyone more capable, security teams need to become more human-centric than ever before. The technology exists to make this transition smoother for developers while keeping organizations secure. The question is whether companies will adopt these solutions before they discover their insider risks the hard way.

Reader Q&A

Great article, thanks for sharing. Do you think companies can balance AI-driven insider monitoring with maintaining employee trust?

Great question! This is definitely one of the biggest challenges organizations face when implementing insider risk management. From my conversation with DTEX, I learned that the key is being transparent and proportionate.

The most successful approaches focus on behavioral patterns rather than invasive surveillance. For example, instead of reading every email, AI systems look for anomalies like unusual login times or access to systems outside normal work patterns. It's about detecting intent and risk indicators, not micromanaging daily activities.

Transparency is crucial. Employees need to understand what's being monitored and why. When people know the system is designed to protect both the company and their colleagues from real threats (like the sophisticated actors we discussed), they're generally more accepting.

The technology also enables more targeted, privacy-respectful interventions. Rather than blanket restrictions, AI can identify specific risk scenarios and respond proportionally; someone showing negligent behavior might receive additional training instead of immediate disciplinary action.

I think the companies that succeed will be those that frame this as "workforce protection" rather than "employee surveillance." When implemented thoughtfully with clear policies and employee education, it can actually increase trust by showing the organization is serious about protecting everyone's work and careers from both external and internal threats.
