The Hidden Threat in Your Development Team: How AI is Changing Insider Risk Forever
At Black Hat 2025, I sat down with Lynsey Wolf, a human behavioral scientist at DTEX Systems, to discuss a topic that's reshaping cybersecurity in ways most developers haven't considered: insider risk management. What I learned challenged everything I thought I knew about workplace security and revealed how artificial intelligence is creating both unprecedented threats and innovative solutions.
The Evolution of Insider Threats
Traditional security has always focused on external threats: building higher walls and stronger firewalls. But DTEX takes a fundamentally different approach by recognizing that the most significant risks often come from within. "Every single issue, whether it be a breach, a compromise, whatever, it all comes down to a human inside," Wolf explained. "It all comes down to an insider."
This isn't just about malicious actors. Wolf categorizes insiders into three main groups: the highly technical malicious users, the somewhat malicious (bad actors who want to cause harm but lack sophistication), and the non-malicious majority who make dangerous mistakes out of convenience or negligence.
The Remote Worker Challenge
One of the most concerning trends Wolf highlighted is the difficulty in detecting sophisticated threat actors who present as high-performing remote employees. These individuals often appear to be model workers, completing tasks efficiently and maintaining low profiles, making them nearly impossible to identify through traditional security measures.
The challenge intensifies when these actors eventually turn malicious or when their true intentions are discovered. Red flags that security teams should monitor include unusual session patterns (such as logging in at unusual hours for extended periods, suggesting account sharing), accessing personal financial sites on company computers, and using non-approved AI tools during work hours.
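The session red flag Wolf describes, logins at unusual hours that run for extended periods, lends itself to a simple rule-based check. The sketch below is illustrative only: the working hours, duration threshold, and session records are assumptions, not DTEX's detection logic.

```python
# Hypothetical session records: (user, login_hour_utc, duration_hours).
sessions = [
    ("alice", 9, 8),
    ("alice", 10, 7),
    ("alice", 2, 14),  # off-hours login plus a very long session
]

def flag_unusual_sessions(sessions, work_start=7, work_end=19, max_hours=10):
    """Flag sessions that begin outside normal hours AND run unusually long,
    a pattern that can suggest account sharing."""
    flags = []
    for user, hour, duration in sessions:
        off_hours = not (work_start <= hour < work_end)
        too_long = duration > max_hours
        if off_hours and too_long:
            flags.append((user, hour, duration))
    return flags

print(flag_unusual_sessions(sessions))  # [('alice', 2, 14)]
```

In practice a baseline learned per user would replace the fixed thresholds, since "unusual hours" differ for every remote worker.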
AI: The Double-Edged Sword
Artificial intelligence is revolutionizing insider risk in two critical ways. First, it's making threat actors more sophisticated. "We don't need someone that's super technical anymore," Wolf explained. "They just go ask AI to do the technical work." This democratization of advanced attack capabilities means that previously rare, highly technical threats are becoming more common.
Second, AI is helping organizations detect and respond to these threats more effectively. DTEX's new Risk-Adaptive DLP solution uses behavioral analytics and AI-driven digital fingerprints to classify data based on user behavior and file attributes, not just content analysis.
What Developers Need to Know
For development teams, several behavioral indicators should trigger security alerts:
Job hunting patterns: While everyone looks for jobs, actively interviewing combined with privileged access creates risk profiles worth monitoring.
Personal stress factors: Divorce, financial problems, or family medical issues can increase vulnerability to social engineering or create motivation for malicious behavior.
Research patterns: A developer researching "how to archive SharePoint data" immediately after being placed on a performance improvement plan represents a clear escalation pathway.
AI tool usage: Using non-approved AI tools, especially for code development, creates both intellectual property and security risks.
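No single indicator above is alarming on its own; risk emerges from combinations, such as job hunting plus privileged access. A minimal weighted-scoring sketch shows the idea. The indicator names, weights, and threshold are invented for illustration and are not DTEX's model.

```python
# Hypothetical weights for the behavioral indicators discussed above.
INDICATOR_WEIGHTS = {
    "active_job_hunting": 2,
    "privileged_access": 3,
    "personal_stress_event": 2,
    "exfiltration_research": 5,
    "unapproved_ai_tool": 3,
}

def risk_score(observed):
    """Sum weights for whichever indicators were observed for a user."""
    return sum(INDICATOR_WEIGHTS.get(i, 0) for i in observed)

def triage(observed, alert_threshold=7):
    """Escalate only when combined indicators cross the threshold."""
    return "escalate" if risk_score(observed) >= alert_threshold else "monitor"

print(triage({"active_job_hunting", "privileged_access"}))     # monitor
print(triage({"exfiltration_research", "privileged_access"}))  # escalate
```

The design point is proportionality: one stressed employee stays in "monitor," while a cluster of indicators triggers human review.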
The Technology Behind the Solution
DTEX's approach represents a significant shift toward proactive, behavior-based security. Their Risk-Adaptive Framework continuously learns from workforce behavior, automatically adjusting policies in real-time. The system can identify sensitive data across all file formats—videos, source code, and images—not just text documents.
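Classifying data by attributes and handling behavior rather than content scanning can be sketched in a few lines. The rules, labels, and signals here are assumptions for illustration, not DTEX's actual classifier.

```python
def classify(file_ext, created_by_role, copied_to_removable):
    """Assign a data class from file attributes, then raise sensitivity
    based on how the file is being handled (a behavioral signal)."""
    if file_ext in {".c", ".py", ".go"} and created_by_role == "engineer":
        data_class = "source_code"
    elif file_ext in {".mp4", ".png"}:
        data_class = "media"
    else:
        data_class = "document"
    sensitivity = "high" if copied_to_removable else "normal"
    return data_class, sensitivity

print(classify(".py", "engineer", True))   # ('source_code', 'high')
print(classify(".mp4", "analyst", False))  # ('media', 'normal')
```

The advantage over content-only analysis is that videos, images, and source files get classified even when keyword scanning would find nothing.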
Most importantly, the technology focuses on user intent, not just actions. By analyzing behavioral patterns that precede incidents, organizations can intervene before data exfiltration occurs rather than responding after the damage is done.
Implementation Challenges and Solutions
Rolling out insider risk management requires coordination across multiple departments: HR, legal, physical security, and cybersecurity teams. "There's no single team that can address it," Wolf emphasized. This cross-functional approach ensures investigations are legally compliant, technically sound, and aligned with business objectives.
The key is balancing robust protection with employee privacy through proportionate interventions based on risk levels rather than blanket surveillance.
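Proportionate intervention can be expressed as a simple mapping from risk level to response, so low-risk users are never subjected to blanket surveillance. The tiers and cutoffs below are illustrative assumptions.

```python
def adaptive_policy(risk_score):
    """Map a behavioral risk score to a proportionate response tier,
    escalating monitoring only as risk grows (illustrative cutoffs)."""
    if risk_score >= 8:
        return "block_and_investigate"
    if risk_score >= 5:
        return "require_justification"
    if risk_score >= 3:
        return "log_and_monitor"
    return "allow"

print(adaptive_policy(1))  # allow
print(adaptive_policy(8))  # block_and_investigate
```

Tying intervention to a score, rather than watching everyone equally, is what lets a program stay both protective and privacy-respecting.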
Looking Forward
As remote work continues and AI capabilities expand, the insider threat landscape will only become more complex. Organizations that adopt proactive, behavior-based security measures now will be better positioned to protect their intellectual property and maintain operational security.
For developers, this means understanding that your coding practices, tool choices, and even personal circumstances contribute to your organization's overall security posture. The future of cybersecurity isn't just about writing secure code—it's about working in ways that support comprehensive risk management.
The message is clear: in an era where AI makes everyone more capable, security teams need to become more human-centric than ever before. The technology exists to make this transition smoother for developers while keeping organizations secure. The question is whether companies will adopt these solutions before they discover their insider risks the hard way.