The security model that enterprises spent the last decade building was designed for humans. Credentials belong to people. Sessions are initiated by users. Logs tell you who did what, when, and from where.
AI agents don't fit that model. And the gap between the security infrastructure enterprises have and the security infrastructure they need for agentic AI is wider than most organizations have started to address.
Jimmy White, VP of AI at F5 — who came to F5 through the CalypsoAI acquisition, where he served as CTO and President — spent an hour at F5 AppWorld 2026 in Las Vegas laying out exactly what that gap looks like from the inside. His perspective carries weight: CalypsoAI was purpose-built for AI security before the enterprise market knew it needed it. F5 acquired the company because it had already figured out problems that most organizations are only beginning to encounter.
The Identity Problem
The most immediate structural problem with agentic AI is identity. Most AI agents running in production today don't have their own identities. They operate on borrowed credentials — the login, the API key, the OAuth token of the human who deployed them.
From your SOC's perspective, this is invisible. The logs show a human working. The access patterns look like a person. The activity is attributed to the user whose credentials the agent is using.
White described the situation plainly: "The most common form of identity for agents right now is borrowed identity. To your SOC, it looks like Jimmy is doing these things, not an agent."
The practical implication is significant. A financial services company with 500 developers, each with access to 12 AI coding agents, has 6,000 entities operating in its environment — attributed in the logs to 500 people. If the organization had an identity and access management problem before, it now has a problem that's an order of magnitude larger, and none of the tooling built to solve the original problem can see the new one.
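To make the gap concrete, here is a minimal sketch, in Python, of the difference between an agent acting on a borrowed human credential and an agent that authenticates as itself and records who it is acting for. The endpoint, header names, and token handling are illustrative assumptions, not F5's or any vendor's actual API.

```python
# Illustrative sketch only: the endpoint, headers, and token handling are
# hypothetical, not an F5 or vendor API. The point is the attribution difference.
import requests

INTERNAL_API = "https://internal.example.com/api/records"  # placeholder endpoint

def call_with_borrowed_identity(human_token: str, payload: dict) -> requests.Response:
    # Borrowed identity: to the SOC, this request is indistinguishable from
    # the human whose token the agent is using.
    return requests.post(
        INTERNAL_API,
        headers={"Authorization": f"Bearer {human_token}"},
        json=payload,
        timeout=10,
    )

def call_with_agent_identity(agent_token: str, agent_id: str,
                             delegating_user: str, payload: dict) -> requests.Response:
    # Distinct identity: the agent authenticates as itself and carries an
    # explicit record of who it is acting on behalf of, so logs can attribute
    # the action to the agent *and* its human sponsor.
    return requests.post(
        INTERNAL_API,
        headers={
            "Authorization": f"Bearer {agent_token}",  # the agent's own credential
            "X-Agent-Id": agent_id,                    # hypothetical attribution header
            "X-On-Behalf-Of": delegating_user,         # delegation chain
        },
        json=payload,
        timeout=10,
    )
```

The second pattern is what would let a SOC tell 6,000 agents apart from the 500 people whose work they are doing.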
Agentic AI is being designed and deployed before the identity problem is solved. As White put it: "We're building the plane while it's in the air."
Thought Injection: A Different Kind of Intervention
When an AI agent goes off course — pursuing a goal in a way that creates risk even if the stated objective is legitimate — the instinct is to block it. Stop the action. Halt the process.
That instinct is wrong, and White explained why with a precision that matters for anyone building systems that rely on agents.
Blocking an agent mid-task is roughly equivalent to a stroke. The agent's state is corrupted. The process is interrupted in a way that can cause downstream failures that are worse than the original problem. An agent asked to clean up a database and blocked mid-execution may leave the database in an inconsistent state. The intervention causes more damage than the behavior it was stopping.
The right intervention is a nudge, not a block. F5 calls this thought injection. When an agent starts down a path that violates parameters, the system injects a corrective signal that redirects the agent toward acceptable behavior without stopping the execution.
The distinction White emphasized is between the what and the how. "It's not the what, it's the how." An agent asked to remove all records associated with a customer might interpret that as deleting every database record beginning with the first letter of the company name. The goal is legitimate. The execution path is catastrophic. Thought injection addresses the path without halting the goal.
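A rough sketch of the pattern, assuming a simple propose-and-act loop, is below. The Agent interface, the policy check, and the corrective wording are hypothetical stand-ins, not F5's thought-injection implementation.

```python
# A minimal sketch of "nudge, don't block", assuming a simple propose/act loop.
# The Agent interface, policy check, and corrective text are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Agent:
    goal: str
    context: list[str] = field(default_factory=list)

    def propose_next_action(self) -> str:
        # Stand-in for an LLM planning call that reads self.goal and self.context.
        if any("Correction:" in note for note in self.context):
            return "DELETE FROM customers WHERE customer_id = 'ACME-123'"
        return "DELETE FROM customers"  # over-broad interpretation of the goal


def violates_policy(action: str) -> bool:
    # Example guardrail: flag unscoped bulk deletes.
    return action.startswith("DELETE") and "WHERE" not in action


def execute(action: str) -> None:
    print(f"executing: {action}")


def run_with_thought_injection(agent: Agent, max_steps: int = 10) -> None:
    for _ in range(max_steps):
        action = agent.propose_next_action()
        if violates_policy(action):
            # Don't halt mid-task (the "stroke"). Inject a corrective signal
            # into the agent's working context so it re-plans the *how*
            # while keeping the original *what*.
            agent.context.append(
                "Correction: only remove records explicitly linked to the "
                "named customer; do not run unscoped bulk deletes."
            )
            continue
        execute(action)
        return


run_with_thought_injection(Agent(goal="Remove all records associated with customer ACME-123"))
```

The corrective signal changes the how (scope the delete to the named customer) while leaving the what (remove the customer's records) intact.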
Agentic Fingerprints
Understanding what went wrong after the fact requires visibility into what the agent was doing at every step — not just the input and the output, but the reasoning in between.
F5's agentic fingerprints capability tracks every decision, thought, and action an AI agent takes during execution. When an agent produces an unexpected outcome, the fingerprint gives you the full audit trail: what it considered, where it went, what it tried, where it turned around.
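What such a record might hold can be illustrated with a minimal sketch; the field names and structure below are assumptions for illustration, not F5's actual fingerprint schema.

```python
# Hypothetical per-step trace record; the fields are illustrative assumptions,
# not F5's actual agentic-fingerprint schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class FingerprintStep:
    agent_id: str
    step: int
    thought: str       # the reasoning the agent recorded before acting
    action: str        # what it actually did: tool call, query, prompt
    observation: str   # what came back, including dead ends and U-turns
    timestamp: str


trace: list[FingerprintStep] = []

def record_step(agent_id: str, step: int, thought: str,
                action: str, observation: str) -> None:
    trace.append(FingerprintStep(
        agent_id=agent_id,
        step=step,
        thought=thought,
        action=action,
        observation=observation,
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))

def export_trace() -> str:
    # The exported trace is the audit trail: every path explored, in order.
    return json.dumps([asdict(s) for s in trace], indent=2)
```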
White demonstrated this at AppWorld using the AI Red Team product. Watching the red team agent work through an attack against an HR application, you could see every prompt it tried, every path it explored, and every U-turn it made when a path didn't work — until it found the combination that extracted salary data. The fingerprint documented all of it.
For security teams, that audit trail is essential for understanding exposure. For engineering teams, it's essential for debugging agent behavior. For compliance teams, it's essential for demonstrating that AI systems operated within defined parameters.
Currently, agentic fingerprints apply to agents that organizations build and deploy themselves. Third-party tools and external agents are not yet covered.
The Adoption Velocity Problem
Generative AI took roughly two years to move from available to broadly adopted in enterprise environments. AI agents have covered the same distance in about six months.
White attributed the acceleration to AI literacy. Organizations that went through the genAI adoption cycle learned how to evaluate, procure, and deploy AI tools. That learning transferred. When agents became available, the organizational muscle was already there.
The security implications of that acceleration are significant. The slower genAI adoption curve gave security teams time to develop frameworks, evaluate risks, and build controls — imperfectly, but with some lead time. The agent adoption curve didn't give them that time. Agents are kinetic. They're always-on. They take actions, not just generate outputs. And they arrived in production faster than the security programs designed to govern them.
Forrester VP and Principal Analyst Jeff Pollard put the threat taxonomy on the table at AppWorld in concrete terms: goal and intent hijacking, cognitive and memory corruption, unrestrained agency and privilege escalation, resource exhaustion, and evasion and deception. These aren't theoretical attack categories. They're the mechanics of how agentic systems fail when adversaries get involved — and 24% of US enterprises already have agents in production, according to Forrester research.
What Happens When an Agent Learns From Bad Data
Agentic AI isn't just vulnerable to external attacks. It's vulnerable to the quality of its own training data and the integrity of its memory.
If an agent's knowledge base is poisoned — through data injection, through manipulation of the documents it reads, through corrupted retrieval results — it starts making decisions based on false premises. Unlike a human analyst who might notice something seems off, the agent executes confidently on whatever it has.
The corrective for this is continuous monitoring of the data the agent consumes, not just the actions it takes. That's a different security model than most organizations are used to. Perimeter security doesn't help when the threat is inside the data pipeline.
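A minimal sketch of that model, assuming a retrieval pipeline that checks provenance and content before anything enters the agent's context, might look like the following; the checks are deliberately naive placeholders, not a complete defense against poisoning.

```python
# A rough sketch of monitoring what the agent consumes, not just what it does.
# The markers and provenance rule are illustrative placeholders only.
SUSPICIOUS_MARKERS = (
    "ignore previous instructions",
    "disregard your guidelines",
)

def screen_retrieved_chunk(chunk: str, source: str, trusted_sources: set[str]) -> bool:
    """Return True if the chunk may enter the agent's context."""
    if source not in trusted_sources:
        return False  # provenance check: unknown source, keep it out
    lowered = chunk.lower()
    if any(marker in lowered for marker in SUSPICIOUS_MARKERS):
        return False  # naive injection check on the content itself
    return True

def build_context(retrieved: list[tuple[str, str]], trusted_sources: set[str]) -> list[str]:
    # Only screened (chunk, source) pairs become part of the agent's working memory.
    return [chunk for chunk, source in retrieved
            if screen_retrieved_chunk(chunk, source, trusted_sources)]
```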
Getting From POC to Production
White identified two blockers that consistently stop agentic AI projects from moving from proof of concept to production.
The first is ROI definition. Business sponsors can't articulate what success looks like in financial terms. Without a clear ROI argument, projects can't clear the capital allocation hurdle. This isn't a technical problem — it's a business problem that technical teams often aren't equipped to solve on their own.
The second is safety uncertainty. Organizations can't answer the question of what happens when an agent does something it shouldn't. Brand reputation damage — a customer-facing agent saying something harmful or exposing sensitive information — is now explicitly inside the security perimeter. The risk is real, the cost is real, and the mitigation strategy has to be real before a project goes live.
These two blockers compound each other. It's hard to build an ROI case for a capability you can't safely deploy.
The Defense Posture
F5's approach to agentic AI security combines several layers. Guardrails at the application layer prevent agents from operating outside defined parameters. Thought injection corrects agents that drift without stopping execution. Agentic fingerprints provide the audit trail needed for debugging, compliance, and incident investigation. AI Red Team continuously probes for vulnerabilities before adversaries find them. And AI Remediate closes the loop between discovery and production protection.
None of these layers is sufficient on its own. The threat surface for agentic AI — identity, data integrity, behavioral drift, external attack — requires controls at multiple points in the stack.
For developer and security teams building systems that rely on agents, the practical starting point is visibility. You can't govern what you can't see. Understanding what agents are doing, on whose behalf, with what data, and with what authority is the foundation that everything else builds on. Most organizations haven't established that foundation yet. The ones that do will be significantly better positioned when the incident they've been assuming won't happen, does.