F5 AI Remediate Closes the Gap Between Finding AI Vulnerabilities and Fixing Them

BackerLeader · 4 min read

Every security team knows the frustration. A vulnerability gets flagged. Someone has to understand it well enough to write a protection. That process can take hours if the issue is simple, days if it's complex, and longer if the right human expertise isn't available. Meanwhile, the exposure sits open.

F5 announced AI Remediate today at AppWorld 2026 in Las Vegas, targeting exactly that gap. It's a new addition to the F5 Application Delivery and Security Platform that closes the loop between F5 AI Red Team — which identifies vulnerabilities in AI models — and F5 AI Guardrails, which enforces runtime protections. AI Remediate automates the creation, testing, and validation of guardrail packages, then hands the final decision to a human before anything goes to production.

Jimmy White, VP of AI at F5 and former CTO and President of CalypsoAI — the AI security company F5 acquired — explained the thinking behind it during an interview at AppWorld.


The Problem It's Solving

For developers and security engineers, mean time to remediate is a real metric with real consequences. If you're doing something legitimate but the security team doesn't understand your tooling, you can be blocked, unintentionally, for a long time. AI Remediate is designed to compress that window.

White framed it in terms most engineers will recognize immediately:

"If the human doesn't understand the nuance of why this is bad or how to block it, it could be days. This takes out the nuance required in human knowledge and turns it into a computer program problem."

For simple cases — block salary information from appearing in model outputs, for example — a human can write and publish a guardrail in under 60 seconds. But most real-world use cases aren't that clean. Permutations, regional regulations, industry-specific restrictions, and edge cases pile up fast. That's where the tool earns its place.
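
For a sense of what the simple case looks like, here is a minimal sketch of a salary-blocking output guardrail. The pattern and function names are invented for illustration; F5 hasn't published its rule syntax at this level of detail.

```python
import re

# Minimal sketch of a simple output guardrail: block model responses that
# appear to disclose salary figures. The pattern is invented for
# illustration and is not F5's actual rule format.
SALARY_PATTERN = re.compile(
    r"\b(salary|compensation|base pay)\b.{0,40}?\$?\d[\d,]*",
    re.IGNORECASE | re.DOTALL,
)

def output_is_safe(model_output: str) -> bool:
    """Return True if the output can be released to the caller."""
    return SALARY_PATTERN.search(model_output) is None
```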


How It Works

AI Remediate runs a team of agents that iterates through potential guardrail solutions until it finds one that meets a 90% efficacy threshold. It doesn't just pick the first option — it keeps generating and testing until it hits that minimum bar, then presents the result to a human for approval before it goes live.
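
F5 hasn't published the internals of that loop, but the behavior described maps onto a familiar generate-and-test pattern. A minimal sketch, with the agents and the validation harness abstracted behind caller-supplied functions:

```python
from typing import Callable, Optional, Tuple

EFFICACY_THRESHOLD = 0.90  # the minimum bar described above

def remediate(
    generate: Callable[[], str],      # stands in for the candidate-generating agents
    measure: Callable[[str], float],  # stands in for the validation harness
    max_iterations: int = 50,
) -> Tuple[Optional[str], float]:
    """Iterate candidate guardrails, keeping the best, until one clears the bar.

    Returns the best candidate and its score. Nothing is deployed here:
    per the article, the validated result goes to a human for approval.
    """
    best, best_score = None, 0.0
    for _ in range(max_iterations):
        candidate = generate()
        score = measure(candidate)
        if score > best_score:
            best, best_score = candidate, score
        if best_score >= EFFICACY_THRESHOLD:
            break
    return best, best_score
```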

The 90% threshold is a deliberate design decision, not a limitation. White explained the reasoning:

"We could make it fully automated, but we chose not to. Make sure the 90% case is secured — and make sure the good traffic in that 10% isn't blocked unnecessarily."

Push efficacy too high and you start blocking legitimate traffic. The Black Friday analogy is useful here: some e-commerce platforms intentionally ease fraud detection on peak days because stopping all risk also stops revenue. The same tradeoff applies to AI security. A guardrail that catches everything isn't the goal. A guardrail that catches the right things — while keeping legitimate use cases running — is.

White also noted that combinatory rules can push accuracy well above 90% for specific scenarios. If two terms are each acceptable on their own but problematic together, combining them into a single rule gets you to 98% or higher. The system handles that logic automatically.
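
That combinatory logic is easy to picture in code. A sketch with invented terms, where each term is acceptable alone but the pair is blocked:

```python
# Sketch of a combinatory rule: each term is fine on its own, the
# combination is not. Both terms are invented placeholders.
def combinatory_rule(
    text: str,
    term_a: str = "compound-x",
    term_b: str = "synthesis route",
) -> bool:
    """Return True if the output should be blocked."""
    lowered = text.lower()
    return term_a in lowered and term_b in lowered
```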


Speed in Practice

White demonstrated the tool at AppWorld earlier today. For a clear-cut vulnerability — a model leaking salary data, for example — the path from identification to a validated guardrail in production took under 60 seconds with a human in the loop.

For complex cases, the benchmark is under 60 minutes. Without AI Remediate, that same complex case could take days.

The difference comes down to whether a human has to understand the full nuance of the problem to write a fix. For most non-trivial vulnerabilities, they do — and most teams don't have someone with exactly that background available on demand. AI Remediate turns what was a knowledge problem into a compute problem.


A Pharma Example That Makes the Case

White walked through a scenario that illustrates why custom guardrails matter. A pharmaceutical company was using AI to help generate new chemical compounds. Certain compounds are legal in the US but banned in Europe. The AI model needed guardrails that could distinguish between regions and block the appropriate outputs accordingly.

Writing that guardrail manually requires someone with a chemistry background who also understands EU regulatory specifics. Finding that person, getting them available, and having them produce and test a working guardrail takes time most teams don't have.

AI Remediate handles it. The machine doesn't need to understand why a compound is banned; it just needs the rules. Apply them, iterate, validate, and present the result to a human for approval.
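
A region-conditioned rule of the kind this scenario calls for might look like the sketch below. The compound names and the banned-by-region table are invented placeholders:

```python
# Hypothetical region-aware guardrail for the pharma scenario. The
# compounds and the regulatory table are placeholders, not real data.
BANNED_BY_REGION = {
    "EU": {"compound-a", "compound-b"},  # banned in Europe in this example
    "US": set(),                         # legal in the US in this example
}

def compound_allowed(compound: str, region: str) -> bool:
    """The rule needs no chemistry knowledge, only the regulatory table."""
    return compound.lower() not in BANNED_BY_REGION.get(region, set())
```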

"The right human can probably do it faster than the machine, because they can cut through the nuance. But you can't always find the right human." — Jimmy White, VP of AI, F5


The Human Is Still in the Loop — By Design

One of the strongest selling points for security and engineering teams is what AI Remediate doesn't do: it doesn't push changes to production automatically. Every validated guardrail package requires human approval before it goes live.

White was clear that this is intentional, not a roadmap item to be automated away. The goal is accountable AI adoption, not autonomous AI decision-making. The tool handles the permutations. The human makes the call.

That distinction matters for organizations that have governance requirements around changes to production security controls. The audit trail is built in — every generated guardrail, every test result, and every human approval is logged.
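
What one of those log entries might contain, sketched as a record type. F5 hasn't published its audit format, so the fields here are assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Assumed shape of an audit record; F5 hasn't published its actual format.
@dataclass
class GuardrailAuditRecord:
    vulnerability_id: str    # what AI Red Team flagged
    guardrail_package: str   # the generated guardrail under review
    efficacy_score: float    # result of the validation run
    approved_by: str         # the human who signed off
    approved_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```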


What This Means for the Developer Experience

For developers, the practical impact is narrower but worth noting. When security teams can remediate faster, developers spend less time blocked on false positives or waiting for a protection to catch up with a vulnerability that was already identified.

White pointed to mean time to remediate as the metric that ties this directly to developer experience. Faster remediation means exposures close sooner, and fewer unnecessary blocks mean fewer interruptions to legitimate work.

AI Remediate is available as part of the F5 Application Delivery and Security Platform. It requires F5 AI Red Team and F5 AI Guardrails as the foundation — Red Team to surface vulnerabilities, Guardrails to enforce protections, and Remediate to automate the step in between.
