Nice one! Could you also write a post on the different ways security can be breached through user inputs? That would be super helpful.
This post hits on critical practices, but based on my conversations at Black Hat 2025, there's a gap between knowing what to do and actually doing it, and AI-powered development is making that gap more pronounced.
Randall Degges from Snyk put it bluntly: "Developers basically don't think about security at all. Zero." The challenge isn't that developers don't care—it's that security often conflicts with the velocity that modern development demands.
What's changing the game is how AI code generation amplifies both the problem and the solution. Developers are using AI to write code faster than ever, but that code inherits the security patterns (or lack thereof) of its training data. At the same time, Snyk's "Secure at Inception" platform shows how AI can make security effectively invisible to developers by automatically scanning dependencies, analyzing code files, and fixing vulnerabilities as the code is generated.
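To make that concrete, here's the classic example of the kind of pattern an assistant can reproduce from its training data: SQL built by string interpolation. This is a generic illustration (plain sqlite3, hypothetical table and input), not Snyk's tooling:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Insecure: string interpolation builds the query, so the input above
# widens the WHERE clause and matches every row (SQL injection).
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print(rows)  # [(1, 'alice')] -- the injected clause matched everything

# Secure: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- no user is literally named "alice' OR '1'='1"
```

A scanner running as the code is generated can flag the first form before it's ever committed.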
Your point about dependency scanning is especially critical. Every defense industrial base company that Snehal Antani's team at Horizon3.ai compromised was compliant with security frameworks and had conducted annual penetration tests. But as he emphasized, "Just because you're compliant doesn't mean you're secure."
The tools you've listed are solid. The real breakthrough is integrating security into the development workflow rather than adding it afterward. When security becomes friction, developers find ways around it. When it's invisible and automated, it actually happens.
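As a sketch of what "invisible and automated" can look like in practice, here's a minimal pre-commit hook that blocks commits introducing likely secrets. The patterns here are illustrative stand-ins, not any vendor's ruleset:

```python
#!/usr/bin/env python3
"""Minimal pre-commit hook: block commits that add likely secrets."""
import re
import subprocess
import sys

# Illustrative patterns only; a real scanner uses a much larger ruleset.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"]\w{16,}"),
]

def staged_additions() -> list[str]:
    """Return lines added in the staged diff (lines starting with '+')."""
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [l[1:] for l in diff.splitlines()
            if l.startswith("+") and not l.startswith("+++")]

def main() -> int:
    hits = [line for line in staged_additions()
            if any(p.search(line) for p in SECRET_PATTERNS)]
    for line in hits:
        print(f"possible secret in staged change: {line.strip()}",
              file=sys.stderr)
    return 1 if hits else 0  # non-zero exit aborts the commit

if __name__ == "__main__":
    sys.exit(main())
```

Dropped into .git/hooks/pre-commit and marked executable, it runs on every commit with no extra step from the developer; the non-zero exit is what stops the commit.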
One additional consideration: with AI generating more code, we need to shift from reactive security scanning to proactive security by design. The old model of "build first, scan later" doesn't scale when AI can generate thousands of lines of code in minutes.
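One way to picture that shift: instead of scanning finished code for the interpolated-SQL pattern shown earlier, a lightweight AST check can reject it at write or commit time. A toy sketch (the rule and its scope are assumptions, not a real tool's implementation):

```python
import ast
import sys

class SQLInterpolationCheck(ast.NodeVisitor):
    """Toy rule: flag conn.execute(f"... {x} ...") -- a common injection risk."""

    def __init__(self) -> None:
        self.findings: list[int] = []

    def visit_Call(self, node: ast.Call) -> None:
        is_execute = (isinstance(node.func, ast.Attribute)
                      and node.func.attr in {"execute", "executemany"})
        if is_execute and node.args and isinstance(node.args[0], ast.JoinedStr):
            self.findings.append(node.lineno)  # f-string used as the SQL text
        self.generic_visit(node)

# Read the file under review from stdin and report any findings.
checker = SQLInterpolationCheck()
checker.visit(ast.parse(sys.stdin.read()))
for lineno in checker.findings:
    print(f"line {lineno}: f-string passed to execute(); use parameters instead")
sys.exit(1 if checker.findings else 0)
```

The point isn't this particular rule; it's that the check runs before the code exists anywhere a reactive scanner would find it.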
The future belongs to platforms that make secure coding the default path, not the extra step.