Cogensec CTO Releases Open-Source LLM Security Guide

Comprehensive security reference covers the OWASP Top 10 for LLMs 2025 and the new OWASP Top 10 for Agentic Applications framework released at Black Hat Europe

Cogensec Co-Founder and CTO Tarique Smith has released a comprehensive open-source security guide for Large Language Models (LLMs) and agentic AI systems. The LLM Security Guide gives security professionals and developers actionable resources for understanding and mitigating current AI security threats.

The guide arrives at a critical moment for AI security, coinciding with the release of the OWASP Top 10 for Agentic Applications 2026 at Black Hat Europe. That timing makes it one of the first resources to cover both the LLM and agentic security frameworks in depth.

What's Covered

The LLM Security Guide is structured as a reference covering the full spectrum of AI security concerns:

  • OWASP Top 10 for LLMs 2025: Complete coverage of the current list, including prompt injection, improper output handling, data and model poisoning, and supply chain vulnerabilities (the output-handling risk is illustrated in the sketch after this list)
  • OWASP Top 10 for Agentic Applications 2026: The newly released framework addressing agent-specific threats like excessive agency, tool poisoning, and multi-agent coordination vulnerabilities
  • Offensive Security Tools: Curated collection of red team tools for testing LLM and agent security
  • Defensive Security Tools: Production-ready tools for monitoring, guardrails, and runtime protection
  • Vulnerability Databases: References to CVE databases and vulnerability tracking resources specific to AI systems
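
To ground one of these risks, the sketch below shows the core defensive idea behind improper output handling: treat everything a model returns as untrusted input, escaping it before rendering and allowlisting it before execution. This is an illustrative Python sketch, not code from the guide; the ALLOWED_COMMANDS set and the render_untrusted and validate_command helpers are hypothetical names chosen for this example.

    import html
    import re

    # Hypothetical allowlist: only these bare commands may ever run.
    ALLOWED_COMMANDS = {"ls", "pwd", "whoami"}

    def render_untrusted(model_output: str) -> str:
        """Escape LLM output before embedding it in an HTML page so that
        injected markup or script cannot execute in the browser."""
        return html.escape(model_output)

    def validate_command(model_output: str) -> str:
        """Treat a model-proposed shell command as untrusted input and
        accept only a single bare command from the fixed allowlist."""
        command = model_output.strip()
        if not re.fullmatch(r"[a-z]+", command) or command not in ALLOWED_COMMANDS:
            raise ValueError(f"Rejected model-proposed command: {command!r}")
        return command

    if __name__ == "__main__":
        # A prompt-injected response smuggling markup and a destructive command.
        print(render_untrusted('<script>alert("pwned")</script>'))
        try:
            validate_command("rm -rf /")
        except ValueError as err:
            print(err)

The design point is deny-by-default: output is rendered only after escaping and executed only if it matches an explicit allowlist, so a successful prompt injection degrades into a rejected request rather than code execution.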

OWASP Agentic AI Focus

With the recent release of the OWASP Top 10 for Agentic Applications at Black Hat Europe, the guide provides timely coverage of emerging agentic security threats:

  • Excessive Agency: When AI agents are granted more capabilities or permissions than necessary for their intended function (a least-privilege mitigation is sketched after this list)
  • Unexpected Agent Actions: Agents performing actions outside their expected operational boundaries
  • Tool/Function Calling Vulnerabilities: Security gaps in how agents interact with external tools and APIs
  • Multi-Agent Coordination Risks: Vulnerabilities arising from agent-to-agent communication and collaboration
  • Memory and State Manipulation: Attacks targeting agent memory systems to influence behavior
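
As a concrete illustration of the first item, the sketch below shows a deny-by-default tool registry: an agent can invoke only the tools it has been explicitly granted, which bounds its agency by construction. This is a minimal, hypothetical Python sketch rather than code from the guide; ScopedToolRegistry and the agent and tool names are assumptions made for the example.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class ScopedToolRegistry:
        """Per-agent tool registry that enforces explicit grants, so an
        agent cannot call a tool it was never given (least privilege)."""
        grants: dict[str, set[str]] = field(default_factory=dict)  # agent id -> granted tool names
        tools: dict[str, Callable[..., object]] = field(default_factory=dict)

        def register(self, name: str, fn: Callable[..., object]) -> None:
            self.tools[name] = fn

        def grant(self, agent_id: str, tool_name: str) -> None:
            self.grants.setdefault(agent_id, set()).add(tool_name)

        def call(self, agent_id: str, tool_name: str, *args, **kwargs):
            # Deny by default: a missing grant raises instead of silently running.
            if tool_name not in self.grants.get(agent_id, set()):
                raise PermissionError(f"{agent_id} is not granted tool {tool_name!r}")
            return self.tools[tool_name](*args, **kwargs)

    if __name__ == "__main__":
        registry = ScopedToolRegistry()
        registry.register("read_file", lambda path: f"contents of {path}")
        registry.register("delete_file", lambda path: f"deleted {path}")
        registry.grant("support-agent", "read_file")  # read-only scope

        print(registry.call("support-agent", "read_file", "notes.txt"))
        try:
            registry.call("support-agent", "delete_file", "notes.txt")
        except PermissionError as err:
            print(err)

Scoping grants per agent rather than sharing one global toolset also limits the blast radius of the multi-agent risks above: a compromised agent can only misuse the tools it was individually granted.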

Community Contribution

"AI security cannot be solved by any single company. As agents become more autonomous and interconnected, the attack surface grows exponentially. By open-sourcing this guide, we're giving the security community a shared foundation to build upon. The faster we collectively raise our understanding, the safer these systems become for everyone."
— Tarique Smith, Co-Founder & CTO at Cogensec

The guide is hosted on GitHub and welcomes contributions from the security community. Smith encourages researchers, practitioners, and developers to submit pull requests with new tools, techniques, and resources as the AI security landscape evolves.

Why This Matters

Enterprise adoption of LLMs and agentic AI systems continues to accelerate, but security practices often lag behind deployment velocity. The LLM Security Guide addresses this gap by providing:

  • A single reference point for understanding AI-specific security risks
  • Practical tools for both offensive testing and defensive implementation
  • Framework-agnostic guidance applicable across different AI platforms
  • Continuously updated content reflecting the rapidly evolving threat landscape