A curated list of links, references, books, videos, tutorials (Free or Paid), Exploit, CTFs, Hacking Practices, etc., which are related to GenAI, LLM, RAG, MCP, Agents, and Agentic AT security.
- GenAI Security Papers & Standards
- Books
- Videos
- Online Tutorials / Blogs / Presentations
- Online Courses (Paid/Free)
- Tools of Trade
- Security Practices and CTFs
- GenAI Security Breaches & Incidents
- Contributors
Important papers, standards, and checklists from organizations like OWASP, NIST, and others.
- OWASP Top 10 for LLM Application
- OWASP LLM AI Security and Governance Checklist
- OWASP Agentic AI Top 10
- NIST AI RMF Playbook
- NIST AI Risk Management Framework (AI RMF)
- NIST Adversarial Machine Learning
- Microsoft Failure Models in Machine Learning
- Microsoft Threat Modeling AI/ML
- OWASP GenAI Security Project
- AI Value Creators
- AI Engineering by Chip Huyen
- Designing Machine Learning Systems
- Hands-On Large Language Models
- Nexus by Yuval Noah Harari
- The Developer's Playbook for Large Language Model Security: Building Secure AI Applications by Steve Wilson
Articles and guides covering LLM, RAG, and general GenAI security.
- LLM Security
- What are foundational models?
- A quick check on the AI Threat Model
- Security Incident Response using LLM
- OWASP: CheatSheet – A Practical Guide for Securely Using Third-Party MCP Servers 1.0
- Riding the RAG Trail: Access, Permissions and Context
- Securing Risks with RAG Architectures
- Secure your RAG
- Mitigating Security Risks in Retrieval Augmented Generation (RAG)
- RAG: The Essential Guide
- Why RAG is revolutionising GenAI
- Stanford CS-324: Large Language Models
- Princeton COS 597G: Understanding Large Language Models
- Coursera: GenAI with LLM
- Coursera: Generative AI Engineering with LLMs Specialization
- Coursera: Generative AI for Cybersecurity Professionals (IBM)
- Coursera: AI for Cybersecurity (JHU)
- AttackIQ: The foundation of AI Security
Tools for defending, scanning, and auditing GenAI systems.
- LLM Guard - Information extraction and security for LLMs.
- Model Scan - Scanning models for serialization attacks.
- Rebuff - Prompt injection detection.
- NB Defense - Notebook security.
- Protect AI's OSS Portfolio
- LLM Guard Playground
Practice your skills with these vulnerable applications and challenges.
- Gandalf - Lakera AI - LLM security challenge.
- Prompt Airlines - AI security challenges, CTF style.
- Certified AI/ML Pentester Exam
- Damn Vulnerable MCP Server - Deliberately vulnerable MCP implementation.
- Vulnerable MCP Servers Lab - Collection of vulnerable servers.
- FinBot Agentic AI CTF - Agentic Security CTF.
- OWASP WrongSecrets LLM exercise
- Huntr.com - World’s first bug bounty platform for AI/ML.
Coming soon.
