LLM Security & Red Teaming
Bootcamp
We don't have any bootcamps scheduled at the moment.











Large Language Models (LLMs) are transforming industries, but with innovation comes new vulnerabilities and attack surfaces. This intensive 2-day Bootcamp blends foundational knowledge with practical red-teaming techniques, equipping you to test, defend, and secure AI systems using real-world adversarial strategies.
Hands-On Learning
Dive into guided labs simulating LLM attack and defense scenarios with real-world applications.
Top Industry Tools
Work with 15+ frameworks, including Hugging Face, LangChain, Cleverhans, ART, and more cutting-edge tools.
Step-by-Step Playbooks
Structured LLM attack and protection strategies ready for immediate application in your work.
Expert Practitioners
Learn from trainers actively delivering AI & cybersecurity programs globally with years of experience.
Professional Development
Earn 8 CPE credits to enhance your credentials in cybersecurity
Secure Your Spot in the Future of AI Security
Don't miss this opportunity to master LLM security and red teaming techniques from industry experts. Join professionals worldwide in this comprehensive masterclass.
Introduction to AI and LLM Security by Avnish
(7 PM - 11 PM)
- Demystifying the core concepts and components of an AI system
- Types of AI Systems: Machine Learning, Deep Learning, Generative AI, Agentic AI
- Building and deploying AI - Model Development Lifecycle
- Understanding LLMs: Transformer Architecture, Pre-training and Fine Tuning
- LLM Applications: Chatbots, Code Generation, Cybersecurity Use Cases
- AI and GenAI Frameworks: Scikit-learn, Tensorflow, AutoML, Hugging Face, LangChain, Llamaindex, OpenAI API, Ollama, LMStudio
- Security Considerations while Developing and Deploying AI Systems
AI and LLM Red Teaming by Ashish (7 PM - 11
PM)
- Introduction to AI Red Teaming – What is it and why it is needed?
- Attack Families for AI Red Teaming: Poisoning, Injection, Evasion, Extraction, Availability, Supply Chain
- LLM01: Prompt Injection – Direct and Indirect
- LLM02: Sensitive Information Disclosure – Data exfiltration
- LLM03: Supply Chain – Malicious Packages and Models
- LLM04: Data and Model Poisoning – Poisoning datasets and models during training and fine-tuning
- LLM05: Improper Output Handling – Injection via model outputs
- LLM06: Excessive Agency – Agents with dangerous privileges
- LLM07: System Prompt Leakage – Exposing hidden system instructions through crafted queries
- LLM08: Vector and Embedding Weaknesses
- LLM09: Misinformation – Detecting Hallucinations
- LLM10: Unbounded Consumption – Resource abuse and DOS Attacks
- Tools and Frameworks for LLM Red Teaming: Cleverhans, Foolbox, Adversarial Robustness Toolbox
*Note: No access to recorded sessions will be shared for this bootcamp.
Interested in Joining the
Our advisor will contact you with event details, and exclusive offers!