LLM Security & Red Teaming Masterclass
Batch 2

LLM Security & Red Teaming Bootcamp

07-08 April 2026
07:00 PM - 11:00 PM (IST)

8 CPEs
Hands-On
Red Team-Focused
Tool-Driven
Request a Bootcamp
Why Attend?

Large Language Models (LLMs) are transforming industries, but innovation brings new vulnerabilities and attack surfaces. This intensive 2-day Bootcamp blends foundational knowledge with practical red-teaming techniques, equipping you to test, defend, and secure AI systems using real-world adversarial strategies.

Hands-On Learning

Dive into guided labs simulating LLM attack and defense scenarios with real-world applications.

Top Industry Tools

Work with 15+ frameworks, including Hugging Face, LangChain, CleverHans, the Adversarial Robustness Toolbox (ART), and other cutting-edge tools.

Step-by-Step Playbooks

Structured LLM attack and protection strategies ready for immediate application in your work.

Expert Practitioners

Learn from experienced trainers who actively deliver AI and cybersecurity programs worldwide.

Professional Development

Earn 8 CPE credits to strengthen your cybersecurity credentials.

Speakers Lineup
Avnish
Ashish
🔒 Limited Seats Available!

Secure Your Spot in the Future of AI Security

Don't miss this opportunity to master LLM security and red teaming techniques from industry experts. Join professionals worldwide in this comprehensive masterclass.

Bootcamp Agenda
Day 1 | 07 April

Introduction to AI and LLM Security by Avnish (07:00 PM - 11:00 PM IST)

  • Demystifying the core concepts and components of an AI system
  • Types of AI Systems: Machine Learning, Deep Learning, Generative AI, Agentic AI
  • Building and deploying AI - Model Development Lifecycle
  • Understanding LLMs: Transformer Architecture, Pre-training and Fine Tuning
  • LLM Applications: Chatbots, Code Generation, Cybersecurity Use Cases
  • AI and GenAI Frameworks: Scikit-learn, TensorFlow, AutoML, Hugging Face, LangChain, LlamaIndex, OpenAI API, Ollama, LM Studio (a short loading sketch follows this list)
  • Security Considerations while Developing and Deploying AI Systems
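
To give a flavor of the Day 1 frameworks, here is a minimal, illustrative sketch of loading a small model with the Hugging Face transformers pipeline. The model name "gpt2" is just a small, publicly available placeholder, not necessarily what the bootcamp uses.

    # Minimal sketch: load a small LLM with the Hugging Face
    # `transformers` pipeline (pip install transformers torch).
    # "gpt2" is a public placeholder model, used only for illustration.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator("LLM red teaming is", max_new_tokens=30)
    print(result[0]["generated_text"])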
Day 2 | 08 April

AI and LLM Red Teaming by Ashish (07:00 PM - 11:00 PM IST)

  • Introduction to AI Red Teaming – What it is and why it is needed
  • Attack Families for AI Red Teaming: Poisoning, Injection, Evasion, Extraction, Availability, Supply Chain
  • LLM01: Prompt Injection – Direct and Indirect
  • LLM02: Sensitive Information Disclosure – Data exfiltration
  • LLM03: Supply Chain – Malicious Packages and Models
  • LLM04: Data and Model Poisoning – Poisoning datasets and models during training and fine-tuning
  • LLM05: Improper Output Handling – Injection via model outputs
  • LLM06: Excessive Agency – Agents with dangerous privileges
  • LLM07: System Prompt Leakage – Exposing hidden system instructions through crafted queries
  • LLM08: Vector and Embedding Weaknesses
  • LLM09: Misinformation – Detecting Hallucinations
  • LLM10: Unbounded Consumption – Resource abuse and DoS attacks
  • Tools and Frameworks for LLM Red Teaming: CleverHans, Foolbox, Adversarial Robustness Toolbox (ART) – a minimal evasion sketch follows this list
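
To give a taste of the Day 2 tooling, below is a minimal evasion-attack sketch using the Adversarial Robustness Toolbox (ART) against a simple scikit-learn classifier. It illustrates the evasion attack family in general; it is not the bootcamp's lab material.

    # Minimal sketch: an evasion attack with the Adversarial Robustness
    # Toolbox (pip install adversarial-robustness-toolbox scikit-learn).
    # Illustrative only; not the bootcamp's lab code.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from art.estimators.classification import SklearnClassifier
    from art.attacks.evasion import FastGradientMethod

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Wrap the trained model so ART can compute loss gradients against it.
    classifier = SklearnClassifier(model=model)

    # Craft adversarial examples; eps controls the perturbation budget.
    attack = FastGradientMethod(estimator=classifier, eps=0.5)
    X_adv = attack.generate(x=X)

    print("Clean accuracy:      ", model.score(X, y))
    print("Adversarial accuracy:", model.score(X_adv, y))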

*Note: Access to recorded sessions will not be provided for this bootcamp.

Key Takeaways
8 CPE Credits Issued on Completion
Interactive Red Team Labs
Attack & Defense Playbooks
Actionable Security Techniques
Expert Guidance & Mentorship
15+ Cutting-Edge AI Tools

Interested in Joining the Bootcamp?

Please Fill Out the Form

Our advisor will contact you with event details and exclusive offers!