AI in SecOps Bootcamp

Building & Breaking LLMs in Practice
25-26 April 2026
10:00 AM - 02:00 PM (IST)

We don't have any bootcamps scheduled at the moment.

Why Attend?

AI is transforming cybersecurity workflows, from OSINT and GRC drafting to detection engineering and vulnerability analysis. But the same AI systems can be abused, manipulated, and attacked. This hands-on bootcamp teaches you how to use AI effectively in security operations, and how to identify, exploit, and defend against AI weaknesses.

What sets this training apart:
Dual Perspective
Learn both AI productivity and AI exploitation.
Real World Insights
Real prompts, real attacks, real misuse cases.
Hands-On Demonstrations
Real prompts, real attacks, real misuse cases.
Security-Centric Approach
Built specifically for cybersecurity professionals.
Practical Over Theory
Immediate real-world application.
Red & Blue Team Insights
Understand AI from attacker and defender angles.
Meet the Expert

Urvesh

6+ Years of Experience

DFIR, Threat Hunting & Intel | CHFI | eTHP | DCPLA | CTIA | ECIH | CND | CCSE

Urvesh is an experienced cybersecurity professional specializing in threat detection, incident response, and SOC operations. He has deployed and managed SIEM/XDR platforms, built custom detection rules, and conducted advanced threat hunting. Urvesh has trained 300+ professionals globally, helping teams enhance detection, response, and forensic investigation capabilities.

Special Offer!
25-26 April 2026
10:00 AM - 02:00 PM (IST)
Bootcamp Agenda
Day 1

Module 1: Introductions

  • Introduction to AI
  • The AI Bubble
  • LLMs, Prompts, Agents

Module 2: Using AI for Security Tasks

  • Using AI for OSINT
  • Using AI for GRC Drafting
  • Using AI for Vulnerability Scanning
  • Using AI for detection engineering
Day 2

Module 3: Breaking AI

  • Jailbreaking in AI
  • Uncensored LLMs
  • OWASP Top 10 for LLM
    • Sensitive Data Exposure
    • Improper Output Handling
    • Other OWASP LLM vulnerabilities
  • Prompt Injection Attacks
  • Multiturn Attacks
  • LLM Model DOS Attack
  • Training Poisoned Data
  • MITRE ATLAS

*Note: No access to recorded sessions will be shared for this bootcamp.

Key Takeaways
Earn 8 CPE Credits
Apply AI for practical security tasks
Understand LLM architecture, models, and risks
Execute prompt injection and multi-turn attacks
Explore AI jailbreaks and uncensored models
Identify and mitigate OWASP LLM vulnerabilities
Implement defenses for secure AI deployments
Work on Latest Tools like Nmap, Claude, Sigma, Ollama & more

Interested in Joining the

Bootcamp?

Please Fill the Form

Our advisor will contact you with event details, and exclusive offers!