AI in SecOps Bootcamp
We don't have any bootcamps scheduled at the moment.








AI is transforming cybersecurity workflows, from OSINT and GRC drafting to detection engineering and vulnerability analysis. But the same AI systems can also be abused, manipulated, and attacked. This hands-on bootcamp takes a dual-perspective approach to AI in Security Operations. Participants learn to leverage LLMs as force multipliers for defensive security tasks, then pivot to understanding how these systems can be exploited through prompt injection, jailbreaking, and other attack techniques.
Through hands-on labs with real tools, models, and attack scenarios, participants gain practical skills to use, test, and secure AI systems in real-world environments.
Urvesh
6+ Years of ExperienceUrvesh is an experienced cybersecurity professional specializing in threat detection, incident response, and SOC operations. He has deployed and managed SIEM/XDR platforms, built custom detection rules, and conducted advanced threat hunting. Urvesh has trained 300+ professionals globally, helping teams enhance detection, response, and forensic investigation capabilities.
Module 1: The AI Security Landscape
- 1.1 AI Demystified for Security Professionals
- What LLMs actually are: statistical next-token predictors, not reasoning engines
- The AI bubble vs. real capabilities: separating marketing from technical reality
- Key terminology: prompts, tokens, context windows, temperature, agents, RAG
- Why AI/ML security is fundamentally different from traditional software security
- Traditional software: deterministic logic with defined I/O contracts
- AI systems: statistical approximations with undefined generalisation boundaries
- 1.2 The AI Threat Landscape - Attacker’s View
- The ML attack surface map: data poisoning, backdoors, adversarial examples, prompt injection, model extraction, membership inference, supply chain attacks
- Attack taxonomy by attacker knowledge: white-box, grey-box, black-box
- Transfer attacks: craft on a surrogate model, apply to the target
- OWASP Top 10 for LLM Applications - the community standard reference
- MITRE ATLAS - adversarial threat landscape for AI systems
- 1.3 Responsible AI Security Research
- Coordinated disclosure: notify the vendor before public release
- Bug bounty programs: Anthropic, OpenAI, Google, Microsoft, Meta, HuggingFace
- AVID, NIST AI RMF, CVE for AI - reporting frameworks
- LAB 1: Environment Setup & First Contact
- Install Ollama, pull a local model (e.g., Llama 3 8B or Mistral 7B)
- Run your first local LLM inference from the terminal
- Explore the context window: send a system prompt, user prompt, observe token limits
- Map the MITRE ATLAS matrix to a real-world AI deployment scenario (group exercise)
Module 2: AI-Powered OSINT & Threat Intelligence
- 2.1 LLM-Driven OSINT Workflows
- Using LLMs to parse, correlate, and summarise OSINT data from multiple sources
- Prompt engineering for OSINT: extracting IOCs, TTPs, and actor profiles from raw text
- Building a structured threat brief from unstructured intelligence feeds
- Chain-of-thought prompting for multi-step analytical reasoning
- 2.2 Automated Reconnaissance with AI Agents
- Architecting an AI agent pipeline: input → analysis → enrichment → report
- Using LLMs to automate subdomain enumeration result analysis
- AI-assisted vulnerability context: given a CVE, generate an impact assessment
- Limitations and hallucination risks in AI-generated intelligence
- LAB 2: AI-Powered OSINT Pipeline
- Feed the raw threat intelligence report into your local LLM and extract structured IOCs (IPs, hashes, domains, TTPs)
- Build a prompt chain that takes a company name → generates attack surface summary → maps to MITRE ATT&CK
- Compare outputs across different models (Llama vs Mistral) for accuracy and hallucination rates
- Create a reusable OSINT prompt template library for common analyst tasks
Module 3: AI for GRC, Compliance & Policy Automation
- 3.1 AI-Assisted Policy & Compliance Drafting
- Using LLMs to draft security policies (ISO 27001, SOC 2, NIST CSF mappings)
- Prompt engineering for compliance: generating control narratives from framework requirements
- AI-driven gap analysis: feed current policy → compare against framework → identify gaps
- Risk register automation: generate risk descriptions, likelihood, and impact from asset inventories
- 3.2 AI for Audit Evidence & Documentation
- Automating evidence collection summaries for audit readiness
- Using RAG (Retrieval-Augmented Generation) to query compliance documentation
- Building a compliance chatbot that answers auditor questions from your policy corpus
- Critical limitation: AI-generated compliance artifacts require human review - hallucination risk is a legal liability
- LAB 3: GRC Automation Workshop
- Feed a sample security policy into an LLM and generate an ISO 27001 Annex A control mapping
- Build a RAG pipeline: index the compliance document set → query it with auditor-style questions
- Generate a risk register from a provided asset inventory using structured prompt chains
- Draft an incident response policy section and validate it against NIST SP 800-61 requirements
Module 4: AI for Detection Engineering & Incident Response
- 4.1 AI-Powered Detection Engineering
- Using LLMs to generate Sigma rules, YARA rules, and KQL queries from threat descriptions
- Translating natural language attack descriptions into detection logic
- AI-assisted log analysis: feed raw logs, extract suspicious patterns
- Building a detection-as-code pipeline with AI in the loop
- Validating AI-generated detections: false positive analysis and tuning
- 4.2 AI in Incident Response
- AI-assisted alert triage: classify, prioritise, and enrich alerts automatically
- Using LLMs to generate incident timelines from log data
- Automating root cause analysis narratives
- AI-powered playbook generation: given an incident type, generate response steps
- 4.3 Vulnerability Analysis & Scanning with AI
- LLM-assisted vulnerability assessment: FeedScan results, generate prioritised remediation
- Using AI to triage and deduplicate vulnerability scanner output
- Generating executive-friendly vulnerability summaries from technical scan data
- LAB 4: Detection Engineering & IR Workshop
- Generate Sigma rules from a natural language threat description (e.g., “detect lateral movement via PsExec”)
- Feed a sample Apache/Windows log set into an LLM and extract IOCs + suspicious activity timeline
- Build an AI-powered alert triage prompt that classifies alerts as true positive, false positive, or needs investigation
- Take a sample Nessus/OpenVAS scan output → generate a prioritised remediation plan with AI
- Create an automated incident report generator: feed timeline + IOCs → get executive summary + technical details
Module 5: LLM Architecture & The Attacker’s Mental Model
- 5.1 How LLMs Work - Through an Attacker’s Lens
- Transformer architecture: tokenisation, attention mechanism, context window
- Why tokenisation matters: sub-word tokens are the key to prompt injection bypass
- System prompt mechanics: privileged instructions at the start of the context window
- Temperature & sampling: how stochasticity affects attack reliability
- RLHF / safety training: post-training alignment-what jail breaks try to bypass
- Context window as attack surface: fill it to push out instructions
- 5.2 Anatomy of an LLM Prompt - Trust Boundaries
- [SYSTEMPROMPT]←Developer instructions (HIGH trust)
- [CONVERSATION HISTORY] ← Prior context
- [USER INPUT] ← Attacker-controlled (LOW trust)
- The fundamental problem: no real separation between instructions and data
- 5.3 Uncensored & Open-Weight Models
- What “uncensored” models actually are: removed RLHF safety layer
- Running uncensored models locally with Ollama (e.g., Dolphin, WizardLM-uncensored)
- Defensive implications: anyone can deploy a model with no safety guardrails
- Fine-tuning can strip safety alignment in as few as 100 examples
- LAB 5: LLM Internals Exploration
- Explore tokenisation: use Tiktoken/SentencePiece to see how prompts are split into tokens
- Demonstrate context window overflow: craft a prompt that pushes system instructions out of context
- Compare outputs from a safety-aligned model vs. an uncensored model on the same prompt
- Visualise how temperature affects output determinism and attack repeatability
Module 6: Prompt Injection & Jailbreaking - Hands-On Attacks
- 6.1 Direct Prompt Injection
- Instruction override: “Ignore previous instructions and...”
- Role switch: force the model into a different persona
- Context confusion: blend data and instructions to confuse the model
- Delimiter injection: break out of structured prompt templates
- Completion hijack: manipulate the model’s continuation behaviour
- Token smuggling: bypass keyword filters using encoding tricks
- 6.2 Indirect Prompt Injection
- The most dangerous variant: attacker plants instructions in the content that the LLM reads
- Attack vectors: web pages, emails, documents, database entries, RAG corpora
- Real-world examples: Bing Chat (2023), ChatGPT plugins (2023), Copilot (2024)
- RAG poisoning: poison the vector database that feeds the agent
- 6.3 Jailbreaking Techniques
- Many-shot jailbreaking: overwhelm safety training with examples
- Persona assignment (DAN, DUDE, etc.)
- Hypothetical framing: “In a fictional scenario...”
- Competing objectives: exploit tension between helpfulness and safety
- Multi-turn / crescendo attacks: gradually escalate across conversation turns
- Cipher/encoding bypass: Base64, ROT13, pig latin
- LAB 6: Prompt Injection & Jailbreaking Lab
- Execute direct prompt injection against a locally hosted LLM with a system prompt
- Craft a role-switch attack and a delimiter injection attack - compare success rates
- Build an indirect prompt injection PoC: embed instructions in a document, feed it to the LLM via RAG
- Attempt three different jailbreak techniques against a safety-aligned model
- Execute a multi-turn crescendo attack: start innocent, escalate over 5+ turns
- Use Garak (LLM vulnerability scanner) to automate prompt injection testing against your local model
Module 7: OWASP Top 10 for LLMs - Exploitation Deep Dive
- 7.1 LLM01: Prompt Injection (covered in Module 6)
- Cross-reference with Module 6 attacks. Quick recap and connection to OWASP classification.
- 7.2 LLM02: Insecure Output Handling
- When LLM output is rendered unsanitised: XSS, SSRF, command injection via AI
- Chaining prompt injection → insecure output handling for full RCE
- 7.3 LLM03: Training Data Poisoning
- Clean-label poisoning: poison data without changing labels
- Backdoor/Trojan attacks: embed hidden triggers during training
- Supply chain attacks: model hub poisoning on HuggingFace, dependency confusion
- Federated learning poisoning: Byzantine attacks, model replacement
- 7.4 LLM06: Sensitive Information Disclosure
- Training data memorisation: verbatim, approximate, contextualised
- Extracting PII, API keys, and code snippets from models
- System prompt extraction techniques
- Membership inference: determine if specific data was in the training set
- 7.5 LLM04: Model Denial of Service
- Resource exhaustion via crafted inputs that maximise compute
- Context window flooding: consume all available tokens
- Recursive prompt patterns that cause exponential processing
- 7.6 LLM08: Excessive Agency
- When AI agents have too many permissions: delete file + injection = data destruction
- Payment API + injection = fraudulent transactions
- Admin access + injection = full account takeover
- Confused deputy attacks: agent acts on behalf of the user but follows attacker's instructions
- LAB 7: OWASP LLM Top 10 Exploitation
- Demonstrate insecure output handling: craft a prompt injection that generates XSS payload, render it
- System prompt extraction: use at least 2 techniques to extract a hidden system prompt from a target LLM
- Model DoS: craft an input designed to maximise token generation and measure resource consumption
- Simulate excessive agency: build a toy agent with file access, then exploit it via indirect injection
- Map each attack to its OWASP LLM Top 10 category and document remediation guidance
Module 8: Attacking Agentic AI, Red Teaming & What’s Next
- 8.1 Attacking Agentic AI Systems
- The agentic attack surface: webpages, emails, documents, APIs, memory stores
- Indirect injection in agent pipelines: installation → persistence → exfiltration
- Memory & RAG poisoning: long-term persistence through vector database injection
- Multi-agent attack propagation: Orchestrator → Sub-Agent A → External Tool
- Tool abuse & privilege escalation: forcing agents to use privileged tools.
- Defence considerations: message signing, least-privilege, human-in-the-loop
- 8.2 Automated Red Teaming & AI-vs-AI
- Red teaming with an attacker LLM: use one model to generate adversarial prompts against another
- Tree of Attacks with Pruning (TAP): systematic tree search for jailbreaks
- Training data extraction: prefix completion, repeated token attacks, divergence detection
- Tools: Garak, PyRIT (Microsoft Red Teaming Toolkit), Promptmap
- 8.3 Emerging Threats & The Future
- Emerging attack vectors (2024–2026): ASCII art injection, cipher bypass, LoRA backdoors, cross-context injection, speculative decoding attacks
- Model security in the age of fine-tuning: LoRA can strip safety in minutes
- Multimodal attacks: visual prompt injection, audio adversarial examples
- AI security architecture: input validation, system prompt hardening, adversarial training, canary tokens
- 8.4 Writing AI Security Vulnerability Reports
- AI Security Vulnerability Report structure: vulnerability class, affected system, severity, summary, steps to reproduce, impact, and remediation
- Responsible disclosure: 90-day timeline, severity-based urgency, proof of concept without maximising harm
- Where to report: Anthropic, OpenAI, Google, HackerOne AI category
- 8.5 AI Security Career Paths & Staying Current
- Career paths: AI Red Teamer, MLSecurity Engineer, AI Safety Researcher, AI Policy Analyst, AI Pentester
- Staying current: arXiv cs.CR + cs.LG, IEEE S&P, USENIX Security, NeurIPS, ICLR
- Communities: MLSecOps, AVID Discord, AI Village at DEF CON
- CTFs: AI Village CTF, HackAPrompt, SaTML conference
Lab Environment Requirements
- Participant Machine
- RAM: 16 GB minimum (32 GB recommended for running larger models)
- Storage: 20 GB free for model downloads
- OS: Linux, macOS, or Windows with WSL2
- Python 3.10+ with pip
- Docker (optional, but recommended for sandboxed environments)
- Software
- Ollama - local LLM inference
- LM Studio (optional) - GUI for local models
- Python packages: tiktoken, langchain, chromadb, openai, garak
- PyRIT - Microsoft Red Teaming Toolkit
- Text editor / IDE with terminal access
- AI Models
- Llama 3 8B (or Llama 3.1 8B) - primary lab model
- Mistral 7B - comparison model
- Dolphin-Mistral (uncensored) - for Module 5 comparison labs
- Cloud Alternative
- Google Colab Pro or cloud GPU instance (for participants without sufficient local hardware)
*Note: Participants will have access to session recordings for a period of 60 days.
Basic CLI comfort, security fundamentals, laptop with 16GB+ RAM
Interested in Joining the
Our advisor will contact you with event details, and exclusive offers!