1 Threat-modeling agentic pipelines
This chapter covers
- Recognizing how AI reshapes offensive security through agentic pipelines.
- Explaining core AI concepts and components.
- Distinguishing modern adversarial AI methods from traditional security automations.
- Applying ethical and operational boundaries in AI security testing.
Every generation of security testers has faced a gap in automation. Scripts made us faster, and frameworks made us organized, but none of these systems and tools could think and act autonomously. Then large language models arrived. Almost overnight, testers gained access to AI agents: reasoning engines that could prioritize scans, rewrite payloads, and summarize results in real time. Yet without structured systems in place, these AI agents can become chaotic, producing ideas without accountability or reproducibility. This book is written for offensive security practitioners: people whose job is to think like attackers. That includes bug hunters working within disclosure programs, red teamers hired to simulate adversary behavior, penetration testers conducting authorized assessments, and security researchers studying how systems fail so they can be made stronger.