Agenda
Morning: Threat landscape & threat modelling
- Why AI applies security patterns inconsistently — and why that’s dangerous
- New attack vectors: prompt injection, hallucinated packages, context window poisoning
- STRIDE extended to AI agents: treating the agent as a trust boundary
- Exercise 1: Prompt an AI agent to build a login form without security requirements — analyse the gaps
- Exercise 2: Build a threat model for an agent workflow
Afternoon: Guardrails, architecture & automation
- Security specs and global agent instructions (security-policy.md, CLAUDE.md)
- Automated gates: SAST, SCA, secrets scanning in CI/CD
- Secure-by-design: middleware, ORM and typing as structural guardrails
- Least privilege for agents: filesystem, network, tools, scope
- Exercises 3–5: Build guardrails → Define secure architecture → Secure Dark Factory simulation
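To make the "security specs and global agent instructions" item concrete, a file such as security-policy.md might look like the sketch below. The rules and wording are illustrative examples, not a prescribed template:

```markdown
# security-policy.md (illustrative sketch)

## Non-negotiable rules for generated code
- Never store passwords in plain text; use a vetted hashing library (e.g. bcrypt, argon2).
- All user input crossing a trust boundary is validated and parameterised (no string-built SQL).
- Secrets come from environment variables or a secrets manager, never from source files.
- New dependencies require a known registry, an exact version pin, and an SCA pass.

## Agent scope (least privilege)
- Filesystem: read/write only within the project workspace.
- Network: no outbound calls except the package registry.
- Tools: no shell commands that modify git history or CI configuration.
```

Referencing such a file from the agent's global instructions (e.g. CLAUDE.md) makes the policy apply to every generation, not just prompts that happen to mention security.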
Method
Hands-on throughout: the login feature from Exercise 1 is built without guardrails, secured step by step, and completed as a full Secure Dark Factory simulation with Semgrep, npm audit and Gitleaks.
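The three automated gates used in the simulation can be wired into CI as a short script. This is a minimal sketch; flags should be checked against the tool versions installed in your pipeline:

```shell
#!/usr/bin/env sh
# Fail the pipeline if any gate finds a problem.
set -e

# SAST: Semgrep with the community ruleset; --error exits non-zero on findings.
semgrep scan --config auto --error

# SCA: fail on high-severity (or worse) known-vulnerable dependencies.
npm audit --audit-level=high

# Secrets scanning: Gitleaks over the repository history.
gitleaks detect --source . --no-banner
```

Because `set -e` is in effect, the first failing gate stops the script, so an agent's output cannot be merged past a red gate.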
Prerequisites: Experience in software development. No prior security knowledge required.