May 17, 2025
InfosecGirls Session - 17th May, 2025
Topic: LLM security for builders — prompt injection basics, data leakage risks, and guardrails when shipping AI-assisted features.
Summary
- Threat overview for apps that call external or hosted language models.
- Prompt injection and indirect injection in RAG and tool-calling flows.
- Data handling: minimising sensitive context in prompts and logging redaction.
- Practical guardrails: output policies, human review hooks, and abuse monitoring.