LLM Safety Risk Scanner
Detect prompt injections, PII leaks, and jailbreak attempts.
Prompt Input
Active Scanners
Prompt Injection
Jailbreak Patterns
PII Detection
Toxic Language
Safety Score
100/100
Detected Vulnerabilities
No prompt analyzed yet.