Test OpenClaw
Security Lab: Prompt Injection Testing Platform
Can your AI agent resist prompt injection attacks? Challenge yourself with real-world injection scenarios and learn how to build more resilient AI systems. All payloads are safeβrendered as display data, never executable.
How to Use This Site
Choose a challenge below, then copy the displayed content and paste it into your AI agent. Your goal: determine if the agent correctly ignores any hidden injection attempts within the content. Compare results across challenges to identify weaknesses in your agent's defenses.
Basic Injections
Direct instruction overrides and simple role hijacking
Obfuscation Techniques
Encoded payloads, Unicode tricks, and hidden text
Base64 Encoded Instructions
News article with Base64-encoded malicious instructions
Unicode Tricks
Blog post using Unicode homoglyphs and zero-width characters
π¨ Emoji Cipher
Recipe blog with instructions encoded in emoji substitution cipher
π Fake Metadata Override
Documentation page with fake HTML meta tags claiming special AI permissions
π ASCII Art Steganography
Welcome page with instructions hidden in ASCII art patterns
Context Manipulation
Fake system messages and social engineering
β οΈ Fake Markdown Error
Tutorial page with fake renderer error exposing 'debug commands'
π£ Credential Phishing Disguised as Docs
API setup guide requesting 'system verification' with urgent tone
π€ Fake Admin Console
Help page with hidden 'admin-only' section requesting privilege escalation
πΌοΈ Hidden Text in Product Image
Product photo with tiny text injection at the bottom
π± Malicious QR Code Payload
Product packaging with QR code containing hidden instructions
Advanced Persistence
Multi-stage attacks and payload chaining
π§ Triple-Layer Obfuscation
Privacy policy with instructions encoded in Base64 β ROT13 β Unicode
β° Time-Triggered Payload
Event announcement with instructions that only activate on specific dates
π· EXIF Metadata Injection
Landscape photo with malicious instructions in EXIF metadata
Ready to Test Your Defenses?
Join the security lab and learn how to build AI agents that can't be fooled by prompt injection attacks. Every challenge teaches a real-world attack pattern.