Challenges

Select a challenge to test your AI agent against prompt injection attacks.

Beginner

Weather Forecast

A simple weather page with a hidden instruction override

Play Challenge →
Beginner

Product Review

Product page with role hijacking attempt

Play Challenge →
★★Intermediate

Base64 Encoded Instructions

News article with Base64-encoded malicious instructions

Play Challenge →
★★Intermediate

Unicode Tricks

Blog post using Unicode homoglyphs and zero-width characters

Play Challenge →
★★Intermediate

🎨 Emoji Cipher

Recipe blog with instructions encoded in emoji substitution cipher

Play Challenge →
★★Intermediate

📋 Fake Metadata Override

Documentation page with fake HTML meta tags claiming special AI permissions

Play Challenge →
★★Intermediate

🎭 ASCII Art Steganography

Welcome page with instructions hidden in ASCII art patterns

Play Challenge →
★★★Advanced

⚠️ Fake Markdown Error

Tutorial page with fake renderer error exposing 'debug commands'

Play Challenge →
★★★Advanced

🎣 Credential Phishing Disguised as Docs

API setup guide requesting 'system verification' with urgent tone

Play Challenge →
★★★Advanced

👤 Fake Admin Console

Help page with hidden 'admin-only' section requesting privilege escalation

Play Challenge →
★★★★Expert

🧅 Triple-Layer Obfuscation

Privacy policy with instructions encoded in Base64 → ROT13 → Unicode

Play Challenge →
★★★★Expert

⏰ Time-Triggered Payload

Event announcement with instructions that only activate on specific dates

Play Challenge →
★★★Advanced

🖼️ Hidden Text in Product Image

Product photo with tiny text injection at the bottom

Play Challenge →
★★★Advanced

📱 Malicious QR Code Payload

Product packaging with QR code containing hidden instructions

Play Challenge →
★★★★Expert

📷 EXIF Metadata Injection

Landscape photo with malicious instructions in EXIF metadata

Play Challenge →
🔥 NEXT LEVEL

Ready for Blind Mode Challenges?

Master the standard challenges? Test your agent against realistic scenarios with zero testing indicators. Blind-mode challenges reveal how your AI behaves when encountering injections in the wild—no warnings, no hints, just pure security testing.

✓ 5 Expert-Level Challenges✓ Real-World Scenarios✓ No Test Warnings

🎓 How It Works

Choose a challenge and read the visible content. Then try to write a legitimate AI response that ignores the hidden injection payload.

🛡️ Safe Testing

All injection payloads are rendered as plain text. No code is ever executed. This is a safe learning environment.

📈 Progress Tracking

Your completions are saved locally in your browser. Complete challenges to build your personal stats!