Add prompt injection protection for AI endpoints
Implemented layered protection against prompt injection attacks for the AI endpoints:
**Detection Patterns:**
- System prompt manipulation (ignore/disregard/forget instructions)
- Role manipulation (pretend to be, act as)
- Data exfiltration (show system prompt, list users)
- Command injection (execute code, run command)
- Jailbreak attempts (DAN mode, developer mode, admin mode)
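
A minimal sketch of how these patterns might be expressed. The regexes, names, and risk levels below are illustrative assumptions, not the exact rules in lib/security/promptSecurity.ts:

```typescript
// Illustrative only: the real pattern list in lib/security/promptSecurity.ts may differ.
type RiskLevel = "low" | "medium" | "high";

interface InjectionPattern {
  name: string;
  pattern: RegExp;
  risk: RiskLevel;
}

const INJECTION_PATTERNS: InjectionPattern[] = [
  { name: "system_manipulation", pattern: /\b(ignore|disregard|forget)\b.{0,40}\b(instructions?|rules?|prompt)\b/i, risk: "high" },
  { name: "role_manipulation",   pattern: /\b(pretend to be|act as)\b/i,                                            risk: "medium" },
  { name: "data_exfiltration",   pattern: /\b(show|reveal|print)\b.{0,30}\bsystem prompt\b|\blist (all )?users\b/i, risk: "high" },
  { name: "command_injection",   pattern: /\b(execute|run)\b.{0,20}\b(code|command|script)\b/i,                     risk: "high" },
  { name: "jailbreak",           pattern: /\b(dan|developer|admin) mode\b/i,                                        risk: "high" },
];

// Returns the first matching pattern, or null if the prompt looks clean.
export function detectInjection(prompt: string): InjectionPattern | null {
  return INJECTION_PATTERNS.find((p) => p.pattern.test(prompt)) ?? null;
}
```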
**Input Validation:**
- Maximum length: 2,000 characters
- Maximum line length: 500 characters
- Maximum repeated characters: 20 consecutive
- Special character ratio limit: 30%
- HTML/JavaScript injection blocking
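
A hedged sketch of the validation checks, with thresholds matching the limits above (the function name and return shape are assumptions):

```typescript
// Thresholds mirror the limits listed above; the actual implementation may differ.
const MAX_LENGTH = 2000;
const MAX_LINE_LENGTH = 500;
const MAX_SPECIAL_CHAR_RATIO = 0.3;

export function validateInput(prompt: string): { valid: boolean; reason?: string } {
  if (prompt.length > MAX_LENGTH) return { valid: false, reason: "too_long" };
  if (prompt.split("\n").some((line) => line.length > MAX_LINE_LENGTH))
    return { valid: false, reason: "line_too_long" };
  // Reject runs of more than 20 identical consecutive characters.
  if (/(.)\1{20,}/.test(prompt)) return { valid: false, reason: "repeated_characters" };
  // Reject prompts where more than 30% of characters are non-alphanumeric, non-space.
  const special = (prompt.match(/[^a-zA-Z0-9\s]/g) ?? []).length;
  if (prompt.length > 0 && special / prompt.length > MAX_SPECIAL_CHAR_RATIO)
    return { valid: false, reason: "special_char_ratio" };
  // Block obvious HTML/JavaScript injection.
  if (/<\s*script|javascript:|on\w+\s*=/i.test(prompt))
    return { valid: false, reason: "html_injection" };
  return { valid: true };
}
```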
**Sanitization:**
- HTML tag removal
- Zero-width character stripping
- Control character removal
- Whitespace normalization
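
Roughly, the sanitizer could look like this (the specific regexes are assumptions about the implementation):

```typescript
// Illustrative sanitizer; the character classes here are assumptions.
export function sanitizePrompt(input: string): string {
  return input
    .replace(/<[^>]*>/g, "")                 // strip HTML tags
    .replace(/[\u200B-\u200D\uFEFF]/g, "")   // strip zero-width characters
    .replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F\u007F]/g, "") // remove control chars (keeps tab/newline/CR)
    .replace(/\s+/g, " ")                    // normalize whitespace
    .trim();
}
```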
**Rate Limiting:**
- 5 suspicious attempts per minute per user
- Automatic clearing on successful validation
- Per-user tracking with session storage
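
A minimal in-memory sketch of the limiter behaviour described above (the real code tracks attempts in session storage; the names and the Map-based store here are assumptions):

```typescript
const WINDOW_MS = 60_000;  // 1 minute window
const MAX_ATTEMPTS = 5;    // suspicious attempts allowed per window

const attempts = new Map<string, number[]>(); // userId -> timestamps of suspicious attempts

// Records a suspicious attempt; returns false once the user exceeds 5 per minute.
export function registerSuspiciousAttempt(userId: string): boolean {
  const now = Date.now();
  const recent = (attempts.get(userId) ?? []).filter((t) => now - t < WINDOW_MS);
  recent.push(now);
  attempts.set(userId, recent);
  return recent.length <= MAX_ATTEMPTS;
}

// Called when a prompt passes validation, per the automatic clearing described above.
export function clearAttempts(userId: string): void {
  attempts.delete(userId);
}
```

The sliding window is implemented by filtering out timestamps older than one minute on each attempt, so no background cleanup is needed.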
**Context Awareness:**
- Parenting keyword validation
- Domain-appropriate scope checking
- Lenient validation for short prompts
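
A hypothetical version of the scope check (the keyword list and the short-prompt threshold are illustrative assumptions):

```typescript
// Assumed keyword list and threshold; the real values live in lib/security/promptSecurity.ts.
const PARENTING_KEYWORDS = ["child", "kid", "toddler", "baby", "sleep", "tantrum", "school", "parent"];
const SHORT_PROMPT_THRESHOLD = 50;

export function isInScope(prompt: string): boolean {
  // Short prompts (e.g. follow-up questions) are validated leniently.
  if (prompt.length < SHORT_PROMPT_THRESHOLD) return true;
  const lower = prompt.toLowerCase();
  return PARENTING_KEYWORDS.some((kw) => lower.includes(kw));
}
```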
**Implementation:**
- lib/security/promptSecurity.ts - Core validation logic
- app/api/ai/chat/route.ts - Integrated validation
- scripts/test-prompt-injection.mjs - 19 test cases (all passing)
- lib/security/README.md - Documentation
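
A rough sketch of how the validation is wired into the chat route, following a typical Next.js App Router handler (the export name validatePrompt, the import path, and the response shape are assumptions):

```typescript
// app/api/ai/chat/route.ts — simplified, illustrative wiring only.
import { NextResponse } from "next/server";
// Export name is an assumption; see lib/security/promptSecurity.ts for the real API.
import { validatePrompt } from "@/lib/security/promptSecurity";

export async function POST(req: Request) {
  const { userId, message } = await req.json();

  const check = validatePrompt(message, userId);
  if (!check.valid) {
    // Rejected prompts are logged and counted toward the per-user rate limit.
    return NextResponse.json({ error: "Your message could not be processed." }, { status: 400 });
  }

  // ...continue with the sanitized prompt and the existing AI provider call...
  return NextResponse.json({ ok: true });
}
```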
**Test Coverage:**
✅ Valid parenting questions (2 tests)
✅ System manipulation attempts (4 tests)
✅ Role manipulation (1 test)
✅ Data exfiltration (3 tests)
✅ Command injection (2 tests)
✅ Jailbreak techniques (2 tests)
✅ Length attacks (2 tests)
✅ Character encoding attacks (2 tests)
✅ Edge cases (1 test)
All suspicious attempts are logged with user ID, reason, risk level,
and timestamp for security monitoring.
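
The log entry shape is roughly as follows (only the four fields listed above are confirmed; the types are assumptions):

```typescript
// Assumed shape of a security log entry.
interface SuspiciousAttemptLog {
  userId: string;
  reason: string;                        // e.g. "system_manipulation", "too_long"
  riskLevel: "low" | "medium" | "high";
  timestamp: string;                     // ISO 8601
}
```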
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>