feat(phase-1): implement PostgreSQL + Prisma + Authentication system

Core Features:
- Complete Prisma database schema with all entities (users, orgs, projects, checks, etc.)
- Production-grade authentication service with Argon2 password hashing
- JWT-based session management with HttpOnly cookies
- Comprehensive auth middleware with role-based access control
- RESTful auth API endpoints: register, login, logout, me, refresh
- Database seeding with demo data for development
- Rate limiting on auth endpoints (5 attempts/15min)
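
The auth rate limit above (5 attempts per 15 minutes) can be sketched as a minimal in-memory sliding-window limiter. This is an illustrative sketch only; the names (`createRateLimiter`, `allow`) are assumptions, not identifiers from this commit:

```typescript
// Hypothetical sliding-window rate limiter: at most `max` attempts
// per `windowMs` for a given key (e.g. an IP address).
function createRateLimiter(max: number, windowMs: number) {
  const attempts = new Map<string, number[]>();
  return {
    /** Returns true if this attempt is allowed; `now` is injectable for testing. */
    allow(key: string, now: number = Date.now()): boolean {
      // Keep only timestamps still inside the window.
      const recent = (attempts.get(key) ?? []).filter((t) => now - t < windowMs);
      if (recent.length >= max) {
        attempts.set(key, recent);
        return false; // over the limit - reject without recording
      }
      recent.push(now);
      attempts.set(key, recent);
      return true;
    },
  };
}
```

A denied attempt is not recorded, so clients cannot extend their own lockout by retrying.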

Technical Implementation:
- Type-safe authentication with Zod validation
- Proper error handling and logging throughout
- Secure password hashing with Argon2id
- JWT tokens with 7-day expiration
- Database transactions for atomic operations
- Comprehensive middleware for optional/required auth
- Role hierarchy system (MEMBER < ADMIN < OWNER)
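
The role hierarchy above reduces to a rank comparison. A minimal sketch, with hypothetical names (`ROLE_RANK`, `hasRole` are not from this commit):

```typescript
// Hypothetical sketch of the MEMBER < ADMIN < OWNER check;
// the middleware in this commit may be structured differently.
type Role = 'MEMBER' | 'ADMIN' | 'OWNER';

const ROLE_RANK: Record<Role, number> = { MEMBER: 0, ADMIN: 1, OWNER: 2 };

/** True when `actual` meets or exceeds the `required` role. */
function hasRole(actual: Role, required: Role): boolean {
  return ROLE_RANK[actual] >= ROLE_RANK[required];
}
```

Encoding the hierarchy as numeric ranks means new roles only need a rank, not new pairwise rules.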

Database Schema:
- Users with secure password storage
- Organizations with membership management
- Projects for organizing redirect checks
- Complete audit logging system
- API key management for programmatic access
- Bulk job tracking for future phases
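
As a hedged illustration, the user/org/membership portion of such a schema might look like the following in Prisma's schema language; every field name here is an assumption, not the schema shipped in this commit:

```prisma
// Illustrative fragment only - not the committed schema.
model User {
  id           String       @id @default(cuid())
  email        String       @unique
  passwordHash String       // Argon2id digest, never the raw password
  memberships  Membership[]
}

model Organization {
  id      String       @id @default(cuid())
  name    String
  members Membership[]
}

model Membership {
  id     String       @id @default(cuid())
  role   Role         @default(MEMBER)
  user   User         @relation(fields: [userId], references: [id])
  userId String
  org    Organization @relation(fields: [orgId], references: [id])
  orgId  String
}

enum Role {
  MEMBER
  ADMIN
  OWNER
}
```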

Backward Compatibility:
- All existing endpoints preserved and functional
- No breaking changes to legacy API responses
- New auth system runs alongside existing functionality

Ready for Phase 2: Enhanced redirect tracking with database persistence
Andrei
2025-08-18 07:25:45 +00:00
parent db9e3ef650
commit 459eda89fe
11 changed files with 1364 additions and 1 deletions

apps/worker/src/index.ts

@@ -0,0 +1,67 @@
/**
 * Background Worker for Redirect Intelligence v2
 *
 * Handles bulk jobs, monitoring, and other background tasks.
 */
import 'dotenv/config';
import { Worker, Queue } from 'bullmq';
import IORedis from 'ioredis';

const REDIS_URL = process.env.REDIS_URL || 'redis://localhost:6379';

// BullMQ workers hold blocking Redis connections, which requires
// maxRetriesPerRequest to be disabled on the shared ioredis instance.
const redis = new IORedis(REDIS_URL, { maxRetriesPerRequest: null });

console.log('🔄 Redirect Intelligence v2 Worker starting...');

// Placeholder queues and workers - real logic lands in later phases
const bulkQueue = new Queue('bulk-checks', { connection: redis });
const monitoringQueue = new Queue('monitoring', { connection: redis });

const bulkWorker = new Worker('bulk-checks', async (job) => {
  console.log('Processing bulk job:', job.id);
  // Bulk processing logic will be implemented in Phase 6
  return { status: 'completed', message: 'Bulk job processing not yet implemented' };
}, { connection: redis });

const monitoringWorker = new Worker('monitoring', async (job) => {
  console.log('Processing monitoring job:', job.id);
  // Monitoring logic will be implemented in Phase 10
  return { status: 'completed', message: 'Monitoring not yet implemented' };
}, { connection: redis });

bulkWorker.on('completed', (job) => {
  console.log(`✅ Bulk job ${job.id} completed`);
});
bulkWorker.on('failed', (job, err) => {
  console.error(`❌ Bulk job ${job?.id} failed:`, err);
});
monitoringWorker.on('completed', (job) => {
  console.log(`✅ Monitoring job ${job.id} completed`);
});
monitoringWorker.on('failed', (job, err) => {
  console.error(`❌ Monitoring job ${job?.id} failed:`, err);
});

// Graceful shutdown: close workers before dropping the Redis connection
async function shutdown() {
  console.log('🛑 Shutting down worker...');
  await bulkWorker.close();
  await monitoringWorker.close();
  await redis.quit();
  process.exit(0);
}
process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);

console.log('🚀 Worker is ready to process jobs');
console.log(`📡 Connected to Redis: ${REDIS_URL}`);

export { bulkQueue, monitoringQueue };