Add backend with analytics, notifications, and enhanced features

Backend:
- Complete NestJS backend implementation with comprehensive features
- Analytics: Weekly/monthly reports with PDF/CSV export
- Smart notifications: Persistent notifications with milestones and anomaly detection
- AI safety: Medical disclaimer triggers and prompt injection protection
- COPPA/GDPR compliance: Full audit logging system

Frontend:
- Updated settings page and analytics components
- API integration improvements

Docs:
- Added implementation gaps tracking
- Azure OpenAI integration documentation
- Testing and post-launch summaries

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: andupetcu
Date: 2025-10-01 15:22:50 +03:00
Parent: 999bc39467
Commit: a91a7b009a
22 changed files with 5048 additions and 10 deletions

# Product Analytics Dashboard Guide
## Metrics, KPIs, and Data-Driven Decision Making
---
## Overview
This document defines the key metrics, analytics dashboards, and monitoring strategies for the Maternal App to enable data-driven product decisions and rapid iteration based on user behavior.
### Success Criteria (from Implementation Plan)
**MVP Launch (Month 1)**

- 1,000 downloads
- 60% day-7 retention
- 4.0+ app store rating
- <2% crash rate
- 5+ activities logged per day per active user
- 70% of users trying AI assistant

**3-Month Goals**

- 10,000 active users
- 500 premium subscribers
- 50% month-over-month growth
- 4.5+ app store rating

**6-Month Vision**

- 50,000 active users
- 2,500 premium subscribers
- Break-even on operational costs
---
## Key Performance Indicators (KPIs)
### 1. User Acquisition Metrics
#### Download & Registration Funnel
```
Metric                  Target    Formula
─────────────────────────────────────────────────────
App Store Impressions   100,000   Total views
Download Rate           3%        Downloads / Impressions
Registration Rate       75%       Signups / Downloads
Onboarding Completion   90%       Completed / Started
Time to First Value     < 2 min   First activity logged
```
**Dashboard Queries**:
```sql
-- Daily registration funnel
SELECT
  DATE(created_at) as date,
  COUNT(*) FILTER (WHERE step = 'download') as downloads,
  COUNT(*) FILTER (WHERE step = 'registration_started') as started_registration,
  COUNT(*) FILTER (WHERE step = 'registration_completed') as completed_registration,
  COUNT(*) FILTER (WHERE step = 'onboarding_completed') as completed_onboarding,
  COUNT(*) FILTER (WHERE step = 'first_activity') as first_activity
FROM user_funnel_events
WHERE created_at >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY DATE(created_at)
ORDER BY date DESC;

-- Conversion rates by channel
SELECT
  acquisition_channel,
  COUNT(*) as total_users,
  AVG(CASE WHEN onboarding_completed THEN 1 ELSE 0 END) as onboarding_completion_rate,
  AVG(time_to_first_activity_minutes) as avg_time_to_value
FROM users
WHERE created_at >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY acquisition_channel;
```
### 2. Engagement Metrics
#### Daily Active Users (DAU) / Monthly Active Users (MAU)
```typescript
// Analytics service tracking
export interface EngagementMetrics {
  dau: number;                     // Users active in last 24h
  wau: number;                     // Users active in last 7 days
  mau: number;                     // Users active in last 30 days
  dauMauRatio: number;             // Stickiness: DAU/MAU (target: >20%)
  averageSessionDuration: number;  // Minutes (target: >5 min)
  sessionsPerUser: number;         // Per day (target: >2)
}
```
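As a quick illustration of how the stickiness ratio falls out of raw session data, here is a minimal sketch; the `SessionRecord` shape and function names are hypothetical, not the real schema:

```typescript
// Sketch: derive DAU/MAU stickiness from raw session records.
// `SessionRecord` and its fields are illustrative, not the actual schema.
interface SessionRecord {
  userId: string;
  lastActiveAt: Date;
}

// Distinct users with at least one session since `since`.
function activeUsers(sessions: SessionRecord[], since: Date): Set<string> {
  return new Set(
    sessions.filter((s) => s.lastActiveAt >= since).map((s) => s.userId),
  );
}

// DAU / MAU; target is > 0.20.
function stickiness(sessions: SessionRecord[], now: Date): number {
  const day = 24 * 60 * 60 * 1000;
  const dau = activeUsers(sessions, new Date(now.getTime() - day)).size;
  const mau = activeUsers(sessions, new Date(now.getTime() - 30 * day)).size;
  return mau === 0 ? 0 : dau / mau;
}
```

The SQL below expresses the same rolling-window logic against `user_sessions`.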
**Dashboard Queries**:
```sql
-- DAU/WAU/MAU trend
WITH daily_users AS (
  SELECT
    DATE(last_active_at) as date,
    user_id
  FROM user_sessions
  WHERE last_active_at >= CURRENT_DATE - INTERVAL '30 days'
)
SELECT
  d.date,
  COUNT(DISTINCT d.user_id) as dau,
  (SELECT COUNT(DISTINCT user_id) FROM daily_users
   WHERE date > d.date - INTERVAL '7 days' AND date <= d.date) as wau,
  (SELECT COUNT(DISTINCT user_id) FROM daily_users) as mau,
  ROUND(COUNT(DISTINCT d.user_id)::numeric /
        NULLIF((SELECT COUNT(DISTINCT user_id) FROM daily_users), 0) * 100, 2) as dau_mau_ratio
FROM daily_users d
GROUP BY d.date
ORDER BY d.date DESC;
-- Power users (top 20% by activity)
SELECT
  user_id,
  COUNT(*) as total_activities,
  COUNT(DISTINCT DATE(created_at)) as active_days,
  AVG(session_duration_seconds) / 60 as avg_session_minutes
FROM activities
WHERE created_at >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY user_id
HAVING COUNT(*) > (
  SELECT PERCENTILE_CONT(0.8) WITHIN GROUP (ORDER BY activity_count)
  FROM (SELECT COUNT(*) as activity_count FROM activities GROUP BY user_id) counts
)
ORDER BY total_activities DESC;
```
#### Feature Adoption
```typescript
export interface FeatureAdoption {
  feature: string;
  totalUsers: number;
  adoptionRate: number;            // % of total users
  timeToAdoption: number;          // Days since signup
  retentionAfterAdoption: number;  // % still using after 7 days
}

// Target adoption rates:
const targetAdoption = {
  activityTracking: 0.95, // 95% core feature
  aiAssistant: 0.70,      // 70% AI engagement
  voiceInput: 0.40,       // 40% voice adoption
  familySharing: 0.60,    // 60% multi-user
  analytics: 0.80,        // 80% view insights
  exportReports: 0.25,    // 25% premium feature
};
```
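These targets can be checked mechanically against observed rates. A small sketch, where the function name and map shapes are illustrative and observed rates would come from the feature-usage query that follows:

```typescript
// Sketch: flag features whose observed adoption falls short of target.
// Both maps are feature name -> rate in [0, 1].
function adoptionGaps(
  targets: Record<string, number>,
  observed: Record<string, number>,
): string[] {
  return Object.entries(targets)
    .filter(([feature, target]) => (observed[feature] ?? 0) < target)
    .map(([feature]) => feature);
}

// e.g. adoptionGaps(targetAdoption, ratesFromWarehouse) lists features
// that need product attention this week.
```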
**Dashboard Queries**:
```sql
-- Feature adoption over time
SELECT
  feature_name,
  COUNT(DISTINCT user_id) as users,
  COUNT(DISTINCT user_id)::float /
    (SELECT COUNT(*) FROM users WHERE created_at <= CURRENT_DATE) as adoption_rate,
  AVG(EXTRACT(DAY FROM first_use_at - u.created_at)) as avg_days_to_adoption
FROM feature_usage fu
JOIN users u ON fu.user_id = u.id
WHERE fu.first_use_at >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY feature_name
ORDER BY adoption_rate DESC;
```
### 3. Retention Metrics
#### Cohort Retention Analysis
```typescript
export interface CohortRetention {
  cohort: string;  // e.g. "2025-01-W1"
  day0: number;    // 100% (signup)
  day1: number;    // Target: >40%
  day7: number;    // Target: >60%
  day30: number;   // Target: >40%
  day90: number;   // Target: >30%
}
```
**Dashboard Queries**:
```sql
-- Weekly cohort retention
WITH cohorts AS (
  SELECT
    user_id,
    DATE_TRUNC('week', created_at) as cohort_week
  FROM users
),
retention AS (
  SELECT
    c.cohort_week,
    COUNT(DISTINCT c.user_id) as cohort_size,
    COUNT(DISTINCT CASE
      WHEN DATE(s.last_active_at) = DATE(c.cohort_week)
      THEN s.user_id
    END) as day0,
    COUNT(DISTINCT CASE
      WHEN DATE(s.last_active_at) = DATE(c.cohort_week) + INTERVAL '1 day'
      THEN s.user_id
    END) as day1,
    COUNT(DISTINCT CASE
      WHEN DATE(s.last_active_at) BETWEEN
        DATE(c.cohort_week) AND DATE(c.cohort_week) + INTERVAL '7 days'
      THEN s.user_id
    END) as day7,
    COUNT(DISTINCT CASE
      WHEN DATE(s.last_active_at) BETWEEN
        DATE(c.cohort_week) AND DATE(c.cohort_week) + INTERVAL '30 days'
      THEN s.user_id
    END) as day30
  FROM cohorts c
  LEFT JOIN user_sessions s ON c.user_id = s.user_id
  GROUP BY c.cohort_week
)
SELECT
  cohort_week,
  cohort_size,
  ROUND(day0::numeric / cohort_size * 100, 2) as day0_retention,
  ROUND(day1::numeric / cohort_size * 100, 2) as day1_retention,
  ROUND(day7::numeric / cohort_size * 100, 2) as day7_retention,
  ROUND(day30::numeric / cohort_size * 100, 2) as day30_retention
FROM retention
ORDER BY cohort_week DESC;
```
### 4. Monetization Metrics
#### Conversion & Revenue
```typescript
export interface MonetizationMetrics {
  // Trial & Subscription
  trialStarts: number;
  trialToPayingConversion: number; // Target: >30%
  churnRate: number;               // Target: <5% monthly

  // Revenue
  mrr: number;         // Monthly Recurring Revenue
  arpu: number;        // Average Revenue Per User
  ltv: number;         // Lifetime Value
  cac: number;         // Customer Acquisition Cost
  ltvCacRatio: number; // Target: >3

  // Pricing tiers
  premiumSubscribers: number;
  premiumAdoptionRate: number; // % of active users
}
```
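The relationship between these fields is worth making explicit: a common first-order estimate is LTV ≈ ARPU / monthly churn rate. A sketch, with hypothetical function names:

```typescript
// Sketch: back-of-envelope LTV from ARPU and churn.
// This is the standard geometric-lifetime approximation; the
// payments-based LTV query below is more accurate once data exists.
function estimateLtv(arpuMonthly: number, monthlyChurnRate: number): number {
  return monthlyChurnRate === 0 ? Infinity : arpuMonthly / monthlyChurnRate;
}

// LTV:CAC; target is > 3.
function ltvCacRatio(ltv: number, cac: number): number {
  return cac === 0 ? Infinity : ltv / cac;
}
```

At a $10 ARPU and 5% monthly churn this yields a $200 LTV, so keeping CAC under roughly $66 holds the ratio above the target of 3.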
**Dashboard Queries**:
```sql
-- MRR trend and growth
SELECT
  DATE_TRUNC('month', subscription_start_date) as month,
  COUNT(*) as new_subscriptions,
  COUNT(*) FILTER (WHERE previous_subscription_id IS NOT NULL) as upgrades,
  COUNT(*) FILTER (WHERE subscription_end_date IS NOT NULL) as churned,
  SUM(price) as mrr,
  LAG(SUM(price)) OVER (ORDER BY DATE_TRUNC('month', subscription_start_date)) as previous_mrr,
  ROUND((SUM(price) - LAG(SUM(price)) OVER (ORDER BY DATE_TRUNC('month', subscription_start_date))) /
    NULLIF(LAG(SUM(price)) OVER (ORDER BY DATE_TRUNC('month', subscription_start_date)), 0) * 100, 2
  ) as mrr_growth_rate
FROM subscriptions
GROUP BY month
ORDER BY month DESC;

-- LTV calculation
WITH user_revenue AS (
  SELECT
    user_id,
    SUM(amount) as total_revenue,
    MIN(payment_date) as first_payment,
    MAX(payment_date) as last_payment,
    COUNT(*) as payment_count
  FROM payments
  WHERE status = 'completed'
  GROUP BY user_id
)
SELECT
  AVG(total_revenue) as avg_ltv,
  PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY total_revenue) as median_ltv,
  AVG(EXTRACT(DAY FROM last_payment - first_payment)) as avg_lifetime_days
FROM user_revenue;

-- Churn analysis
SELECT
  DATE_TRUNC('month', cancelled_at) as month,
  COUNT(*) as churned_users,
  AVG(EXTRACT(DAY FROM cancelled_at - subscription_start_date)) as avg_days_before_churn,
  cancellation_reason,
  COUNT(*) FILTER (WHERE cancellation_reason IS NOT NULL) as reason_count
FROM subscriptions
WHERE cancelled_at IS NOT NULL
GROUP BY month, cancellation_reason
ORDER BY month DESC, reason_count DESC;
```
### 5. Product Quality Metrics
#### Technical Health
```typescript
export interface QualityMetrics {
  // Performance
  apiResponseTimeP95: number; // Target: <2s
  apiResponseTimeP99: number; // Target: <3s
  errorRate: number;          // Target: <1%

  // Reliability
  uptime: number;            // Target: >99.9%
  crashFreeUsers: number;    // Target: >98%
  crashFreeSessions: number; // Target: >99.5%

  // User satisfaction
  appStoreRating: number; // Target: >4.0
  nps: number;            // Net Promoter Score (target: >50)
  csat: number;           // Customer Satisfaction (target: >80%)
}
```
**Dashboard Queries**:
```sql
-- API performance monitoring
SELECT
  DATE_TRUNC('hour', timestamp) as hour,
  endpoint,
  COUNT(*) as request_count,
  ROUND(AVG(response_time_ms), 2) as avg_response_time,
  PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY response_time_ms) as p95_response_time,
  PERCENTILE_CONT(0.99) WITHIN GROUP (ORDER BY response_time_ms) as p99_response_time,
  COUNT(*) FILTER (WHERE status_code >= 500) as server_errors,
  COUNT(*) FILTER (WHERE status_code >= 400 AND status_code < 500) as client_errors
FROM api_logs
WHERE timestamp >= NOW() - INTERVAL '24 hours'
GROUP BY hour, endpoint
HAVING COUNT(*) > 100 -- Only endpoints with significant traffic
ORDER BY hour DESC, p99_response_time DESC;

-- Crash analytics
SELECT
  DATE(created_at) as date,
  platform,
  app_version,
  COUNT(DISTINCT user_id) as affected_users,
  COUNT(*) as crash_count,
  error_message,
  stack_trace
FROM error_logs
WHERE severity = 'fatal'
  AND created_at >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY date, platform, app_version, error_message, stack_trace
ORDER BY affected_users DESC;
```
---
## Analytics Dashboard Templates
### 1. Executive Dashboard (Daily Review)
**Key Metrics Card Layout**:
```
┌────────────────────┬───────────────┬───────────────┐
│ Daily Active Users │ MRR           │ Uptime        │
│ 5,234 ↑ 12%        │ $12,450 ↑ 8%  │ 99.98%        │
├────────────────────┼───────────────┼───────────────┤
│ New Signups        │ Churn Rate    │ NPS           │
│ 342 ↑ 5%           │ 4.2% ↓ 0.3%   │ 62 ↑ 3        │
└────────────────────┴───────────────┴───────────────┘
📊 7-Day User Growth Trend
[Line chart: DAU over time]
📊 Feature Adoption (Last 7 Days)
[Bar chart: % of users by feature]
🚨 Alerts & Issues
• P95 response time elevated (2.3s, target: 2.0s)
• Crash rate on Android 1.2.0 (3.1%, target: <2%)
```
### 2. Product Analytics Dashboard
**User Journey Funnel**:
```sql
-- Onboarding funnel conversion
SELECT
  'App Download' as step,
  1 as step_number,
  COUNT(*) as users,
  100.0 as conversion_rate
FROM downloads
WHERE created_at >= CURRENT_DATE - INTERVAL '7 days'
UNION ALL
SELECT
  'Registration Started' as step,
  2,
  COUNT(*),
  ROUND(COUNT(*)::numeric / (SELECT COUNT(*) FROM downloads WHERE created_at >= CURRENT_DATE - INTERVAL '7 days') * 100, 2)
FROM users
WHERE created_at >= CURRENT_DATE - INTERVAL '7 days'
UNION ALL
SELECT
  'Onboarding Completed' as step,
  3,
  COUNT(*),
  ROUND(COUNT(*)::numeric / (SELECT COUNT(*) FROM downloads WHERE created_at >= CURRENT_DATE - INTERVAL '7 days') * 100, 2)
FROM users
WHERE onboarding_completed_at IS NOT NULL
  AND created_at >= CURRENT_DATE - INTERVAL '7 days'
UNION ALL
SELECT
  'First Activity Logged' as step,
  4,
  COUNT(DISTINCT user_id),
  ROUND(COUNT(DISTINCT user_id)::numeric / (SELECT COUNT(*) FROM downloads WHERE created_at >= CURRENT_DATE - INTERVAL '7 days') * 100, 2)
FROM activities
WHERE created_at >= CURRENT_DATE - INTERVAL '7 days'
  AND created_at <= (SELECT u.created_at FROM users u WHERE u.id = activities.user_id) + INTERVAL '24 hours'
ORDER BY step_number;
```
**User Segmentation**:
```typescript
export enum UserSegment {
  NEW_USER = 'new_user',     // < 7 days
  ENGAGED = 'engaged',       // 3+ activities/day
  AT_RISK = 'at_risk',       // No activity in 7 days
  POWER_USER = 'power_user', // Top 20% by activity
  PREMIUM = 'premium',       // Paid subscription
  CHURNED = 'churned',       // No activity in 30 days
}

// Segment users for targeted interventions
const segments = {
  new_user: {
    criteria: 'days_since_signup < 7',
    action: 'Send onboarding emails',
  },
  engaged: {
    criteria: 'activities_per_day >= 3',
    action: 'Upsell premium features',
  },
  at_risk: {
    criteria: 'days_since_last_activity BETWEEN 7 AND 29',
    action: 'Re-engagement campaign',
  },
  churned: {
    criteria: 'days_since_last_activity >= 30',
    action: 'Win-back campaign',
  },
};
```
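The criteria above can be applied in code as a simple precedence chain. A sketch with illustrative field names; the ordering (churn and at-risk checks first) is a design choice, and unmatched users fall through as casual:

```typescript
// Sketch: map a user's stats to one of the segment labels above.
// `UserStats` fields are illustrative, not the real schema.
type Segment =
  | 'new_user' | 'engaged' | 'at_risk' | 'premium' | 'churned' | null;

interface UserStats {
  daysSinceSignup: number;
  daysSinceLastActivity: number;
  activitiesPerDay: number;
  hasPremium: boolean;
}

function assignSegment(u: UserStats): Segment {
  if (u.daysSinceLastActivity >= 30) return 'churned';
  if (u.daysSinceLastActivity >= 7) return 'at_risk';
  if (u.daysSinceSignup < 7) return 'new_user';
  if (u.hasPremium) return 'premium';
  if (u.activitiesPerDay >= 3) return 'engaged';
  return null; // casual user: no targeted intervention
}
```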
### 3. A/B Testing Dashboard
**Experiment Tracking**:
```typescript
export interface ABTest {
  id: string;
  name: string;
  hypothesis: string;
  variants: {
    control: {
      users: number;
      conversionRate: number;
    };
    variant: {
      users: number;
      conversionRate: number;
    };
  };
  pValue: number; // Statistical significance
  winner?: 'control' | 'variant';
  status: 'running' | 'completed' | 'cancelled';
}

// Example: Test new onboarding flow
const onboardingTest: ABTest = {
  id: 'exp_001',
  name: 'New Onboarding Flow',
  hypothesis: 'Simplified 3-step onboarding will increase completion rate from 75% to 85%',
  variants: {
    control: {
      users: 1000,
      conversionRate: 0.75,
    },
    variant: {
      users: 1000,
      conversionRate: 0.82,
    },
  },
  pValue: 0.03, // Statistically significant (< 0.05)
  winner: 'variant',
  status: 'completed',
};
```
**Dashboard Queries**:
```sql
-- A/B test results
WITH test_users AS (
  SELECT
    experiment_id,
    variant,
    user_id,
    CASE WHEN action_completed THEN 1 ELSE 0 END as converted
  FROM ab_test_assignments
  WHERE experiment_id = 'exp_001'
)
SELECT
  variant,
  COUNT(*) as total_users,
  SUM(converted) as conversions,
  ROUND(AVG(converted) * 100, 2) as conversion_rate,
  ROUND(STDDEV(converted), 4) as std_dev
FROM test_users
GROUP BY variant;

-- Calculate statistical significance (chi-square test)
-- Use external tool or statistics library
```
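Where no statistics library is at hand, significance for a two-variant test can be sketched directly. This uses a two-proportion z-test, which is equivalent to the chi-square test for two groups, with a normal CDF built from the Abramowitz and Stegun erf approximation:

```typescript
// Sketch: two-sided p-value for a two-proportion z-test.
// convA/convB are conversion counts, nA/nB are sample sizes.
function twoProportionPValue(
  convA: number, nA: number,
  convB: number, nB: number,
): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pPool = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / nA + 1 / nB));
  const z = Math.abs(pA - pB) / se;

  // erf via the Abramowitz and Stegun rational approximation (max error ~1.5e-7)
  const erf = (x: number): number => {
    const t = 1 / (1 + 0.3275911 * Math.abs(x));
    const y = 1 - (((((1.061405429 * t - 1.453152027) * t) + 1.421413741) * t
      - 0.284496736) * t + 0.254829592) * t * Math.exp(-x * x);
    return x >= 0 ? y : -y;
  };

  // Two-sided p-value from the standard normal CDF
  return 2 * (1 - 0.5 * (1 + erf(z / Math.SQRT2)));
}
```

On the example above (750/1000 vs 820/1000), this rejects the null at the 5% level; a production experiment platform would also handle sequential looks and multiple comparisons.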
---
## Monitoring & Alerting
### Alert Rules
**Critical Alerts** (PagerDuty/Slack)
```yaml
alerts:
  - name: "High Error Rate"
    condition: "error_rate > 5%"
    window: "5 minutes"
    severity: "critical"
    notification: "pagerduty"

  - name: "API Response Time Degradation"
    condition: "p95_response_time > 3s"
    window: "10 minutes"
    severity: "high"
    notification: "slack"

  - name: "Database Connection Pool Exhausted"
    condition: "active_connections >= 95% of pool_size"
    window: "1 minute"
    severity: "critical"
    notification: "pagerduty"

  - name: "Crash Rate Spike"
    condition: "crash_rate > 2%"
    window: "1 hour"
    severity: "high"
    notification: "slack"
```
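An evaluator for windowed rules like "error_rate > 5% over 5 minutes" is straightforward. A minimal sketch, where the types and field names are illustrative and a real pipeline would pull samples from the metrics store and route firing rules to PagerDuty or Slack:

```typescript
// Sketch: decide whether a windowed error-rate rule should fire.
interface AlertRule {
  name: string;
  threshold: number; // e.g. 0.05 for 5%
  windowMs: number;  // e.g. 5 * 60 * 1000
  severity: 'low' | 'medium' | 'high' | 'critical';
}

interface Sample {
  timestamp: number; // epoch millis
  errored: boolean;
}

function shouldFire(rule: AlertRule, samples: Sample[], now: number): boolean {
  const inWindow = samples.filter((s) => s.timestamp >= now - rule.windowMs);
  if (inWindow.length === 0) return false;
  const errorRate = inWindow.filter((s) => s.errored).length / inWindow.length;
  return errorRate > rule.threshold;
}
```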
**Business Alerts** (Email/Slack)
```yaml
alerts:
  - name: "Daily Active Users Drop"
    condition: "today_dau < yesterday_dau * 0.8"
    window: "daily"
    severity: "medium"
    notification: "email"

  - name: "Churn Rate Increase"
    condition: "monthly_churn > 7%"
    window: "weekly"
    severity: "medium"
    notification: "slack"

  - name: "Low Onboarding Completion"
    condition: "onboarding_completion_rate < 80%"
    window: "daily"
    severity: "low"
    notification: "email"
```
---
## Rapid Iteration Framework
### Week 1-2 Post-Launch: Monitoring & Triage
```markdown
**Focus**: Identify and fix critical issues

Daily Tasks:
- [ ] Review crash reports (target: <2%)
- [ ] Check error logs and API failures
- [ ] Monitor onboarding completion rate (target: >90%)
- [ ] Track day-1 retention (target: >40%)

Weekly Review:
- Analyze user feedback from in-app surveys
- Identify top 3 pain points
- Prioritize bug fixes vs. feature requests
- Plan hotfix releases if needed
```
### Week 3-4: Optimization
```markdown
**Focus**: Improve core metrics

Experiments to Run:
1. A/B test onboarding flow variations
2. Test different push notification timings
3. Optimize AI response quality
4. Improve activity tracking UX

Success Metrics:
- Increase day-7 retention to 60%
- Increase AI assistant usage to 70%
- Reduce time-to-first-value to <2 minutes
```
### Month 2: Feature Iteration
```markdown
**Focus**: Expand value proposition

Based on Data:
- Identify most-used features (double down)
- Identify least-used features (improve or remove)
- Analyze user segments (power users vs. casual)
- Test premium feature adoption

New Features to Test:
- Sleep coaching (if sleep tracking is popular)
- Meal planning (if feeding tracking is high-engagement)
- Community forums (if users request social features)
```
---
## Tools & Integration
### Recommended Analytics Stack
**Core Analytics**: PostHog (open-source, self-hosted)
```typescript
// Backend integration
import { PostHog } from 'posthog-node';

const posthog = new PostHog(
  process.env.POSTHOG_API_KEY!, // assumed to be set in the environment
  { host: 'https://app.posthog.com' }, // or your self-hosted instance URL
);

// Track events
posthog.capture({
  distinctId: userId,
  event: 'activity_logged',
  properties: {
    type: 'feeding',
    method: 'voice',
  },
});

// User properties
posthog.identify({
  distinctId: userId,
  properties: {
    email: user.email,
    isPremium: subscription.isPremium,
    familySize: family.members.length,
  },
});
```
**Error Tracking**: Sentry
```typescript
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
  tracesSampleRate: 1.0,
});

// Errors are captured automatically; events can also be sent manually:
Sentry.captureMessage('User encountered issue', 'warning');
```
**Uptime Monitoring**: UptimeRobot / Pingdom
```yaml
checks:
  - name: "API Health"
    url: "https://api.maternalapp.com/health"
    interval: "1 minute"

  - name: "Web App"
    url: "https://app.maternalapp.com"
    interval: "1 minute"
```
**Performance**: Grafana + Prometheus
```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'maternal-app-backend'
    metrics_path: '/metrics'
    static_configs:
      - targets: ['localhost:3000']
```
---
## Conclusion
This analytics framework enables:
1. **Data-Driven Decisions**: Track what matters
2. **Rapid Iteration**: Identify and fix issues quickly
3. **User Understanding**: Segment and personalize
4. **Business Growth**: Monitor revenue and churn
5. **Product Quality**: Maintain high standards
Review dashboards daily, iterate weekly, and adjust strategy monthly based on real-world usage data.