Implemented comprehensive frontend localization infrastructure supporting
5 languages (English, Spanish, French, Portuguese, Chinese) with measurement
unit preferences (Metric/Imperial). This lays the foundation for international
user support.
**Core Infrastructure:**
- Installed i18next, react-i18next, and i18next-browser-languagedetector
- Created I18nProvider component integrated into app layout
- Configured i18next with language detection and localStorage persistence (see the config sketch after this list)
- Created 35 translation files (5 languages × 7 namespaces)
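A minimal sketch of lib/i18n/config.ts: the language and namespace lists come from this commit, the remaining options are assumptions:

```typescript
// lib/i18n/config.ts — minimal sketch; options beyond the language and
// namespace lists are assumptions.
import i18n from "i18next";
import LanguageDetector from "i18next-browser-languagedetector";
import { initReactI18next } from "react-i18next";

export const SUPPORTED_LANGUAGES = ["en", "es", "fr", "pt", "zh"];
export const NAMESPACES = [
  "common", "tracking", "ai", "auth", "settings", "onboarding", "errors",
];

i18n
  .use(LanguageDetector) // checks localStorage first, then the browser
  .use(initReactI18next)
  .init({
    fallbackLng: "en",
    supportedLngs: SUPPORTED_LANGUAGES,
    ns: NAMESPACES,
    defaultNS: "common",
    detection: {
      order: ["localStorage", "navigator"],
      caches: ["localStorage"], // persist the user's choice
    },
    interpolation: { escapeValue: false }, // React escapes by default
  });

export default i18n;
```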
**Translation Namespaces:**
- common: App-wide UI elements, navigation, actions
- tracking: Activity tracking (feeding, sleep, diaper, milestones)
- ai: AI assistant chat interface
- auth: Authentication flows (login, signup, password reset)
- settings: Settings and preferences
- onboarding: Onboarding flow
- errors: Error messages and validation
**Custom Hooks:**
- useTranslation: Type-safe translation wrapper
- useLocale: Language and measurement system management
- useFormatting: Date, time, number, and unit formatting (sketched after this list)
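An illustrative sketch of useFormatting built on the Intl APIs; the real hook's surface may differ:

```typescript
// hooks/useFormatting.ts — illustrative sketch built on the Intl APIs;
// the actual hook's API may differ.
import { useTranslation } from "react-i18next";

export function useFormatting() {
  const { i18n } = useTranslation();
  const locale = i18n.language;

  return {
    formatDate: (d: Date) =>
      new Intl.DateTimeFormat(locale, { dateStyle: "medium" }).format(d),
    formatTime: (d: Date) =>
      new Intl.DateTimeFormat(locale, { timeStyle: "short" }).format(d),
    formatNumber: (n: number) => new Intl.NumberFormat(locale).format(n),
    // Intl can render the unit label itself, e.g. "3.2 kg" vs "7 lb".
    formatWeight: (n: number, unit: "kilogram" | "pound") =>
      new Intl.NumberFormat(locale, {
        style: "unit",
        unit,
        maximumFractionDigits: 1,
      }).format(n),
  };
}
```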
**Measurement Unit Support:**
- Created unit conversion utilities (weight, height, temperature, volume)
- Metric: kg, cm, °C, ml
- Imperial: lb, in, °F, oz
- Bidirectional conversion functions (see the sketch after this list)
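The converters are simple pure functions along these lines; the exported names are assumptions:

```typescript
// lib/utils/unitConversion.ts — sketch of the bidirectional converters.
export const kgToLb = (kg: number): number => kg * 2.2046226218;
export const lbToKg = (lb: number): number => lb / 2.2046226218;

export const cmToIn = (cm: number): number => cm / 2.54;
export const inToCm = (inches: number): number => inches * 2.54;

export const cToF = (c: number): number => (c * 9) / 5 + 32;
export const fToC = (f: number): number => ((f - 32) * 5) / 9;

export const mlToOz = (ml: number): number => ml / 29.5735295625; // US fluid oz
export const ozToMl = (oz: number): number => oz * 29.5735295625;
```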
**UI Components:**
- LanguageSelector: Dropdown to change app language (sketched after this list)
- MeasurementUnitSelector: Toggle between Metric/Imperial
- Integrated both into Settings page Preferences section
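A minimal sketch of LanguageSelector; the labels and markup are assumptions, the key call is i18n.changeLanguage, which the language detector also persists to localStorage:

```tsx
// components/settings/LanguageSelector.tsx — minimal sketch.
import { useTranslation } from "react-i18next";

const LANGUAGES = [
  { code: "en", label: "English" },
  { code: "es", label: "Español" },
  { code: "fr", label: "Français" },
  { code: "pt", label: "Português" },
  { code: "zh", label: "中文" },
];

export function LanguageSelector() {
  const { i18n } = useTranslation();
  return (
    <select
      value={i18n.language}
      onChange={(e) => i18n.changeLanguage(e.target.value)} // detector caches the choice
    >
      {LANGUAGES.map(({ code, label }) => (
        <option key={code} value={code}>
          {label}
        </option>
      ))}
    </select>
  );
}
```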
**Next Steps (Remaining):**
- Add measurement preferences to backend user schema
- Create onboarding flow with language/measurement selection
- Apply translations to existing components (dashboard, tracking forms)
- Implement multi-language AI responses
- Replace the current basic translations with professionally reviewed ones
**File Highlights:**
- lib/i18n/config.ts: i18next configuration
- hooks/useFormatting.ts: Formatting utilities with locale support
- lib/utils/unitConversion.ts: Unit conversion logic
- components/settings/*Selector.tsx: Language and measurement selectors
- locales/*/: Translation files for 5 languages
**Auth and Voice Input Fixes:**
- Fix login endpoint to return families as an array of objects instead of strings
- Update auth interface to match /auth/me endpoint structure
- Add silence detection to voice input (auto-stop after 1.5s; see the sketch after this list)
- Add comprehensive status messages to voice modal (Listening, Understanding, Saving)
- Unify voice input flow to use MediaRecorder + backend for all platforms
- Add null checks to prevent tracking page crashes from invalid data
- Wait for auth completion before loading family data in HomePage
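The 1.5s auto-stop can be sketched as below, assuming a Web Audio AnalyserNode taps the microphone stream; the RMS threshold is an assumption:

```typescript
// Sketch of the 1.5 s silence auto-stop; the RMS threshold is an assumption.
const SILENCE_MS = 1500;
const SILENCE_RMS = 0.01;

function watchForSilence(analyser: AnalyserNode, onSilence: () => void): () => void {
  const samples = new Float32Array(analyser.fftSize);
  let silentSince: number | null = null;
  let rafId = 0;

  const tick = () => {
    analyser.getFloatTimeDomainData(samples);
    let sum = 0;
    for (const s of samples) sum += s * s;
    const rms = Math.sqrt(sum / samples.length);

    if (rms < SILENCE_RMS) {
      silentSince ??= performance.now();
      if (performance.now() - silentSince >= SILENCE_MS) {
        onSilence(); // e.g. mediaRecorder.stop()
        return;
      }
    } else {
      silentSince = null; // speech resumed; reset the timer
    }
    rafId = requestAnimationFrame(tick);
  };

  rafId = requestAnimationFrame(tick);
  return () => cancelAnimationFrame(rafId); // caller can cancel early
}
```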
**Web Speech API Recognition Improvements:**
- Set continuous=true to keep listening through pauses
- Only process final results, ignoring interim transcripts (see the sketch after this list)
- Add usesFallback check to route Web Speech API transcripts through classification
- Desktop now captures complete phrases before classification
- Add detailed logging for debugging recognition flow
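A sketch of the recognition setup described above; event typings are loosened because lib.dom lacks full SpeechRecognition types:

```typescript
// Continuous recognition that only forwards final results.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.continuous = true;     // keep listening through pauses
recognition.interimResults = true; // interim events still fire but are skipped below

recognition.onresult = (event: any) => {
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const result = event.results[i];
    if (!result.isFinal) continue; // ignore interim transcripts
    const transcript = result[0].transcript.trim();
    console.log("final transcript:", transcript);
    // Web Speech transcripts are then routed through backend classification.
  }
};
```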
**Voice Classification Fixes:**
- Remove temperature parameter from GPT-5-mini activity extraction (not supported)
- Add classification state to useVoiceInput hook to avoid duplicate API calls
- Prevent infinite loop in VoiceFloatingButton by tracking lastClassifiedTranscript (guard sketched after this list)
- Use classification from backend directly instead of making a second request
- iOS Safari now successfully transcribes with Azure Whisper and classifies with GPT-5-mini
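The guard amounts to a ref check inside the hook; apart from lastClassifiedTranscript, the names here are assumptions:

```typescript
import { useCallback, useRef, useState } from "react";

// Sketch of the duplicate-classification guard in useVoiceInput.
export function useClassificationGuard() {
  const [classification, setClassification] = useState<unknown>(null);
  const lastClassifiedTranscript = useRef<string | null>(null);

  const acceptResult = useCallback((transcript: string, fromBackend: unknown) => {
    // Skip a transcript we already classified — the re-fire on each render
    // is what caused the VoiceFloatingButton infinite loop.
    if (transcript === lastClassifiedTranscript.current) return;
    lastClassifiedTranscript.current = transcript;
    // Reuse the classification the backend returned alongside the
    // transcript instead of issuing a second API call.
    setClassification(fromBackend);
  }, []);

  return { classification, acceptResult };
}
```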
**iOS Safari Voice Input Fixes:**
- Force MediaRecorder fallback for all iOS Safari devices
- Add iOS device detection to avoid Web Speech API on iOS (detection sketched below)
- Support multiple audio formats (webm, mp4, default) for compatibility
- Add comprehensive error logging throughout the flow
- Improve error messages with specific guidance for each error type
- Add console logging to track microphone permissions and recording state
- Better handling of getUserMedia permissions
This should help diagnose and fix the "Failed to recognize speech" error
by ensuring iOS Safari uses the MediaRecorder path with proper permissions.
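The detection and format probing reduce to roughly this; the iPadOS clause matters because newer iPads report their platform as "MacIntel":

```typescript
// iOS detection used to force the MediaRecorder path.
function isIOS(): boolean {
  return (
    /iPad|iPhone|iPod/.test(navigator.userAgent) ||
    (navigator.platform === "MacIntel" && navigator.maxTouchPoints > 1) // iPadOS
  );
}

// Probe container formats in order, falling back to the browser default
// when neither webm nor mp4 is supported.
function pickMimeType(): string | undefined {
  for (const type of ["audio/webm", "audio/mp4"]) {
    if (MediaRecorder.isTypeSupported(type)) return type;
  }
  return undefined;
}
```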
**Cross-Browser Voice Input (MediaRecorder Fallback):**
**Frontend changes:**
- Add MediaRecorder fallback for iOS Safari (no Web Speech API support; recording sketch after this list)
- Automatically detect browser capabilities and use appropriate method
- Add usesFallback flag to track which method is being used
- Update UI to show "Recording..." vs "Listening..." based on method
- Add iOS-specific indicator text
- Handle microphone permissions and errors properly
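A condensed sketch of the fallback recording path; the 10s hard cap is an assumption, not from the commit — in practice the silence detector shown earlier stops the recorder:

```typescript
// Record microphone audio with MediaRecorder and resolve with the blob.
async function recordAudio(mimeType?: string): Promise<Blob> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : undefined);
  const chunks: BlobPart[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  return new Promise((resolve) => {
    recorder.onstop = () => {
      stream.getTracks().forEach((t) => t.stop()); // release the microphone
      resolve(new Blob(chunks, { type: recorder.mimeType }));
    };
    recorder.start();
    // Safety net (assumption): stop after 10 s if nothing else has.
    setTimeout(() => recorder.state !== "inactive" && recorder.stop(), 10_000);
  });
}
```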
**Backend changes:**
- Update /api/v1/voice/transcribe to accept both audio files and text
- Support text-based classification (from Web Speech API)
- Support audio file transcription + classification (from MediaRecorder)
- Return unified response format with transcript and classification (assumed shape sketched after this list)
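The unified format and the client-side call can be sketched like this; the field names are assumptions, not confirmed backend code:

```typescript
// Assumed shape of the unified /api/v1/voice/transcribe response.
interface TranscribeResponse {
  transcript: string; // Whisper output, or the submitted text echoed back
  classification: Record<string, unknown>; // extracted activity data
}

// One client entry point for both paths: audio from the MediaRecorder
// fallback, plain text from the Web Speech API.
async function transcribe(input: Blob | string): Promise<TranscribeResponse> {
  const body = new FormData();
  if (typeof input === "string") {
    body.append("text", input);
  } else {
    body.append("audio", input, "recording.webm");
  }
  const res = await fetch("/api/v1/voice/transcribe", { method: "POST", body });
  if (!res.ok) throw new Error(`transcribe failed: ${res.status}`);
  return res.json();
}
```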
**How it works:**
- Chrome/Edge: Uses Web Speech API for real-time transcription
- iOS Safari: Records audio with MediaRecorder and sends it to the server for transcription
- The fallback is transparent to the user, with appropriate UI feedback
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>