# AI-Powered Smart Suggestions - Implementation Plan

## 📋 Overview

Implement AI-powered features that provide intelligent suggestions, thematic discovery, semantic search, and personalized recommendations to enhance Bible study and deepen Scripture understanding.

**Status:** Planning Phase

**Priority:** 🔵 Future

**Estimated Time:** 4-6 weeks (160-240 hours)

**Target Completion:** TBD

---
## 🎯 Goals & Objectives

### Primary Goals

1. Provide AI-powered verse recommendations
2. Enable semantic (meaning-based) search
3. Generate study questions automatically
4. Discover thematic connections
5. Personalize the user experience with ML

### User Value Proposition

- **For students**: Discover related content automatically
- **For scholars**: Find thematic patterns
- **For personal study**: Get personalized recommendations
- **For teachers**: Generate discussion questions
- **For explorers**: Uncover hidden connections

---
## ✨ Feature Specifications

### 1. AI Architecture

```typescript
interface AIConfig {
  // Providers
  provider: 'openai' | 'azure' | 'ollama' | 'anthropic'
  model: string // gpt-4, gpt-3.5-turbo, claude-3, llama2, etc.
  apiKey?: string
  endpoint?: string

  // Features
  enableSuggestions: boolean
  enableSemanticSearch: boolean
  enableQuestionGeneration: boolean
  enableSummarization: boolean
  enableThematicAnalysis: boolean

  // Behavior
  cacheResponses: boolean
  maxTokens: number
  temperature: number // 0-1, creativity
  enableRAG: boolean // Retrieval-Augmented Generation
}

interface AIService {
  // Core methods
  generateSuggestions(verse: VerseReference): Promise<Suggestion[]>
  semanticSearch(query: string): Promise<SearchResult[]>
  generateQuestions(passage: string): Promise<Question[]>
  summarizeChapter(book: string, chapter: number): Promise<string>
  analyzeThemes(verses: string[]): Promise<Theme[]>
  explainVerse(verse: string): Promise<Explanation>
}
```
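For illustration, a minimal sketch of how such a config might be resolved against defaults before use (the helper name, the default values, and the clamping behavior are assumptions, not part of the spec):

```typescript
// Hypothetical helper: merge a partial AIConfig with safe defaults and
// clamp temperature to the documented 0-1 range instead of rejecting it.
type Provider = 'openai' | 'azure' | 'ollama' | 'anthropic'

interface ResolvedAIConfig {
  provider: Provider
  model: string
  cacheResponses: boolean
  maxTokens: number
  temperature: number
  enableRAG: boolean
}

const AI_DEFAULTS = {
  cacheResponses: true,
  maxTokens: 1000,
  temperature: 0.7,
  enableRAG: true,
}

function resolveAIConfig(
  partial: { provider: Provider; model: string } & Partial<ResolvedAIConfig>
): ResolvedAIConfig {
  const merged = { ...AI_DEFAULTS, ...partial }
  // Clamp rather than throw: out-of-range temperatures become 0 or 1.
  merged.temperature = Math.min(1, Math.max(0, merged.temperature))
  return merged
}
```

This keeps provider setup forgiving: a UI that only knows the provider and model still yields a complete, valid config.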
### 2. Smart Verse Suggestions

```typescript
interface Suggestion {
  id: string
  type: 'related' | 'thematic' | 'contextual' | 'application' | 'cross-ref'
  verse: VerseReference
  reason: string // Why this was suggested
  relevanceScore: number // 0-1
  metadata?: {
    theme?: string
    category?: string
    connection?: string
  }
}

const SmartSuggestions: React.FC<{
  currentVerse: VerseReference
}> = ({ currentVerse }) => {
  const [suggestions, setSuggestions] = useState<Suggestion[]>([])
  const [loading, setLoading] = useState(false)

  useEffect(() => {
    loadSuggestions()
  }, [currentVerse])

  const loadSuggestions = async () => {
    setLoading(true)

    try {
      const response = await fetch('/api/ai/suggestions', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          verse: currentVerse,
          limit: 10
        })
      })

      const data = await response.json()
      setSuggestions(data.suggestions)
    } catch (error) {
      console.error('Failed to load suggestions:', error)
    } finally {
      setLoading(false)
    }
  }

  return (
    <Card>
      <CardHeader
        title="AI Suggestions"
        avatar={<AutoAwesomeIcon />}
        action={
          <IconButton onClick={loadSuggestions} disabled={loading}>
            <RefreshIcon />
          </IconButton>
        }
      />
      <CardContent>
        {loading ? (
          <Box display="flex" justifyContent="center" p={3}>
            <CircularProgress />
          </Box>
        ) : suggestions.length === 0 ? (
          <Alert severity="info">
            No suggestions available for this verse.
          </Alert>
        ) : (
          <List>
            {suggestions.map(suggestion => (
              <ListItem key={suggestion.id} divider>
                <ListItemIcon>
                  {getIconForType(suggestion.type)}
                </ListItemIcon>
                <ListItemText
                  primary={formatVerseReference(suggestion.verse)}
                  secondary={suggestion.reason}
                />
                <Chip
                  label={`${Math.round(suggestion.relevanceScore * 100)}%`}
                  size="small"
                  color={suggestion.relevanceScore > 0.7 ? 'success' : 'default'}
                />
              </ListItem>
            ))}
          </List>
        )}
      </CardContent>
    </Card>
  )
}
```
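The component above renders suggestions in whatever order the API returns them. A small client-side ranking pass could enforce the relevance ordering before display; this sketch assumes only the `relevanceScore` field from the `Suggestion` interface, and the 0.3 score floor is an assumption:

```typescript
// Sketch: sort suggestions by relevanceScore (descending), drop anything
// below a minimum score, and cap the list length for rendering.
interface SuggestionLike {
  id: string
  relevanceScore: number // 0-1, as defined on the Suggestion interface
}

function rankSuggestions<T extends SuggestionLike>(
  suggestions: T[],
  minScore = 0.3,
  limit = 10
): T[] {
  return [...suggestions]
    .filter(s => s.relevanceScore >= minScore)
    .sort((a, b) => b.relevanceScore - a.relevanceScore)
    .slice(0, limit)
}
```

Copying the array before sorting keeps the original API response untouched, which matters if it is cached elsewhere.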
### 3. Semantic Search with Vector Embeddings

```typescript
// Generate embeddings for Bible verses
const generateEmbedding = async (text: string): Promise<number[]> => {
  const response = await fetch('/api/ai/embed', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text })
  })

  const data = await response.json()
  return data.embedding
}

// Semantic search implementation (server-side; requires the pgvector extension)
const semanticSearch = async (query: string): Promise<SearchResult[]> => {
  // Generate embedding for query
  const queryEmbedding = await generateEmbedding(query)

  // Find similar verses using vector similarity (<=> is cosine distance)
  const results = await prisma.$queryRaw`
    SELECT
      v."id",
      v."book",
      v."chapter",
      v."verseNum",
      v."text",
      1 - (v."embedding" <=> ${queryEmbedding}::vector) AS similarity
    FROM "BibleVerse" v
    WHERE v."embedding" IS NOT NULL
    ORDER BY v."embedding" <=> ${queryEmbedding}::vector
    LIMIT 20
  `

  return results as SearchResult[]
}

const SemanticSearch: React.FC = () => {
  const [query, setQuery] = useState('')
  const [results, setResults] = useState<SearchResult[]>([])
  const [searching, setSearching] = useState(false)

  const handleSearch = async () => {
    if (!query.trim()) return

    setSearching(true)

    try {
      const response = await fetch('/api/ai/search/semantic', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ query })
      })

      const data = await response.json()
      setResults(data.results)
    } catch (error) {
      console.error('Semantic search failed:', error)
    } finally {
      setSearching(false)
    }
  }

  return (
    <Box>
      <Typography variant="h6" gutterBottom>
        Semantic Search
      </Typography>

      <Alert severity="info" sx={{ mb: 2 }}>
        Search by meaning, not just keywords. Ask questions like "verses about hope" or "God's love for humanity"
      </Alert>

      <Box display="flex" gap={1} mb={3}>
        <TextField
          fullWidth
          placeholder="What are you looking for? (e.g., 'overcoming fear')"
          value={query}
          onChange={(e) => setQuery(e.target.value)}
          onKeyDown={(e) => e.key === 'Enter' && handleSearch()}
        />
        <Button
          variant="contained"
          onClick={handleSearch}
          disabled={searching}
          startIcon={searching ? <CircularProgress size={20} /> : <SearchIcon />}
        >
          Search
        </Button>
      </Box>

      {/* Results */}
      {results.map(result => (
        <Card key={result.id} sx={{ mb: 2 }}>
          <CardContent>
            <Box display="flex" justifyContent="space-between" alignItems="start">
              <Box>
                <Typography variant="subtitle2" color="primary" gutterBottom>
                  {result.book} {result.chapter}:{result.verseNum}
                </Typography>
                <Typography variant="body2">
                  {result.text}
                </Typography>
              </Box>
              <Chip
                label={`${Math.round(result.similarity * 100)}% match`}
                size="small"
                color={result.similarity > 0.8 ? 'success' : 'default'}
              />
            </Box>
          </CardContent>
        </Card>
      ))}
    </Box>
  )
}
```
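The SQL above leans on pgvector's `<=>` cosine-distance operator, with `1 - distance` giving similarity. The same computation in plain TypeScript can be useful for re-ranking a handful of candidates without a database round trip:

```typescript
// Cosine similarity between two equal-length vectors:
// dot(a, b) / (|a| * |b|), in the range [-1, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('dimension mismatch')
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  // A zero vector has no direction; define its similarity as 0.
  if (normA === 0 || normB === 0) return 0
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}
```

Identical vectors score 1, orthogonal vectors 0, which matches the `similarity` column the query computes.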
### 4. AI Study Question Generator

```typescript
interface Question {
  id: string
  type: 'comprehension' | 'application' | 'reflection' | 'analysis' | 'discussion'
  question: string
  difficulty: 'easy' | 'medium' | 'hard'
  suggestedAnswer?: string
}

const generateStudyQuestions = async (
  passage: string,
  count: number = 5
): Promise<Question[]> => {
  const prompt = `
    Generate ${count} thoughtful study questions for the following Bible passage.
    Include a mix of comprehension, application, and reflection questions.

    Passage:
    ${passage}

    Return as JSON array with format:
    [
      {
        "type": "comprehension|application|reflection|analysis|discussion",
        "question": "the question",
        "difficulty": "easy|medium|hard"
      }
    ]
  `

  const response = await fetch('/api/ai/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      prompt,
      temperature: 0.7,
      maxTokens: 1000
    })
  })

  const data = await response.json()
  return JSON.parse(data.response)
}

const StudyQuestionGenerator: React.FC<{
  passage: string
}> = ({ passage }) => {
  const [questions, setQuestions] = useState<Question[]>([])
  const [generating, setGenerating] = useState(false)

  const handleGenerate = async () => {
    setGenerating(true)

    try {
      const generated = await generateStudyQuestions(passage)
      setQuestions(generated)
    } catch (error) {
      console.error('Failed to generate questions:', error)
    } finally {
      setGenerating(false)
    }
  }

  return (
    <Box>
      <Box display="flex" justifyContent="space-between" alignItems="center" mb={2}>
        <Typography variant="h6">
          Study Questions
        </Typography>
        <Button
          variant="outlined"
          onClick={handleGenerate}
          disabled={generating}
          startIcon={generating ? <CircularProgress size={20} /> : <AutoAwesomeIcon />}
        >
          Generate Questions
        </Button>
      </Box>

      {questions.length > 0 && (
        <List>
          {questions.map((question, index) => (
            <Card key={index} sx={{ mb: 2 }}>
              <CardContent>
                <Box display="flex" gap={1} mb={1}>
                  <Chip label={question.type} size="small" />
                  <Chip
                    label={question.difficulty}
                    size="small"
                    color={
                      question.difficulty === 'easy' ? 'success' :
                      question.difficulty === 'medium' ? 'warning' : 'error'
                    }
                  />
                </Box>
                <Typography variant="body1" fontWeight="500">
                  {index + 1}. {question.question}
                </Typography>
              </CardContent>
            </Card>
          ))}
        </List>
      )}
    </Box>
  )
}
```
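`JSON.parse(data.response)` above assumes the model returns bare JSON, but models frequently wrap output in markdown fences or add surrounding prose. A defensive parser could extract the array first; `parseJsonArray` is a hypothetical helper, not part of the plan:

```typescript
// Sketch: pull the first JSON array out of a raw LLM completion.
// Strips markdown code fences, then falls back to the outermost [...] span.
function parseJsonArray<T>(raw: string): T[] {
  // Remove ``` and ```json fence markers if the model added them.
  const cleaned = raw.replace(/```(?:json)?/g, '').trim()
  const start = cleaned.indexOf('[')
  const end = cleaned.lastIndexOf(']')
  if (start === -1 || end === -1 || end < start) {
    throw new Error('no JSON array found in model response')
  }
  return JSON.parse(cleaned.slice(start, end + 1))
}
```

`generateStudyQuestions` could then return `parseJsonArray<Question>(data.response)` and fail with a clear error instead of a raw `SyntaxError`.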
### 5. Thematic Analysis

```typescript
interface Theme {
  name: string
  description: string
  verses: VerseReference[]
  relevance: number // 0-1
  keywords: string[]
}

const analyzeThemes = async (verses: string[]): Promise<Theme[]> => {
  const prompt = `
    Analyze the following Bible verses and identify the main themes, topics, and theological concepts.
    For each theme, provide:
    - Name
    - Description
    - Keywords
    - Relevance score (0-1)

    Verses:
    ${verses.join('\n\n')}

    Return as JSON array.
  `

  const response = await fetch('/api/ai/analyze/themes', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt, verses })
  })

  const data = await response.json()
  return data.themes
}

const ThematicAnalysis: React.FC<{
  book: string
  chapter: number
}> = ({ book, chapter }) => {
  const [themes, setThemes] = useState<Theme[]>([])
  const [analyzing, setAnalyzing] = useState(false)

  useEffect(() => {
    performAnalysis()
  }, [book, chapter])

  const performAnalysis = async () => {
    setAnalyzing(true)

    try {
      // Fetch chapter verses
      const verses = await fetchChapterVerses(book, chapter)

      // Analyze themes
      const detectedThemes = await analyzeThemes(verses.map(v => v.text))
      setThemes(detectedThemes)
    } catch (error) {
      console.error('Theme analysis failed:', error)
    } finally {
      setAnalyzing(false)
    }
  }

  return (
    <Box>
      <Typography variant="h6" gutterBottom>
        Thematic Analysis
      </Typography>

      {analyzing ? (
        <Box display="flex" justifyContent="center" p={3}>
          <CircularProgress />
        </Box>
      ) : (
        <Grid container spacing={2}>
          {themes.map((theme, index) => (
            <Grid item xs={12} sm={6} key={index}>
              <Card>
                <CardContent>
                  <Typography variant="h6" gutterBottom>
                    {theme.name}
                  </Typography>
                  <Typography variant="body2" color="text.secondary" paragraph>
                    {theme.description}
                  </Typography>
                  <Box display="flex" gap={0.5} flexWrap="wrap" mb={1}>
                    {theme.keywords.map(keyword => (
                      <Chip key={keyword} label={keyword} size="small" />
                    ))}
                  </Box>
                  <LinearProgress
                    variant="determinate"
                    value={theme.relevance * 100}
                    sx={{ mt: 1 }}
                  />
                  <Typography variant="caption" color="text.secondary">
                    Relevance: {Math.round(theme.relevance * 100)}%
                  </Typography>
                </CardContent>
              </Card>
            </Grid>
          ))}
        </Grid>
      )}
    </Box>
  )
}
```
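Model-generated themes are not guaranteed to arrive sorted by relevance or with clean keyword lists. A post-processing pass (a hypothetical helper, not part of the plan) could normalize them before rendering, which also keeps the `Chip` keys unique:

```typescript
// Sketch: sort themes by relevance (descending) and deduplicate
// keywords case-insensitively, keeping the first spelling seen.
interface ThemeLike {
  name: string
  relevance: number // 0-1
  keywords: string[]
}

function normalizeThemes(themes: ThemeLike[]): ThemeLike[] {
  return [...themes]
    .sort((a, b) => b.relevance - a.relevance)
    .map(theme => {
      const seen = new Set<string>()
      const keywords = theme.keywords.filter(k => {
        const key = k.toLowerCase()
        if (seen.has(key)) return false
        seen.add(key)
        return true
      })
      return { ...theme, keywords }
    })
}
```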
### 6. RAG (Retrieval-Augmented Generation)

```typescript
// RAG implementation for contextual AI responses
const ragQuery = async (question: string): Promise<string> => {
  // Step 1: Find relevant verses using semantic search
  const relevantVerses = await semanticSearch(question)

  // Step 2: Build context from retrieved verses
  const contextText = relevantVerses
    .slice(0, 5)
    .map(v => `${v.book} ${v.chapter}:${v.verseNum} - ${v.text}`)
    .join('\n\n')

  // Step 3: Generate response with context
  const prompt = `
    You are a Bible study assistant. Answer the following question using ONLY the provided Scripture context.
    Be accurate and cite specific verses.

    Context:
    ${contextText}

    Question: ${question}

    Answer:
  `

  const response = await fetch('/api/ai/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      prompt,
      temperature: 0.3, // Lower temperature for accuracy
      maxTokens: 500
    })
  })

  const data = await response.json()
  return data.response
}

const RAGChatbot: React.FC = () => {
  const [messages, setMessages] = useState<Array<{ role: 'user' | 'assistant', content: string }>>([])
  const [input, setInput] = useState('')
  const [thinking, setThinking] = useState(false)

  const handleSend = async () => {
    if (!input.trim()) return

    const userMessage = { role: 'user' as const, content: input }
    setMessages(prev => [...prev, userMessage])
    setInput('')
    setThinking(true)

    try {
      const answer = await ragQuery(userMessage.content)
      setMessages(prev => [...prev, { role: 'assistant', content: answer }])
    } catch (error) {
      console.error('RAG query failed:', error)
    } finally {
      setThinking(false)
    }
  }

  return (
    <Box>
      <Typography variant="h6" gutterBottom>
        Ask the Bible
      </Typography>

      <Paper sx={{ height: 400, overflow: 'auto', p: 2, mb: 2 }}>
        {messages.map((msg, index) => (
          <Box
            key={index}
            sx={{
              mb: 2,
              display: 'flex',
              justifyContent: msg.role === 'user' ? 'flex-end' : 'flex-start'
            }}
          >
            <Paper
              sx={{
                p: 2,
                maxWidth: '70%',
                bgcolor: msg.role === 'user' ? 'primary.main' : 'grey.200',
                color: msg.role === 'user' ? 'white' : 'text.primary'
              }}
            >
              <Typography variant="body2">{msg.content}</Typography>
            </Paper>
          </Box>
        ))}

        {thinking && (
          <Box display="flex" gap={1} alignItems="center">
            <CircularProgress size={20} />
            <Typography variant="caption">Thinking...</Typography>
          </Box>
        )}
      </Paper>

      <Box display="flex" gap={1}>
        <TextField
          fullWidth
          placeholder="Ask a question about the Bible..."
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => e.key === 'Enter' && handleSend()}
        />
        <Button variant="contained" onClick={handleSend} disabled={thinking}>
          Send
        </Button>
      </Box>
    </Box>
  )
}
```
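Step 2 above always takes the top five verses regardless of length. A budgeted variant stops adding verses once the prompt context would overflow; this is a sketch, and the 2,000-character budget is an assumption to be tuned against the provider's context window:

```typescript
// Sketch: build a RAG context string from retrieved verses, keeping the
// highest-ranked verses (input order) until the character budget is spent.
interface RetrievedVerse {
  book: string
  chapter: number
  verseNum: number
  text: string
}

function buildContext(verses: RetrievedVerse[], maxChars = 2000): string {
  const lines: string[] = []
  let used = 0
  for (const v of verses) {
    const line = `${v.book} ${v.chapter}:${v.verseNum} - ${v.text}`
    // +2 accounts for the '\n\n' separator between entries.
    if (used + line.length + 2 > maxChars) break
    lines.push(line)
    used += line.length + 2
  }
  return lines.join('\n\n')
}
```

Because `semanticSearch` returns verses ordered by similarity, truncating from the tail discards only the weakest matches.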
---

## 🗄️ Database Schema

```prisma
model BibleVerse {
  // ... existing fields
  embedding  Unsupported("vector(1536)")? // For semantic search (pgvector)
  embeddedAt DateTime?
}

model AISuggestion {
  id     String @id @default(cuid())
  userId String
  user   User   @relation(fields: [userId], references: [id])

  sourceVerse String // book:chapter:verse
  targetVerse String
  type        String // related, thematic, contextual, etc.
  reason      String
  relevance   Float

  clicked Boolean  @default(false)
  helpful Boolean?

  createdAt DateTime @default(now())

  @@index([userId, sourceVerse])
}

model AICache {
  id        String   @id @default(cuid())
  query     String   @unique
  response  Json
  provider  String
  model     String
  tokens    Int
  createdAt DateTime @default(now())
  expiresAt DateTime

  @@index([query])
  @@index([expiresAt])
}
```
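The `AICache` model stores an explicit `expiresAt` rather than a TTL. Two small helpers (hypothetical names, not part of the schema) show how it might be computed on write and checked on read:

```typescript
// Sketch: derive an AICache.expiresAt timestamp from a TTL in hours,
// and decide whether a cached row is still servable.
function expiryFrom(createdAt: Date, ttlHours: number): Date {
  return new Date(createdAt.getTime() + ttlHours * 60 * 60 * 1000)
}

function isCacheFresh(expiresAt: Date, now: Date = new Date()): boolean {
  return expiresAt.getTime() > now.getTime()
}
```

A periodic job could then delete rows where `expiresAt` has passed, which is what the `@@index([expiresAt])` in the schema supports.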
---

## 📅 Implementation Timeline

### Phase 1: Foundation (Weeks 1-2)

- [ ] Set up AI provider integration
- [ ] Implement vector embeddings
- [ ] Build semantic search
- [ ] Create caching layer

### Phase 2: Features (Weeks 3-4)

- [ ] Smart suggestions engine
- [ ] Question generator
- [ ] Thematic analysis
- [ ] RAG chatbot

### Phase 3: Optimization (Weeks 5-6)

- [ ] Performance tuning
- [ ] Cost optimization
- [ ] A/B testing
- [ ] User feedback loop

---
## 💰 Cost Considerations

### OpenAI Pricing (estimated)

- GPT-4: $0.03/1K input tokens, $0.06/1K output tokens
- GPT-3.5-turbo: $0.0005/1K tokens
- Embeddings: $0.0001/1K tokens

### Monthly Estimates (10,000 active users)

- Embeddings (one-time): ~$50
- Suggestions (10/user/month): ~$150
- Semantic search (50/user/month): ~$25
- Questions (5/user/month): ~$200
- **Total**: ~$425 for the first month, ~$375/month thereafter (embeddings are a one-time cost)

### Cost Optimization

- Cache all responses (could cut API spend by roughly 60%)
- Use GPT-3.5-turbo where possible
- Rate-limit AI requests per user
- Consider self-hosted Ollama for basic tasks
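The per-feature figures above can be sanity-checked with a one-line calculation. For example, with GPT-3.5-turbo at $0.0005/1K tokens and an assumed ~100 tokens per search, 10,000 users at 50 searches/month comes to about $25, matching the semantic-search line; the token counts per request are assumptions to replace with measured values before budgeting:

```typescript
// Sketch: monthly API cost for one feature, given usage and per-1K pricing.
function monthlyCost(
  users: number,
  requestsPerUserPerMonth: number,
  tokensPerRequest: number,
  pricePer1kTokens: number
): number {
  const totalTokens = users * requestsPerUserPerMonth * tokensPerRequest
  return (totalTokens / 1000) * pricePer1kTokens
}
```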
---

**Document Version:** 1.0

**Last Updated:** 2025-10-13

**Status:** Ready for Implementation