# ai_comments_to_tasks
An advanced web application that analyzes long PR messages using AI, with intelligent token-aware chunking, concurrent processing, and specialized prompts for different message types.
## Features

- Token-Aware Chunking: Uses tiktoken to intelligently split PR conversations into optimal chunks (max 800 tokens each)
- Concurrent Processing: Batched analysis with 3 chunks processed simultaneously for faster results
- Individual LLM Calls: Each chunk gets its own specialized AI analysis
- Chunk Classification: Automatically identifies bot reviews, user comments, code changes, and system messages
- Severity Assessment: Issues are categorized as low, medium, high, or critical
- Actionable Insights: Extracts specific code issues and action items from each message
- Proper Markdown Rendering: Clean display of formatted analysis results
- Ordered Results: Shows all chunk analyses in sequence with overall summary
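The chunk classification feature above can be sketched as a small heuristic function. This is a minimal illustration only: the bot names, regex signals, and fallback rules below are assumptions, not the app's actual logic.

```typescript
// Hypothetical classifier for the chunk types listed above; the real app's
// heuristics may differ, and the bot names here are illustrative.
type ChunkType = "bot_review" | "user_comment" | "code_change" | "system_message" | "mixed";

function classifyChunk(text: string): ChunkType {
  const signals: Array<[ChunkType, RegExp]> = [
    ["bot_review", /\bbot\b|coderabbit|dependabot/i],
    ["code_change", /^(\+\+\+ |--- |@@ )/m],
    ["system_message", /\b(merged|closed|requested a review|added the label)\b/i],
  ];
  const matched = signals.filter(([, re]) => re.test(text)).map(([type]) => type);
  if (matched.length > 1) return "mixed"; // several signal types in one chunk
  if (matched.length === 1) return matched[0];
  return "user_comment"; // default: plain human commentary
}
```

A chunk that trips more than one signal (e.g. a bot message that also announces a merge) falls into the `mixed` bucket, matching the fifth category listed above.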
## How It Works

1. Token-Aware Chunking: Text is split using PR-specific separators, then further divided based on token limits
2. Classification: Each chunk is categorized (bot_review, user_comment, code_change, system_message, mixed)
3. Concurrent Analysis: Chunks are processed in batches of 3 with specialized prompts:
   - Bot reviews: Focus on code issues, tool recommendations, severity
   - User comments: Extract feedback, questions, approval signals
   - Code changes: Assess scope, impact, complexity
   - System messages: Extract relevant metadata
4. Concatenation: Results are combined with an overall summary
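The splitting step can be sketched roughly as follows. To keep the example dependency-free, token counts are approximated as characters / 4 (the real app counts exactly with tiktoken), and the boundary separators are illustrative:

```typescript
const MAX_TOKENS = 800;

// Stand-in for tiktoken so the sketch stays self-contained: OpenAI tokens
// average roughly 4 characters of English text. The real app counts exactly.
const approxTokens = (text: string): number => Math.ceil(text.length / 4);

// Oversized pieces are hard-split so no single piece can exceed the budget.
function hardSplit(piece: string): string[] {
  const maxChars = MAX_TOKENS * 4;
  const out: string[] = [];
  for (let i = 0; i < piece.length; i += maxChars) out.push(piece.slice(i, i + maxChars));
  return out;
}

// Split on conversation boundaries first (illustrative separators), then pack
// consecutive pieces greedily into chunks that stay under the token budget.
function chunkConversation(raw: string): string[] {
  const boundary = /\n(?=(?:#|>|\*\*|On .+ wrote:))/;
  const pieces = raw.split(boundary).flatMap(hardSplit).filter((p) => p.trim().length > 0);
  const chunks: string[] = [];
  let current = "";
  for (const piece of pieces) {
    if (current && approxTokens(current + piece) > MAX_TOKENS) {
      chunks.push(current);
      current = piece;
    } else {
      current += piece;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

The greedy packing preserves conversation order, so downstream analyses can be displayed in the same sequence the messages appeared in the PR.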
## Performance

- Tiktoken Integration: Precise token counting ensures optimal chunk sizes
- Concurrent Processing: Up to 3× faster analysis by running 3 batched API calls at once
- Reduced Token Limits: Faster LLM responses with focused 500-token analysis calls
- Smart Batching: Respectful API usage with small delays between batches
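The batching strategy can be sketched like this. The batch size and delay match the figures above, but `analyzeChunk` is a caller-supplied stand-in here, not the app's actual LLM call:

```typescript
const BATCH_SIZE = 3;
const BATCH_DELAY_MS = 200; // illustrative pause between batches to be polite to the API

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Process chunks BATCH_SIZE at a time; Promise.all preserves result order,
// so outputs line up with their source chunks.
async function analyzeInBatches<T>(
  chunks: string[],
  analyzeChunk: (chunk: string, index: number) => Promise<T>,
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < chunks.length; i += BATCH_SIZE) {
    const batch = chunks.slice(i, i + BATCH_SIZE);
    const batchResults = await Promise.all(
      batch.map((chunk, j) => analyzeChunk(chunk, i + j)),
    );
    results.push(...batchResults);
    if (i + BATCH_SIZE < chunks.length) await sleep(BATCH_DELAY_MS); // skip delay after last batch
  }
  return results;
}
```

Because each batch completes before the next starts, at most three requests are ever in flight, which is where the "up to 3×" speedup over sequential calls comes from.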
## Project Structure

- `backend/index.ts`: Enhanced Hono server with chunking strategy and individual LLM calls
- `frontend/index.html`: Main HTML interface with improved styling
- `frontend/index.tsx`: React frontend with markdown rendering and structured display
- `shared/types.ts`: Enhanced TypeScript types for chunk analysis
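The chunk-analysis types in `shared/types.ts` might look roughly like this; the field names below are assumptions inferred from the features above, not the file's actual contents:

```typescript
// Hypothetical shapes for the shared analysis types; field names are illustrative.
export type Severity = "low" | "medium" | "high" | "critical";

export interface ChunkAnalysis {
  index: number; // position of the chunk in the conversation
  chunkType: "bot_review" | "user_comment" | "code_change" | "system_message" | "mixed";
  severity: Severity;
  summary: string; // markdown-formatted analysis text
  codeIssues: string[]; // specific issues extracted from the chunk
  actionItems: string[]; // concrete follow-ups
}

export interface AnalysisResult {
  overallSummary: string;
  chunks: ChunkAnalysis[]; // kept in conversation order
}

// Example value conforming to the shape above:
export const example: ChunkAnalysis = {
  index: 0,
  chunkType: "bot_review",
  severity: "medium",
  summary: "Bot flagged a potential null dereference.",
  codeIssues: ["possible null dereference in handler"],
  actionItems: ["add a null check before use"],
};
```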
## Usage

1. Copy the entire conversation from your GitHub PR page
2. Paste the text into the textarea (include bot messages, reviews, and comments)
3. Click "Analyze PR Messages"
4. Get a comprehensive analysis with:
   - Overall summary of the PR
   - Individual chunk analyses in order
   - Severity badges and chunk type indicators
   - Specific code issues and action items
   - Tool messages and recommendations
The application processes each message chunk individually with specialized AI prompts, then provides both detailed per-chunk insights and an overall assessment.
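One way to implement that per-type specialization is a simple prompt lookup keyed by chunk type. The prompt wording below is illustrative, not the app's actual prompts:

```typescript
// Hypothetical prompt table; the real system prompts may be worded differently.
type ChunkType = "bot_review" | "user_comment" | "code_change" | "system_message" | "mixed";

const SYSTEM_PROMPTS: Record<ChunkType, string> = {
  bot_review:
    "Identify code issues, tool recommendations, and a severity (low/medium/high/critical). Respond in JSON.",
  user_comment:
    "Extract feedback, open questions, and approval signals. Respond in JSON.",
  code_change:
    "Assess the scope, impact, and complexity of this change. Respond in JSON.",
  system_message:
    "Extract relevant metadata (labels, merges, reviewers). Respond in JSON.",
  mixed:
    "Summarize all actionable content in this mixed chunk. Respond in JSON.",
};

function promptFor(type: ChunkType): string {
  return SYSTEM_PROMPTS[type];
}
```

Each chunk's classification selects its system prompt, and the per-chunk JSON responses are then concatenated into the overall assessment.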
## Tech Stack

- Backend: Hono + Val Town OpenAI (GPT-4o-mini with JSON mode)
- Frontend: React + TailwindCSS with custom markdown renderer
- Storage: None (stateless analysis)
- Chunking: Advanced text splitting based on PR conversation patterns
- Analysis: Individual LLM calls per chunk + overall summary generation
## Implemented Features

- ✅ Token-aware intelligent PR text chunking with tiktoken (max 800 tokens per chunk)
- ✅ Concurrent processing with batched analysis (3 chunks at a time)
- ✅ Individual AI analysis for each chunk with specialized prompts
- ✅ Severity assessment (low/medium/high/critical)
- ✅ Code issue extraction and actionable item identification
- ✅ Proper markdown rendering with syntax highlighting
- ✅ Structured display with badges and visual indicators
- ✅ Overall summary generation from all chunk analyses
- ✅ Error handling and validation
- ✅ Ordered results showing all chunks in sequence
- ✅ Performance optimizations for faster analysis