# ai_comments_to_tasks
A web application that uses AI to analyze long PR conversations through a streamlined concurrent processing workflow.
## Workflow

- Token Measurement: Precisely count tokens in the input text using gpt-tokenizer
- Configurable Chunking: Split text into chunks based on a user-defined max tokens per chunk (see the sketch after this list)
- Concurrent Analysis: Send all chunks simultaneously to separate LLM calls for maximum speed
- Summary Generation: Create an executive summary from all chunk analyses
- Complete Report: Return summary + all individual chunk responses in markdown format
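A minimal sketch of the measure-and-chunk step, assuming the `gpt-tokenizer` package's `encode`/`decode` helpers; the `Chunk` shape and the fixed-window split are illustrative, not the project's exact implementation:

```ts
import { encode, decode } from "gpt-tokenizer"; // "npm:gpt-tokenizer" on Val Town

export interface Chunk {
  index: number;
  content: string;
  tokenCount: number;
}

// Encode once, slice the token stream into fixed-size windows, and decode
// each window back to text. This is a naive split: chunk boundaries may land
// mid-sentence, which the per-chunk prompts have to tolerate.
export function chunkByTokens(text: string, maxTokensPerChunk: number): Chunk[] {
  const tokens = encode(text); // tokens.length is also the total count for the stats
  const chunks: Chunk[] = [];
  for (let start = 0; start < tokens.length; start += maxTokensPerChunk) {
    const window = tokens.slice(start, start + maxTokensPerChunk);
    chunks.push({ index: chunks.length, content: decode(window), tokenCount: window.length });
  }
  return chunks;
}
```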
## Features

- Token-Aware Processing: Precise token counting and configurable chunk sizes (100-4000 tokens)
- Concurrent LLM Calls: All chunks processed simultaneously for maximum speed
- Executive Summary: AI-generated high-level summary from all analyses
- Complete Transparency: See original content and token count for each chunk
- Configurable Response Length: Adjust how detailed each analysis can be (100-2000 tokens)
- No Content Filtering: AI analyzes every detail without omitting any information
- Full Markdown Export: Download the complete analysis report as formatted markdown (sketched below)
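The export feature can be as small as a client-side Blob download. This sketch assumes a plain browser environment; `downloadMarkdown` is a hypothetical helper name, not the project's actual function:

```ts
// Hypothetical helper: turn the assembled report string into a .md download.
function downloadMarkdown(report: string, filename = "pr-analysis.md"): void {
  const blob = new Blob([report], { type: "text/markdown" });
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url);
}
```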
## How It Works

1. Input Processing: Measure total tokens in the PR conversation text
2. Smart Chunking: Split the text into chunks based on the user-specified token limit
3. Concurrent Analysis: Process all chunks simultaneously with specialized prompts (sketched after this list)
4. Summary Generation: Create a comprehensive summary from all chunk analyses
5. Report Assembly: Combine the summary with the detailed chunk analyses
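A sketch of steps 3 and 4: fan out one completion per chunk with `Promise.all`, then fan in a single summary call. It assumes the Val Town std OpenAI client and the GPT-4o-mini model named in the tech stack; the prompts and helper names are illustrative:

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

interface Chunk { index: number; content: string; tokenCount: number; }

const openai = new OpenAI();

// Step 3 (fan out): one completion per chunk, all in flight at once.
async function analyzeChunks(chunks: Chunk[], maxTokensPerResponse: number) {
  return Promise.all(
    chunks.map(async (chunk) => {
      const res = await openai.chat.completions.create({
        model: "gpt-4o-mini",
        max_tokens: maxTokensPerResponse,
        messages: [
          { role: "system", content: "Analyze this portion of a GitHub PR conversation. Do not omit any detail." },
          { role: "user", content: chunk.content },
        ],
      });
      return { ...chunk, analysis: res.choices[0].message.content ?? "" };
    }),
  );
}

// Step 4 (fan in): one more call turns all chunk analyses into the summary.
async function summarize(analyses: string[]): Promise<string> {
  const res = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "Write an executive summary of these PR chunk analyses." },
      { role: "user", content: analyses.join("\n\n---\n\n") },
    ],
  });
  return res.choices[0].message.content ?? "";
}
```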
## Key Principles

- Concurrent Processing: All chunks analyzed simultaneously (no sequential processing)
- Configurable Limits: User controls chunk size and response detail level
- Precise Token Counting: Accurate measurement keeps every chunk within the configured size limit
- No Content Loss: Every piece of information is preserved and analyzed
- Fast Results: Concurrent processing dramatically reduces analysis time
## Project Structure

- `backend/index.ts`: Streamlined Hono server with concurrent chunk processing
- `frontend/index.html`: Main HTML interface
- `frontend/index.tsx`: React frontend with token configuration and results display
- `shared/types.ts`: Simplified TypeScript types for the new workflow (a possible shape is sketched below)
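A plausible shape for `shared/types.ts`, inferred from the features above; every field name here is an assumption rather than the project's confirmed contract:

```ts
// shared/types.ts (sketch): request/response contract inferred from the README.
export interface AnalyzeRequest {
  text: string;                 // pasted PR conversation
  maxTokensPerChunk: number;    // 100-4000
  maxTokensPerResponse: number; // 100-2000
}

export interface ChunkAnalysis {
  index: number;
  content: string;    // original chunk text, kept for transparency
  tokenCount: number;
  analysis: string;   // LLM output for this chunk
}

export interface AnalyzeResponse {
  totalTokens: number;
  chunkCount: number;
  summary: string;          // executive summary
  chunks: ChunkAnalysis[];
  markdownReport: string;   // full downloadable report
}
```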
## Usage

1. Copy the entire conversation from your GitHub PR page
2. Paste the text into the textarea
3. Configure token limits:
   - Max Tokens Per Chunk: How much content each AI call receives (100-4000)
   - Max Tokens Per Response: How detailed each analysis can be (100-2000)
4. Click "Analyze PR Messages" (the underlying request is sketched after this list)
5. Get a comprehensive analysis with:
   - Token count and chunking statistics
   - Executive summary of the entire PR
   - Detailed analysis for each chunk
   - Full markdown report for download
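Under the hood, step 4 presumably boils down to a single POST. This sketch assumes the `/api/analyze` path and the `AnalyzeRequest` fields sketched earlier, neither of which is confirmed by the source:

```ts
// Hypothetical client call following the AnalyzeRequest sketch above.
const body = {
  text: "<pasted PR conversation>",
  maxTokensPerChunk: 2000,
  maxTokensPerResponse: 800,
};

const res = await fetch("/api/analyze", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(body),
});
const report = await res.json(); // AnalyzeResponse: summary, chunks, markdownReport
```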
## Tech Stack

- Backend: Hono + Val Town OpenAI (GPT-4o-mini) (a route sketch follows this list)
- Frontend: React + TailwindCSS with markdown rendering
- Storage: None (stateless analysis)
- Tokenization: gpt-tokenizer for precise token counting
- Processing: Concurrent LLM calls for maximum speed
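How the stateless Hono endpoint might tie the pieces together, reusing `chunkByTokens`, `analyzeChunks`, and `summarize` from the sketches above; the route path and validation details are assumptions:

```ts
import { Hono } from "npm:hono";

const app = new Hono();

// Stateless pipeline: validate, chunk, fan out, summarize, respond. No storage.
app.post("/api/analyze", async (c) => {
  const { text, maxTokensPerChunk, maxTokensPerResponse } = await c.req.json();
  if (typeof text !== "string" || text.length === 0) {
    return c.json({ error: "No text provided" }, 400);
  }
  if (maxTokensPerChunk < 100 || maxTokensPerChunk > 4000) {
    return c.json({ error: "maxTokensPerChunk must be 100-4000" }, 400);
  }
  const chunks = chunkByTokens(text, maxTokensPerChunk);
  const analyzed = await analyzeChunks(chunks, maxTokensPerResponse);
  const summary = await summarize(analyzed.map((a) => a.analysis));
  return c.json({ summary, chunks: analyzed });
});

// Val Town HTTP vals take a fetch handler as the default export.
export default app.fetch;
```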
## Implemented Features

- ✅ Precise token measurement and configurable chunking
- ✅ Concurrent processing of all chunks simultaneously
- ✅ Executive summary generation from all analyses
- ✅ Complete transparency with original content visibility
- ✅ Configurable response detail levels
- ✅ Full markdown report export
- ✅ No content filtering - every detail preserved
- ✅ Fast results through concurrent processing
- ✅ User-controlled token limits for optimal performance
- ✅ Error handling and validation
## Benefits

- Speed: Concurrent processing eliminates sequential bottlenecks
- Flexibility: User controls chunk size and response detail
- Completeness: No information is filtered out or lost
- Transparency: See exactly how text was chunked and analyzed
- Efficiency: Token budgets follow the user-configured chunk and response limits
- Export: Complete markdown report for documentation