PR Message Analyzer

A web application that uses AI to analyze long PR conversations through a streamlined concurrent-processing workflow.

New Workflow

  1. Token Measurement: Precisely count tokens in the input text using gpt-tokenizer
  2. Configurable Chunking: Split text into chunks based on a user-defined maximum tokens per chunk (steps 1 and 2 are sketched after this list)
  3. Concurrent Analysis: Send all chunks simultaneously to separate LLM calls for maximum speed
  4. Summary Generation: Create an executive summary from all chunk analyses
  5. Complete Report: Return summary + all individual chunk responses in markdown format
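
A minimal sketch of steps 1 and 2 in TypeScript, assuming the gpt-tokenizer npm package; splitIntoChunks is an illustrative name, not necessarily the function used in this project:

```ts
// Steps 1 and 2: measure tokens precisely, then chunk on token boundaries.
// Assumes the gpt-tokenizer npm package; splitIntoChunks is hypothetical.
import { encode, decode } from "gpt-tokenizer";

export function splitIntoChunks(text: string, maxTokensPerChunk: number): string[] {
  const tokens = encode(text); // step 1: the exact token count is tokens.length
  const chunks: string[] = [];
  for (let i = 0; i < tokens.length; i += maxTokensPerChunk) {
    // step 2: slice the token array and decode each slice back to text
    chunks.push(decode(tokens.slice(i, i + maxTokensPerChunk)));
  }
  return chunks;
}
```

Slicing the token array rather than the raw string guarantees every chunk stays at or under the limit; splitting on characters would only approximate it.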

Enhanced Analysis Format

The AI now provides opinionated technical analysis in a structured format (an example prompt follows the list):

  1. Specific Issues, Concerns, or Problems Mentioned: Complete extraction of all issues
  2. Why Issues Matter & Agreement with Solutions: Technical reasoning for why each issue is important and whether the proposed solutions are adequate
  3. Code Changes with Suggestions: Original code vs. corrected code with AI's additional improvements
  4. What's Missing & Deviation Recommendations: Critical analysis of what's missing and why developers should consider alternative approaches
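
As an illustration, a system prompt enforcing this format could look like the following; the exact wording lives in backend/index.ts and may differ:

```ts
// Hypothetical prompt text; the app's actual prompt may be worded differently.
const ANALYSIS_PROMPT = `You are a senior engineer reviewing a PR discussion.
For the chunk below, respond in markdown with exactly these sections:
1. Specific Issues, Concerns, or Problems Mentioned
2. Why Issues Matter & Agreement with Solutions
3. Code Changes with Suggestions (original vs. corrected, plus your own improvements)
4. What's Missing & Deviation Recommendations`;
```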

This format provides not just summarization, but actual technical insights and code improvements.

Analysis Strategy

  1. Input Processing: Measure total tokens in the PR conversation text
  2. Smart Chunking: Split text into chunks based on user-specified token limits
  3. Concurrent Analysis: Process all chunks simultaneously with specialized prompts
  4. Summary Generation: Create a comprehensive summary from all chunk analyses (steps 3 and 4 are sketched after this list)
  5. Report Assembly: Combine summary with detailed chunk analyses
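
A sketch of steps 3 and 4, assuming Val Town's std/openai client (it mirrors the official openai npm API); complete and analyze are illustrative names:

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

// One LLM call, capped at the user-configured response size.
async function complete(prompt: string, maxTokens: number): Promise<string> {
  const res = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    max_tokens: maxTokens,
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0].message.content ?? "";
}

export async function analyze(
  chunks: string[],
  analysisPrompt: string,
  maxTokensPerResponse: number,
) {
  // Step 3: all chunk analyses are fired off at once, not one after another.
  const analyses = await Promise.all(
    chunks.map((c) => complete(`${analysisPrompt}\n\n${c}`, maxTokensPerResponse)),
  );
  // Step 4: a final call condenses the per-chunk analyses into an executive summary.
  const summary = await complete(
    `Write an executive summary of these PR analyses:\n\n${analyses.join("\n\n---\n\n")}`,
    maxTokensPerResponse,
  );
  return { summary, analyses };
}
```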

Performance Improvements

  • Concurrent Processing: All chunks analyzed simultaneously (no sequential processing)
  • Configurable Limits: User controls chunk size and response detail level
  • Precise Token Counting: Accurate measurement ensures optimal chunk sizes
  • No Content Loss: Every piece of information is preserved and analyzed
  • Fast Results: Concurrent processing dramatically reduces analysis time

Structure

  • backend/index.ts - Streamlined Hono server with concurrent chunk processing
  • frontend/index.html - Main HTML interface
  • frontend/index.tsx - React frontend with token configuration and results display
  • shared/types.ts - Simplified TypeScript types for the new workflow (a hypothetical shape is sketched below)
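
As a rough idea of the shapes shared/types.ts might define (field names here are hypothetical, not copied from the file):

```ts
// Hypothetical request/response shapes for the analyzer.
export interface AnalyzeRequest {
  text: string;                 // the pasted PR conversation
  maxTokensPerChunk: number;    // 100-4000 in the UI
  maxTokensPerResponse: number; // 100-2000 in the UI
}

export interface AnalyzeResponse {
  totalTokens: number;     // measured with gpt-tokenizer
  chunkCount: number;
  summary: string;         // executive summary in markdown
  chunkAnalyses: string[]; // one markdown analysis per chunk
}
```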

Usage

  1. Copy the entire conversation from your GitHub PR page
  2. Paste the text into the textarea
  3. Configure token limits:
    • Max Tokens Per Chunk: How much content each AI call receives (100-4000)
    • Max Tokens Per Response: How detailed each analysis can be (100-2000)
  4. Click "Analyze PR Messages" (or call the backend directly; see the sketch after this list)
  5. Get comprehensive analysis with:
    • Token count and chunking statistics
    • Executive summary of the entire PR
    • Detailed analysis for each chunk
    • Full markdown report for download
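
To script the analysis instead of using the UI, the equivalent request might look like this; /api/analyze is an assumed route, so check backend/index.ts for the real path and payload shape:

```ts
const prConversation = "..."; // the copied PR discussion

const res = await fetch("/api/analyze", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    text: prConversation,
    maxTokensPerChunk: 2000,    // 100-4000
    maxTokensPerResponse: 1000, // 100-2000
  }),
});
const report = await res.json(); // token stats, summary, per-chunk analyses
```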

Tech Stack

  • Backend: Hono + Val Town OpenAI (GPT-4o-mini); a route skeleton follows this list
  • Frontend: React + TailwindCSS with markdown rendering
  • Storage: None (stateless analysis)
  • Tokenization: gpt-tokenizer for precise token counting
  • Processing: Concurrent LLM calls for maximum speed
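
Putting the stack together, a skeleton of the kind of Hono route backend/index.ts exposes; the route path and handler body are assumptions, not the project's exact code:

```ts
import { Hono } from "npm:hono";

const app = new Hono();

app.post("/api/analyze", async (c) => {
  const { text, maxTokensPerChunk, maxTokensPerResponse } = await c.req.json();
  // Tokenize, chunk, analyze concurrently, summarize (see the sketches above).
  return c.json({ summary: "...", chunkAnalyses: [] as string[] });
});

// Val Town HTTP vals export the fetch handler.
export default app.fetch;
```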

Features

  • ✅ Precise token measurement and configurable chunking
  • ✅ Concurrent processing of all chunks simultaneously
  • ✅ Executive summary generation from all analyses
  • ✅ Complete transparency with original content visibility
  • ✅ Configurable response detail levels
  • ✅ Full markdown report export
  • ✅ No content filtering - every detail preserved
  • ✅ Fast results through concurrent processing
  • ✅ User-controlled token limits for optimal performance
  • ✅ Error handling and validation

Workflow Benefits

  • Speed: Concurrent processing eliminates sequential bottlenecks
  • Flexibility: User controls chunk size and response detail
  • Completeness: No information is filtered out or lost
  • Transparency: See exactly how text was chunked and analyzed
  • Efficiency: Optimal token usage based on user preferences
  • Export: Complete markdown report for documentation