Notification Triage Webhook System

An intelligent notification routing system that receives webhooks from various sources (Linear, Sentry, etc.) and uses AI to determine the most appropriate channels for delivery. The system analyzes notification content and routes messages to predefined email channels based on context, priority, and team relevance.

Core Concept

Instead of broadcasting all notifications to everyone, this system:

  1. Receives webhooks from external services
  2. Analyzes the content using AI (via OpenRouter with multiple model options)
  3. Routes notifications to appropriate channels based on:
    • Content analysis and keywords
    • Priority/severity levels
    • Team assignments
    • Predefined channel descriptions
  4. Delivers formatted, actionable messages via email

Each notification includes:

  • Clear summary of what happened
  • Relevant links in proper format
  • Context tailored to the target audience
  • Action items when applicable

AI-First Approach with OpenRouter

This system uses OpenRouter as the AI provider, giving access to multiple AI models with intelligent model selection based on notification complexity and cost optimization.

Multi-Model AI Strategy

The system automatically selects the most appropriate AI model based on the notification characteristics:

  • Fast Model (openai/gpt-oss-20b): Simple notifications with low severity and minimal description - ultra-cheap at $0.10/$0.50 per million tokens
  • Balanced Model (anthropic/claude-3.5-sonnet): Standard triage operations (default)
  • Advanced Model (openai/gpt-4o): Critical issues requiring sophisticated analysis
  • Reasoning Model (moonshotai/kimi-k2): Complex scenarios with multiple factors, excellent for tool use and coding

Available Models

| Model | Provider | Use Case | Cost (per 1M tokens) | Max Tokens | Notes |
|---|---|---|---|---|---|
| GPT-OSS 20B | OpenAI | Fast | $0.10/$0.50 | 131K | Open-weight MoE |
| Claude 3.5 Sonnet | Anthropic | Balanced | $3/$15 | 8K | Best overall |
| GPT-4o Mini | OpenAI | Fast | $0.15/$0.60 | 16K | Fallback fast |
| GPT-4o | OpenAI | Advanced | $5/$15 | 4K | Complex analysis |
| Kimi K2 Instruct | MoonshotAI | Reasoning | $1/$3 | 131K | Tool use expert |
| Claude 3 Opus | Anthropic | Reasoning | $15/$75 | 4K | Most capable |
| Gemini Pro 1.5 | Google | Balanced | $1.25/$5 | 8K | Fallback balanced |

Debugging Philosophy

This system is built around the principle of AI transparency and debuggability. We use structured data formatting and comprehensive logging to make AI decision-making visible and debuggable.

Structured Data with LLML

We use the @zenbase/llml library to convert JavaScript objects into clean, XML-like formatted strings. This approach provides several benefits:

  • Human-readable logs: Complex webhook payloads become easy to scan and understand
  • AI-friendly format: Structured data that AI models can easily parse and reason about
  • Consistent formatting: All logged objects follow the same clear structure
  • Debugging efficiency: Quickly identify issues in webhook data or AI responses

The stringify-utils.ts Module

Our formatting utilities provide:

// Convert any object to formatted XML-like string
const formatted = stringifyForLogging(webhookPayload);

// Log with clear separators and labels
logFormatted(payload, "Linear Webhook Received");

// Handle large objects with truncation
const truncated = stringifyTruncated(largeObject, 1000);

Key Functions:

  • stringifyForLogging(obj) - Primary formatter using llml with JSON fallback
  • logFormatted(obj, label) - Adds clear separators and labels for log scanning
  • stringifyTruncated(obj, maxLength) - Prevents log overflow with large payloads
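
For orientation, a minimal sketch of how these three utilities might be implemented follows. It assumes the llml export from @zenbase/llml; the actual stringify-utils.ts may differ in details.

import { llml } from "npm:@zenbase/llml";

// Primary formatter: llml for XML-like output, JSON as a safety net
export function stringifyForLogging(obj: unknown): string {
  try {
    return llml(obj as Record<string, unknown>);
  } catch {
    return JSON.stringify(obj, null, 2);
  }
}

// Wraps output in the "=== Label ===" separators seen in the example logs
export function logFormatted(obj: unknown, label: string): void {
  console.log(`=== ${label} ===`);
  console.log(stringifyForLogging(obj));
  console.log(`=== End ${label} ===`);
}

// Caps output size so giant payloads don't flood the logs
export function stringifyTruncated(obj: unknown, maxLength: number): string {
  const s = stringifyForLogging(obj);
  return s.length > maxLength ? `${s.slice(0, maxLength)}… [truncated]` : s;
}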

Why This Matters:

  • AI Debugging: When AI makes routing decisions, we can see exactly what data it analyzed
  • Webhook Debugging: Complex nested webhook payloads become immediately readable
  • Error Tracking: Failed AI responses are logged in structured format for analysis
  • Performance Monitoring: Easy to spot patterns in notification types and routing decisions

This structured approach to logging and data formatting makes the entire AI pipeline transparent and debuggable, which is crucial for a system that makes automated decisions about important notifications.

Project Structure

├── CONFIGURATION.ts         # Channel definitions, webhook config, AI prompts, OpenRouter models
├── ai-service.ts           # OpenRouter AI service with multi-model support
├── ai-model-test.ts        # Testing utility for AI models and performance
├── citation-context.ts      # Citation utilities for AI prompting with links
├── linear-webhook.ts        # HTTP webhook handler for Linear (with GET config page)
├── notification-triage.ts   # Shared AI triage system for all notification sources
├── sentry-webhook.ts        # HTTP webhook handler for Sentry (with GET config page)
├── stringify-utils.ts       # LLML-based object formatting utilities for AI debugging
├── main.tsx                # (Reserved for future integrations)
└── README.md               # This file

Current Features

✅ Linear Integration

  • Webhook Handler: Secure endpoint with HMAC-SHA256 signature verification
  • Configuration Page: Visit the webhook URL in a browser for setup instructions
  • Event Filtering: Skips low-value events (views, reads)
  • Rich Data Processing: Handles titles, descriptions, assignees, labels, priorities, and teams
  • Async Processing: Webhooks respond immediately to prevent timeouts; processing happens asynchronously
  • Timeout Prevention: Fast response with background processing to avoid Linear's 4-retry timeout behavior

✅ Sentry Integration

  • Error Monitoring: Processes error events and issue notifications
  • Severity Mapping: Converts Sentry levels (fatal, error, warning, etc.) to the standard severity scale
  • Stack Trace Processing: Extracts meaningful error context and location information
  • Environment Awareness: Routes based on production vs. staging environments
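
For illustration only, the level-to-severity mapping might look like this sketch (the actual table lives in sentry-webhook.ts; these pairings are assumptions):

type Severity = 'low' | 'medium' | 'high' | 'critical';

// Assumed pairings from Sentry levels to the standard scale
function mapSentryLevel(level: string): Severity {
  switch (level) {
    case 'fatal': return 'critical';
    case 'error': return 'high';
    case 'warning': return 'medium';
    default: return 'low'; // info, debug, and anything unrecognized
  }
}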

✅ OpenRouter AI Integration

  • Multi-Model Support: Access to Claude, GPT-4, Gemini, and other leading AI models
  • Intelligent Model Selection: Automatically chooses optimal model based on notification complexity
  • Cost Optimization: Uses faster, cheaper models for simple notifications
  • Fallback Support: Automatic failover to alternative models if primary fails
  • Usage Tracking: Detailed logging of token usage and estimated costs
  • Configurable Models: Easy to add new models or adjust selection criteria
  • Timeout Protection: 25-second timeout on AI requests to prevent webhook timeouts

✅ Shared Triage System

  • Unified Interface: All webhook sources convert to standardized NotificationData format
  • Citation Context: Advanced AI prompting with proper link generation and ID replacement
  • Fallback Logic: Intelligent routing even when AI fails, based on source and priority
  • Enhanced Metadata: Tracks urgency, suggested actions, and formatted summaries

✅ Citation Context System

  • Link Generation: Automatic creation of proper URLs and citations in AI responses
  • ID Replacement: Converts UUIDs and IDs to readable citation keys for better AI understanding
  • Structured References: Maintains mapping between citation keys and actual resources
  • Markdown Output: Generates properly formatted links in final notifications
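
A hypothetical sketch of the ID-replacement step (the real citation-context.ts may differ): opaque UUIDs are swapped for short citation keys before prompting, and the key-to-URL mapping is kept so links can be restored in the final message.

// Matches UUIDs such as Linear issue identifiers
const UUID_RE = /[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/gi;

function buildCitationContext(text: string, urlForId: (id: string) => string) {
  const refs = new Map<string, string>(); // citation key → resource URL
  let n = 0;
  const rewritten = text.replace(UUID_RE, (uuid) => {
    const key = `ref-${++n}`;
    refs.set(key, urlForId(uuid));
    return `[${key}]`;
  });
  return { rewritten, refs };
}

// After the AI responds, turn citation keys back into markdown links
function restoreLinks(output: string, refs: Map<string, string>): string {
  let result = output;
  for (const [key, url] of refs) {
    result = result.replaceAll(`[${key}]`, `[${key}](${url})`);
  }
  return result;
}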

✅ Email Delivery System

  • Multi-Channel Support: Can notify multiple channels for critical issues
  • Rich Formatting: HTML emails with structured information
  • Branded Messages: Consistent formatting with clear subject prefixes
  • Error Handling: Graceful failure handling per channel
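
A sketch of per-channel delivery with graceful failure, with the mail transport left abstract (Val Town's email support, SendGrid, or similar would slot in as the send function):

type Channel = { id: string; name: string; email: string };
type SendFn = (msg: { to: string; subject: string; html: string }) => Promise<void>;

// Deliver to every selected channel; one failed channel never blocks the rest
async function deliverToChannels(
  channels: Channel[],
  subject: string,
  html: string,
  send: SendFn,
): Promise<void> {
  const results = await Promise.allSettled(
    channels.map((ch) => send({ to: ch.email, subject: `[${ch.name}] ${subject}`, html })),
  );
  results.forEach((r, i) => {
    if (r.status === "rejected") {
      console.error(`Delivery to ${channels[i].id} failed:`, r.reason);
    }
  });
}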

✅ Configurable Channel System

Pre-configured channels include:

  • Engineering Critical: P0/P1 issues, outages, security incidents
  • Engineering General: Feature development, code reviews, technical debt
  • Product Team: UX issues, feature requests, business logic
  • DevOps & Infrastructure: Deployment, monitoring, performance
  • Quality Assurance: Testing, automation, bug reports
  • General Notifications: Low-priority and uncategorized items

Setup Instructions

1. Environment Variables

Set these in your Val Town environment:

  • LINEAR_WEBHOOK_SECRET - Your Linear webhook signing secret
  • SENTRY_WEBHOOK_SECRET - Your Sentry webhook client secret
  • OPENROUTER_API_KEY - Your OpenRouter API key (get from https://openrouter.ai/)
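
Val Town runs on Deno, so these are available through the standard environment API; a quick sanity check looks like:

// Fail fast if a required secret is missing
const openRouterKey = Deno.env.get("OPENROUTER_API_KEY");
if (!openRouterKey) {
  throw new Error("OPENROUTER_API_KEY is not set; add it in Val Town's environment variables");
}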

2. Configure Channels

Update email addresses in CONFIGURATION.ts to match your organization:

{
  id: 'engineering-critical',
  name: 'Engineering Critical',
  email: 'your-critical-team@company.com', // Update this
  description: '...',
  // ...
}

3. Linear Webhook Setup

  1. Visit your Linear webhook URL in a browser to see configuration instructions
  2. Copy the webhook URL from the configuration page
  3. In Linear: Settings → API → Webhooks → Create webhook
  4. Paste the URL and copy the signing secret
  5. Add the signing secret to Val Town as LINEAR_WEBHOOK_SECRET
  6. Select events to monitor (Issues, Comments, Projects recommended)

4. Sentry Webhook Setup

  1. Visit your Sentry webhook URL in a browser to see configuration instructions
  2. Copy the webhook URL from the configuration page
  3. In Sentry: Settings → Developer Settings → Internal Integrations
  4. Create new integration or edit existing one
  5. Add the webhook URL and copy the client secret
  6. Add the client secret to Val Town as SENTRY_WEBHOOK_SECRET
  7. Enable permissions and subscribe to Error/Issue events

5. Testing AI Models

Use the built-in testing utility to verify your OpenRouter setup:

import runTests from './ai-model-test.ts';

await runTests(); // Test all models and use cases

Or test individual components:

import { aiService } from './ai-service.ts';
import { testModel, showModelInfo, testTriageScenario } from './ai-model-test.ts';

// Show available models and pricing
await showModelInfo();

// Test a specific model
await testModel('anthropic/claude-3.5-sonnet', 'Hello, test message');

// Test notification triage scenario
await testTriageScenario();

6. Model Configuration

Update model selection in CONFIGURATION.ts if needed:

export const AI_CONFIG = {
  modelSelection: {
    default: 'anthropic/claude-3.5-sonnet', // Change default model
    fast: 'openai/gpt-oss-20b',             // Ultra-cheap for simple notifications
    advanced: 'openai/gpt-4o',              // For critical analysis
    reasoning: 'moonshotai/kimi-k2',        // For complex scenarios with tool use
  },
};

New Models Added:

  • GPT-OSS 20B: Open-weight model with 131K context, extremely cost-effective at $0.10/$0.50 per million tokens
  • Kimi K2 Instruct: 1T parameter MoE model optimized for coding, reasoning, and tool use with 131K context

7. Testing

  • Linear: Create or update an issue to test the integration
  • Sentry: Trigger an error in your application to test the integration
  • AI Models: Use ai-model-test.ts to verify OpenRouter connectivity
  • Check Val Town logs for debugging information

How It Works

Webhook Processing Flow

  1. Receive: Linear sends webhook with HMAC signature
  2. Verify: Validate signature using signing secret
  3. Parse: Extract relevant data (title, description, priority, labels, etc.)
  4. Respond: Return 200 OK immediately to prevent timeout (Linear retries 4 times on timeout)
  5. Analyze: Send to AI via OpenRouter with intelligent model selection (async)
  6. Route: AI selects appropriate channel(s) with reasoning (async)
  7. Format: Create tailored message for target audience (async)
  8. Deliver: Send email notification(s) (async)
  9. Log: Record success/failure, token usage, and costs for debugging (async)
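
Steps 1-4 condense to the verify-then-respond-immediately pattern sketched below. The handler shape and helper names are illustrative, not the project's actual exports; Linear sends an HMAC-SHA256 hex digest of the raw request body in the linear-signature header.

async function verifySignature(rawBody: string, signature: string, secret: string): Promise<boolean> {
  const enc = new TextEncoder();
  const key = await crypto.subtle.importKey(
    "raw",
    enc.encode(secret),
    { name: "HMAC", hash: "SHA-256" },
    false,
    ["sign"],
  );
  const mac = await crypto.subtle.sign("HMAC", key, enc.encode(rawBody));
  const hex = [...new Uint8Array(mac)].map((b) => b.toString(16).padStart(2, "0")).join("");
  return hex === signature;
}

// Stand-in for steps 5-9: convert the payload to NotificationData and
// hand it to processNotification() from notification-triage.ts.
async function triageLinearEvent(_payload: unknown): Promise<void> {}

export default async function handler(req: Request): Promise<Response> {
  const rawBody = await req.text();
  const signature = req.headers.get("linear-signature") ?? "";
  const secret = Deno.env.get("LINEAR_WEBHOOK_SECRET") ?? "";

  if (!(await verifySignature(rawBody, signature, secret))) {
    return new Response("Invalid signature", { status: 401 }); // step 2 failed
  }

  // Steps 5-9 continue in the background after the response
  void triageLinearEvent(JSON.parse(rawBody)).catch(console.error);

  return new Response("OK", { status: 200 }); // step 4: respond immediately
}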

Timeout Prevention Strategy

The system prevents webhook timeouts through several mechanisms:

  • Immediate Response: Webhook returns 200 OK within milliseconds
  • Async Processing: AI analysis happens in background after response
  • Fast Default Model: Uses GPT-4o Mini instead of Claude 3.5 Sonnet for speed
  • Reduced Token Limits: 400 tokens max instead of 800 for faster generation
  • AI Request Timeout: 25-second timeout on AI calls to prevent hanging
  • Optimized Model Selection: Uses fast models for critical issues to ensure quick processing

AI Model Selection Logic

The system automatically chooses the best model for each notification, optimized for speed to prevent timeouts:

// Simple notification → Ultra-fast model (GPT-OSS 20B)
if (severity === 'low' && !description) useCase = 'fast';
// Critical issue → Fast model (GPT-OSS 20B) - changed from advanced for speed
else if (severity === 'critical' || priority === 1) useCase = 'fast';
// Complex scenario → Balanced model (GPT-4o Mini) - changed from reasoning for speed
else if (labels.length > 3 || description.length > 500) useCase = 'balanced';
// Default → Balanced model (GPT-4o Mini) - changed from Claude for speed
else useCase = 'balanced';

OpenRouter Provider Configuration

All requests are configured to prefer the Groq provider through OpenRouter for better pricing and performance:

provider: {
  order: ["groq"] // Prefer routing through Groq for better prices
}
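
Concretely, a single triage request might look like the sketch below. The endpoint and provider field follow OpenRouter's chat-completions API; the model choice, prompt, and limits here just echo the configuration described above.

const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${Deno.env.get("OPENROUTER_API_KEY")}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openai/gpt-oss-20b",
    messages: [{ role: "user", content: "..." }], // the assembled triage prompt goes here
    max_tokens: 400,                   // the reduced limit from the timeout strategy
    provider: { order: ["groq"] },     // prefer Groq, as configured above
  }),
  signal: AbortSignal.timeout(25_000), // the 25-second AI request timeout
});
const completion = await res.json();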

AI Triage Logic

The AI considers:

  • Keywords: Matches against channel-specific keywords
  • Priority: Routes high-priority items to critical channels
  • Team Context: Considers team assignments and technical domains
  • Content Analysis: Analyzes descriptions for technical vs. product issues
  • Multi-Channel Logic: Can notify multiple channels for critical issues

Channel Selection Criteria

Each channel has:

  • Description: Detailed explanation of what belongs there
  • Keywords: Specific terms that indicate relevance
  • Priority Level: High/medium/low for urgency-based routing
  • Email Address: Delivery target
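
Put together, a channel entry in CONFIGURATION.ts looks roughly like this (field names inferred from the snippet in the setup section; the keywords are illustrative):

const engineeringCritical = {
  id: 'engineering-critical',
  name: 'Engineering Critical',
  email: 'critical-team@company.com',                         // delivery target
  description: 'P0/P1 issues, outages, security incidents',
  keywords: ['outage', 'security', 'p0', 'p1', 'data loss'],  // illustrative
  priority: 'high',                                           // urgency-based routing
};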

Future Roadmap

🚧 Planned Integrations

  • GitHub: PR reviews, security alerts, deployment status
  • Slack: Direct Slack channel delivery (alternative to email)
  • PagerDuty: Incident management integration
  • Custom Webhooks: Generic webhook handler for other services

🚧 Enhanced Features

  • Smart Scheduling: Respect time zones and on-call schedules
  • Escalation Logic: Auto-escalate if no response within timeframe
  • Analytics Dashboard: Track notification patterns and effectiveness
  • Custom Rules: User-defined routing rules beyond AI
  • Digest Mode: Batch low-priority notifications

Contributing

When adding new integrations:

  1. Create a new webhook handler file (e.g., sentry-webhook.ts)
  2. Convert the source payload to NotificationData format in your handler
  3. Call processNotification(notificationData) from the shared triage system
  4. Include GET endpoint for configuration instructions
  5. Add any source-specific configuration to CONFIGURATION.ts
  6. Update this README with new features
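
As a sketch of those steps, a hypothetical billing webhook handler might look like this, using the NotificationData shape described below (signature verification omitted for brevity; processNotification's import path is assumed from the project structure):

import { processNotification } from "./notification-triage.ts";

export default async function billingWebhook(req: Request): Promise<Response> {
  // Step 4: GET serves human-readable configuration instructions
  if (req.method === "GET") {
    return new Response("Point your billing provider's webhooks at this URL.");
  }

  const payload = await req.json();

  // Step 2: convert the source payload to the shared NotificationData shape
  const notification = {
    source: "billing",
    id: String(payload.id ?? crypto.randomUUID()),
    type: "payment_failed",
    action: "created",
    title: payload.summary ?? "Billing event",
    severity: "high" as const,
  };

  // Step 3: hand off to the shared triage system without blocking the response
  void processNotification(notification).catch(console.error);

  return new Response("OK", { status: 200 });
}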

Adding New Webhook Sources

The shared triage system expects all notifications in this standardized format:

interface NotificationData {
  source: string;      // e.g., 'linear', 'sentry', 'billing'
  id: string;          // unique identifier
  type: string;        // e.g., 'issue', 'error', 'payment_failed'
  action: string;      // e.g., 'created', 'updated', 'resolved'
  title: string;
  description?: string;
  priority?: number | string;
  severity?: 'low' | 'medium' | 'high' | 'critical';
  labels?: string[];
  team?: string;
  assignee?: { name: string; email?: string };
  url?: string;
  // ... additional fields
}

The system will automatically:

  • Generate citation contexts for better AI prompting
  • Create proper links in notifications
  • Apply fallback routing logic
  • Format emails consistently across all sources

Debugging

  • Webhook Logs: Check Val Town logs for processing details
  • Structured Logging: All webhook payloads and AI responses are logged using LLML formatting for easy reading
  • AI Transparency: Triage reasoning is logged with clear separators (e.g., "=== AI Triage Result ===")
  • Email Delivery: Individual channel delivery status is logged
  • Signature Verification: Failed authentications are logged with details
  • LLML Formatting: Complex objects are automatically formatted as readable XML-like structures

Example Log Output

=== Linear Webhook Payload ===
<LinearWebhookPayload>
  <action>create</action>
  <type>Issue</type>
  <data>
    <title>Critical payment bug</title>
    <priority>1</priority>
    <labels>
      <item><name>critical</name></item>
    </labels>
  </data>
</LinearWebhookPayload>
=== End Linear Webhook Payload ===

=== AI Triage Result ===
<TriageResult>
  <selectedChannels>
    <item>engineering-critical</item>
  </selectedChannels>
  <reasoning>High priority payment issue requires immediate engineering attention</reasoning>
</TriageResult>
=== End AI Triage Result ===