An intelligent notification routing system that receives webhooks from various sources (Linear, Sentry, etc.) and uses AI to determine the most appropriate channels for delivery. The system analyzes notification content and routes messages to predefined email channels based on context, priority, and team relevance.
Instead of broadcasting all notifications to everyone, this system:
Each notification includes:
This system uses OpenRouter as the AI provider, giving access to multiple AI models with intelligent model selection based on notification complexity and cost optimization.
The system automatically selects the most appropriate AI model based on the notification's characteristics:

- `openai/gpt-oss-20b` (Fast): Simple notifications with low severity and minimal description; ultra-cheap at $0.10/$0.50 per million tokens
- `anthropic/claude-3.5-sonnet` (Balanced): Standard triage operations (default)
- `openai/gpt-4o` (Advanced): Critical issues requiring sophisticated analysis
- `moonshotai/kimi-k2` (Reasoning): Complex scenarios with multiple factors; excellent for tool use and coding

| Model | Provider | Use Case | Cost (per 1M tokens) | Max Tokens | Notes |
|---|---|---|---|---|---|
| GPT-OSS 20B | OpenAI | Fast | $0.10/$0.50 | 131K | Open-weight MoE |
| Claude 3.5 Sonnet | Anthropic | Balanced | $3/$15 | 8K | Best overall |
| GPT-4o Mini | OpenAI | Fast | $0.15/$0.60 | 16K | Fallback fast |
| GPT-4o | OpenAI | Advanced | $5/$15 | 4K | Complex analysis |
| Kimi K2 Instruct | MoonshotAI | Reasoning | $1/$3 | 131K | Tool use expert |
| Claude 3 Opus | Anthropic | Reasoning | $15/$75 | 4K | Most capable |
| Gemini Pro 1.5 | Google | Balanced | $1.25/$5 | 8K | Fallback balanced |
This system is built around the principle of AI transparency and debuggability. We use structured data formatting and comprehensive logging to make AI decision-making visible and debuggable.
We use the `@zenbase/llml` library to convert JavaScript objects into clean, XML-like formatted strings. This approach provides several benefits:
Our formatting utilities provide:
```typescript
// Convert any object to a formatted XML-like string
const formatted = stringifyForLogging(webhookPayload);

// Log with clear separators and labels
logFormatted(payload, "Linear Webhook Received");

// Handle large objects with truncation
const truncated = stringifyTruncated(largeObject, 1000);
```
Key Functions:

- `stringifyForLogging(obj)` - Primary formatter using llml, with a JSON fallback
- `logFormatted(obj, label)` - Adds clear separators and labels for log scanning
- `stringifyTruncated(obj, maxLength)` - Prevents log overflow with large payloads

Why This Matters:
This structured approach to logging and data formatting makes the entire AI pipeline transparent and debuggable, which is crucial for a system that makes automated decisions about important notifications.
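A minimal sketch of how these utilities might be implemented. This is a JSON-based approximation for illustration only: the real `stringify-utils.ts` calls `@zenbase/llml` first and only falls back to `JSON.stringify`, and the exact llml API is not shown here.

```typescript
// JSON-based approximation of the formatting utilities described above.
// The real implementation would try llml() from @zenbase/llml first and
// fall back to JSON.stringify on error.
function stringifyForLogging(obj: unknown): string {
  try {
    // In the real code: return llml(obj);
    return JSON.stringify(obj, null, 2);
  } catch {
    // Circular references etc. fall back to a best-effort string
    return String(obj);
  }
}

function stringifyTruncated(obj: unknown, maxLength: number): string {
  const full = stringifyForLogging(obj);
  return full.length <= maxLength
    ? full
    : full.slice(0, maxLength) + "… [truncated]";
}

function logFormatted(obj: unknown, label: string): void {
  console.log(`=== ${label} ===`);
  console.log(stringifyForLogging(obj));
  console.log(`=== End ${label} ===`);
}
```

The separator format produced by `logFormatted` matches the `=== Label ===` blocks shown in the example output at the end of this README.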
├── CONFIGURATION.ts # Channel definitions, webhook config, AI prompts, OpenRouter models
├── ai-service.ts # OpenRouter AI service with multi-model support
├── ai-model-test.ts # Testing utility for AI models and performance
├── citation-context.ts # Citation utilities for AI prompting with links
├── linear-webhook.ts # HTTP webhook handler for Linear (with GET config page)
├── notification-triage.ts # Shared AI triage system for all notification sources
├── sentry-webhook.ts # HTTP webhook handler for Sentry (with GET config page)
├── stringify-utils.ts # LLML-based object formatting utilities for AI debugging
├── main.tsx # (Reserved for future integrations)
└── README.md # This file
`NotificationData` format

Pre-configured channels include:
Set these in your Val Town environment:

- `LINEAR_WEBHOOK_SECRET` - Your Linear webhook signing secret
- `SENTRY_WEBHOOK_SECRET` - Your Sentry webhook client secret
- `OPENROUTER_API_KEY` - Your OpenRouter API key (get one from https://openrouter.ai/)

Update email addresses in `CONFIGURATION.ts` to match your organization:
```typescript
{
  id: 'engineering-critical',
  name: 'Engineering Critical',
  email: 'your-critical-team@company.com', // Update this
  description: '...',
  // ...
}
```
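The channel objects above suggest a shape along these lines. Note that `NotificationChannel` and `emailFor` are hypothetical names for illustration, and any fields beyond those shown in the example are assumptions:

```typescript
// Hypothetical shape of a routing channel, inferred from the example above.
interface NotificationChannel {
  id: string;          // stable identifier the AI returns in triage results
  name: string;        // human-readable label
  email: string;       // delivery address for this channel
  description: string; // tells the AI when this channel is appropriate
}

const channels: NotificationChannel[] = [
  {
    id: "engineering-critical",
    name: "Engineering Critical",
    email: "your-critical-team@company.com",
    description: "Urgent production issues needing immediate engineering attention",
  },
];

// Lookup helper: resolve a channel id returned by the AI to its email address
function emailFor(id: string): string | undefined {
  return channels.find((c) => c.id === id)?.email;
}
```

Keeping the `description` fields accurate matters: they are what the AI reads when deciding where to route a notification.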
- `LINEAR_WEBHOOK_SECRET`
- `SENTRY_WEBHOOK_SECRET`
Use the built-in testing utility to verify your OpenRouter setup:
```typescript
import runTests from './ai-model-test.ts';

await runTests(); // Test all models and use cases
```
Or test individual components:
```typescript
import { aiService } from './ai-service.ts';
import { testModel, showModelInfo, testTriageScenario } from './ai-model-test.ts';

// Show available models and pricing
await showModelInfo();

// Test a specific model
await testModel('anthropic/claude-3.5-sonnet', 'Hello, test message');

// Test notification triage scenario
await testTriageScenario();
```
Update model selection in `CONFIGURATION.ts` if needed:
```typescript
export const AI_CONFIG = {
  modelSelection: {
    default: 'anthropic/claude-3.5-sonnet', // Change default model
    fast: 'openai/gpt-oss-20b',             // Ultra-cheap for simple notifications
    advanced: 'openai/gpt-4o',              // For critical analysis
    reasoning: 'moonshotai/kimi-k2'         // For complex scenarios with tool use
  }
};
```
New Models Added:

Use `ai-model-test.ts` to verify OpenRouter connectivity.

The system prevents webhook timeouts through several mechanisms:
The system automatically chooses the best model for each notification, optimized for speed to prevent timeouts:
```typescript
// Simple notification → Ultra-fast model (GPT-OSS 20B)
if (severity === 'low' && !description) useCase = 'fast';
// Critical issue → Fast model (GPT-OSS 20B); changed from 'advanced' for speed
else if (severity === 'critical' || priority === 1) useCase = 'fast';
// Complex scenario → Balanced model (GPT-4o Mini); changed from 'reasoning' for speed
else if (labels.length > 3 || description.length > 500) useCase = 'balanced';
// Default → Balanced model (GPT-4o Mini); changed from Claude for speed
else useCase = 'balanced';
```
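Wrapped as a self-contained function it is easy to unit-test; the function name and parameter list here are illustrative, not the exact ones used in `notification-triage.ts`:

```typescript
type UseCase = "fast" | "balanced" | "advanced" | "reasoning";

// Illustrative version of the selection logic, taking the relevant
// notification fields explicitly so it can be tested in isolation.
function selectUseCase(
  severity: string | undefined,
  priority: number | undefined,
  labels: string[],
  description: string,
): UseCase {
  // Simple notification → ultra-fast model
  if (severity === "low" && !description) return "fast";
  // Critical issue → fast model (speed matters more than depth for timeouts)
  if (severity === "critical" || priority === 1) return "fast";
  // Complex scenario → balanced model
  if (labels.length > 3 || description.length > 500) return "balanced";
  // Everything else → balanced default
  return "balanced";
}
```

The returned use case is then resolved to a concrete model id via `AI_CONFIG.modelSelection` in `CONFIGURATION.ts`.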
All requests are configured to use the Groq provider through OpenRouter for optimal pricing and performance:
```typescript
provider: {
  order: ["groq"] // Forces routing through Groq for better prices
}
```
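This provider block travels alongside the model and messages in the request body. A sketch of assembling such a request (the endpoint and field names follow OpenRouter's public chat-completions API, but verify them against current OpenRouter docs before relying on this):

```typescript
// Build an OpenRouter chat-completion request body with provider routing.
function buildTriageRequest(model: string, prompt: string) {
  return {
    model,                         // e.g. "anthropic/claude-3.5-sonnet"
    messages: [{ role: "user" as const, content: prompt }],
    provider: { order: ["groq"] }, // prefer the Groq provider when available
  };
}

// The body would then be POSTed to
// https://openrouter.ai/api/v1/chat/completions
// with an `Authorization: Bearer ${OPENROUTER_API_KEY}` header.
const body = buildTriageRequest("openai/gpt-oss-20b", "Triage this notification…");
```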
The AI considers:
Each channel has:
When adding new integrations:

1. Create a new webhook handler (e.g., `sentry-webhook.ts`)
2. Convert incoming payloads to the `NotificationData` format in your handler
3. Call `processNotification(notificationData)` from the shared triage system
4. Register any new channels in `CONFIGURATION.ts`
The shared triage system expects all notifications in this standardized format:
```typescript
interface NotificationData {
  source: string;        // e.g., 'linear', 'sentry', 'billing'
  id: string;            // unique identifier
  type: string;          // e.g., 'issue', 'error', 'payment_failed'
  action: string;        // e.g., 'created', 'updated', 'resolved'
  title: string;
  description?: string;
  priority?: number | string;
  severity?: 'low' | 'medium' | 'high' | 'critical';
  labels?: string[];
  team?: string;
  assignee?: { name: string; email?: string };
  url?: string;
  // ... additional fields
}
```
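As an example of a handler's conversion step, here is an illustrative mapping from a simplified, hypothetical Sentry error payload into this format. Real Sentry webhook payloads have a different, richer shape, so treat the input field names as placeholders:

```typescript
// Subset of NotificationData relevant to this example.
interface NotificationData {
  source: string;
  id: string;
  type: string;
  action: string;
  title: string;
  description?: string;
  severity?: "low" | "medium" | "high" | "critical";
  url?: string;
}

// Hypothetical simplified Sentry event → shared NotificationData.
function fromSentry(event: {
  id: string;
  title: string;
  message?: string;
  level?: string;
  web_url?: string;
}): NotificationData {
  // Map Sentry's level vocabulary onto the shared severity values
  const severityMap: Record<string, NotificationData["severity"]> = {
    fatal: "critical",
    error: "high",
    warning: "medium",
    info: "low",
  };
  return {
    source: "sentry",
    id: event.id,
    type: "error",
    action: "created",
    title: event.title,
    description: event.message,
    severity: severityMap[event.level ?? "error"],
    url: event.web_url,
  };
}
```

Once every source is normalized this way, `processNotification(notificationData)` can triage Linear issues, Sentry errors, and future sources with the same code path.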
The system will automatically:
```
=== Linear Webhook Payload ===
<LinearWebhookPayload>
  <action>create</action>
  <type>Issue</type>
  <data>
    <title>Critical payment bug</title>
    <priority>1</priority>
    <labels>
      <item><name>critical</name></item>
    </labels>
  </data>
</LinearWebhookPayload>
=== End Linear Webhook Payload ===
```

```
=== AI Triage Result ===
<TriageResult>
  <selectedChannels>
    <item>engineering-critical</item>
  </selectedChannels>
  <reasoning>High priority payment issue requires immediate engineering attention</reasoning>
</TriageResult>
=== End AI Triage Result ===
```