# jarvis
Source file: `index.ts`
Live endpoint: https://colebemis--2cb5d574981c11f0b46e0224a6c84d84.web.val.run
This Telegram bot uses an in-memory queue system to handle chat messages efficiently, ensuring fast webhook responses while processing AI-generated replies asynchronously.
## How it works

### Webhook handler

- Receives messages and immediately adds them to an in-memory queue
- Returns quickly to avoid Telegram webhook timeouts
- Shows a "typing" indicator to acknowledge receipt

### Queue processor

- Event-driven (no wasteful intervals!)
- Processing is triggered immediately when messages are added to the queue
- Processes messages one by one with AI text generation
- Automatically continues processing while more messages are queued
- Sends responses back to Telegram using the Bot API
- Includes error handling and rate limiting
## Message format

```typescript
interface UserMessage {
  chatId: number;    // Telegram chat ID
  messageId: number; // Original message ID for replies
  text: string;      // User's message text
  timestamp: number; // When the message was queued
}
```
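The event-driven queue described above can be sketched as follows. This is a minimal illustration, not the val's actual code: `enqueue` and `processQueue` are hypothetical names, and the `UserMessage` interface is repeated so the sketch runs standalone.

```typescript
// Repeated from the message format above so this sketch is self-contained.
interface UserMessage {
  chatId: number;
  messageId: number;
  text: string;
  timestamp: number;
}

const queue: UserMessage[] = [];
let isProcessing = false;
const MAX_QUEUE_SIZE = 100; // illustrative limit

// Called from the webhook handler: add the message, then kick off
// processing only if it isn't already running. No setInterval anywhere;
// the processor wakes up exactly when work arrives.
function enqueue(
  msg: UserMessage,
  handle: (m: UserMessage) => Promise<void>,
): boolean {
  if (queue.length >= MAX_QUEUE_SIZE) return false; // overflow protection
  queue.push(msg);
  if (!isProcessing) void processQueue(handle);
  return true;
}

// Drains the queue one message at a time. Messages added while a reply
// is being generated are picked up by the same loop, so processing
// "automatically continues" as long as the queue is non-empty.
async function processQueue(
  handle: (m: UserMessage) => Promise<void>,
): Promise<void> {
  isProcessing = true;
  while (queue.length > 0) {
    const msg = queue.shift()!;
    try {
      await handle(msg); // e.g. generate the AI reply and send it
    } catch (err) {
      console.error("failed to process message", err);
    }
  }
  isProcessing = false;
}
```

Because `isProcessing` is checked and set synchronously before the first `await`, a burst of webhook calls starts at most one processor loop.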
## Features

- **Fast response**: the webhook completes in milliseconds
- **Reliable processing**: messages are queued and handled in order
- **Error handling**: failed messages receive an error response
- **Rate limiting**: small delays between messages to stay under API limits
- **Queue overflow protection**: a maximum queue size prevents unbounded memory growth
- **Monitoring**: console logs and status endpoints
- **Statistics tracking**: counts of processed messages and errors
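The overflow-protection and rate-limiting behaviors can be sketched like this; `QUEUE_LIMIT`, `RATE_LIMIT_MS`, and `tryEnqueue` are illustrative names and values, not necessarily what the val uses.

```typescript
const QUEUE_LIMIT = 100;   // assumed maximum queue size
const RATE_LIMIT_MS = 500; // assumed delay between messages

// Bounded push: drop (and report) instead of growing without limit.
function tryEnqueue<T>(queue: T[], item: T): boolean {
  if (queue.length >= QUEUE_LIMIT) {
    console.warn("queue full, dropping message"); // overflow protection
    return false;
  }
  queue.push(item);
  return true;
}

// Awaited between messages in the processing loop so bursts of chat
// activity don't hammer the Telegram or OpenAI APIs.
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));
```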
## Endpoints

- `GET /` - HTML status page with queue information
- `GET /status` - JSON status endpoint with detailed metrics

```json
{
  "queueLength": 0,
  "isProcessing": false,
  "processedCount": 15,
  "errorCount": 1,
  "maxQueueSize": 100,
  "timestamp": "2025-09-23T06:28:48.639Z"
}
```
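A handler serving that status shape could look like the sketch below. It assumes Val Town's HTTP val convention of a fetch-style `(req: Request) => Response` handler; the `state` object and field values are illustrative.

```typescript
// Illustrative module-level state mirroring the status JSON fields.
const state = {
  queue: [] as unknown[],
  isProcessing: false,
  processedCount: 0,
  errorCount: 0,
};
const MAX_QUEUE_SIZE = 100;

// Fetch-style handler: JSON metrics on /status, HTML summary elsewhere.
function handler(req: Request): Response {
  const url = new URL(req.url);
  if (url.pathname === "/status") {
    return Response.json({
      queueLength: state.queue.length,
      isProcessing: state.isProcessing,
      processedCount: state.processedCount,
      errorCount: state.errorCount,
      maxQueueSize: MAX_QUEUE_SIZE,
      timestamp: new Date().toISOString(),
    });
  }
  // Fallback: minimal HTML status page for GET /
  return new Response(`<h1>Queue length: ${state.queue.length}</h1>`, {
    headers: { "content-type": "text/html; charset=utf-8" },
  });
}
```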
## Environment variables

- `TELEGRAM_TOKEN`: your Telegram bot token
- `OPENAI_API_KEY`: OpenAI API key for text generation
## Message flow

1. User sends a message to the Telegram bot
2. Telegram's webhook calls our endpoint
3. The message is added to the queue, and the webhook responds immediately
4. The queue processor is triggered immediately (event-driven)
5. AI generates a response using OpenAI
6. The response is sent back to the user via the Telegram API
7. If more messages are queued, processing continues automatically
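The reply-generation half of that flow (steps 5-6, plus the error response from the features list) can be sketched with the external calls injected so the sequencing is visible. `generateReply` and `sendReply` are hypothetical stand-ins for the real OpenAI and Telegram `sendMessage` calls.

```typescript
// Hypothetical dependency interface: in the real val these would wrap
// the OpenAI API and Telegram's sendMessage endpoint.
interface Deps {
  generateReply: (text: string) => Promise<string>;
  sendReply: (chatId: number, text: string) => Promise<void>;
}

// Called by the queue processor for each dequeued message.
async function handleMessage(
  chatId: number,
  text: string,
  deps: Deps,
): Promise<void> {
  try {
    const reply = await deps.generateReply(text); // step 5: AI response
    await deps.sendReply(chatId, reply);          // step 6: back to Telegram
  } catch {
    // Failed messages still get an error response to the user.
    await deps.sendReply(chatId, "Sorry, something went wrong.");
  }
}
```

Injecting the two calls keeps the sequencing testable without touching the network, and makes the error path explicit: a failed AI call still produces a reply.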
This pattern keeps the webhook handler fast while the heavier AI processing happens asynchronously. No wasteful intervals: processing runs only when there is work in the queue!
