Telegram Bot with In-Memory Queue

This Telegram bot uses an in-memory queue system to handle chat messages efficiently, ensuring fast webhook responses while processing AI-generated replies asynchronously.

Architecture

Fast Webhook Response

  • Webhook handler receives messages and immediately adds them to an in-memory queue
  • Returns quickly to avoid Telegram webhook timeouts
  • Shows "typing" indicator to acknowledge receipt
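The fast path above can be sketched as follows. This is a minimal, hypothetical illustration (the handler and queue names are not from the original code): validate, enqueue with overflow protection, and return immediately.

```typescript
// Illustrative sketch of the webhook fast path (names are assumptions).
interface UserMessage {
  chatId: number;
  messageId: number;
  text: string;
  timestamp: number;
}

const MAX_QUEUE_SIZE = 100;
const queue: UserMessage[] = [];

// Returns the HTTP status the webhook should answer with.
function handleWebhookMessage(msg: UserMessage): number {
  if (queue.length >= MAX_QUEUE_SIZE) {
    // Overflow protection: drop the message rather than grow memory unbounded.
    // Still answer 200 so Telegram does not retry the update indefinitely.
    return 200;
  }
  queue.push(msg);
  // In the real handler: fire a non-blocking sendChatAction "typing" call
  // and trigger the queue processor before returning.
  return 200;
}
```

Answering 200 even on overflow is a deliberate trade-off: a non-2xx response makes Telegram re-deliver the update, which would only grow the backlog further.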

Asynchronous Processing

  • Event-driven queue processor (no wasteful intervals!)
  • Processing is triggered immediately when messages are added to the queue
  • Processes messages one by one with AI text generation
  • Automatically continues processing if more messages are queued
  • Sends responses back to Telegram using the Bot API
  • Includes error handling and rate limiting
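The event-driven loop described above can be sketched like this. It is a simplified stand-in, not the actual implementation: `processMessage` is a placeholder for the real OpenAI call and Telegram send, and the 50 ms delay is an illustrative rate-limit buffer.

```typescript
// Sketch of an event-driven queue processor: no polling interval,
// processing is triggered only when work is enqueued.
type Job = { text: string };

const queue: Job[] = [];
let isProcessing = false;

async function processMessage(job: Job): Promise<string> {
  // Placeholder for AI text generation + Telegram Bot API reply.
  return `echo: ${job.text}`;
}

// Called right after enqueuing; a no-op if a drain is already running.
async function triggerProcessing(): Promise<void> {
  if (isProcessing) return;
  isProcessing = true;
  try {
    while (queue.length > 0) {
      const job = queue.shift()!;
      try {
        await processMessage(job);
      } catch (err) {
        // Failed messages are logged; the real bot also sends an error reply.
        console.error("processing failed:", err);
      }
      // Small delay between messages to stay under API rate limits.
      await new Promise((r) => setTimeout(r, 50));
    }
  } finally {
    isProcessing = false;
  }
}

function enqueue(job: Job): void {
  queue.push(job);
  void triggerProcessing(); // event-driven: trigger, don't poll
}
```

The `isProcessing` flag is what makes this safe to trigger on every enqueue: concurrent triggers return immediately, and the single active loop keeps draining until the queue is empty.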

Queue Structure

interface UserMessage {
  chatId: number;    // Telegram chat ID
  messageId: number; // Original message ID for replies
  text: string;      // User's message text
  timestamp: number; // When message was queued
}
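As an illustration of where these fields come from, here is a hedged sketch of mapping an incoming Telegram update to a `UserMessage` (the `TelegramUpdate` type is trimmed to just the fields the queue needs; the helper name is an assumption):

```typescript
interface UserMessage {
  chatId: number;    // Telegram chat ID
  messageId: number; // Original message ID for replies
  text: string;      // User's message text
  timestamp: number; // When message was queued
}

// Minimal slice of Telegram's Update object (real updates carry more fields).
interface TelegramUpdate {
  message?: { chat: { id: number }; message_id: number; text?: string };
}

// Hypothetical helper: returns null for non-text updates so they are skipped.
function toUserMessage(update: TelegramUpdate): UserMessage | null {
  const m = update.message;
  if (!m || !m.text) return null;
  return {
    chatId: m.chat.id,
    messageId: m.message_id,
    text: m.text,
    timestamp: Date.now(),
  };
}
```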

Key Features

  1. Fast Response: Webhook completes in milliseconds
  2. Reliable Processing: Messages are queued and processed one at a time, in arrival order
  3. Error Handling: Failed messages get error responses
  4. Rate Limiting: Small delays between processing to avoid API limits
  5. Queue Overflow Protection: Limits queue size to prevent memory issues
  6. Monitoring: Console logs and status endpoints for monitoring
  7. Statistics Tracking: Tracks processed messages and error counts

Monitoring Endpoints

  • GET / - HTML status page with queue information
  • GET /status - JSON status endpoint with detailed metrics

Status Response

{
  "queueLength": 0,
  "isProcessing": false,
  "processedCount": 15,
  "errorCount": 1,
  "maxQueueSize": 100,
  "timestamp": "2025-09-23T06:28:48.639Z"
}
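A sketch of how such a payload could be assembled (the module-level queue, flag, and counters here are illustrative placeholders, not the bot's actual variables):

```typescript
const MAX_QUEUE_SIZE = 100;
const queue: unknown[] = [];
let isProcessing = false;
let processedCount = 15;
let errorCount = 1;

// Builds the JSON body for GET /status.
function buildStatus() {
  return {
    queueLength: queue.length,
    isProcessing,
    processedCount,
    errorCount,
    maxQueueSize: MAX_QUEUE_SIZE,
    timestamp: new Date().toISOString(), // ISO 8601, as in the example above
  };
}
```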

Environment Variables

  • TELEGRAM_TOKEN: Your Telegram bot token
  • OPENAI_API_KEY: OpenAI API key for text generation

How It Works

  1. User sends message to Telegram bot
  2. Telegram webhook calls our endpoint
  3. Message is added to queue, webhook responds immediately
  4. Queue processor is triggered immediately (event-driven)
  5. AI generates response using OpenAI
  6. Response is sent back to user via Telegram API
  7. If more messages are queued, processing continues automatically
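Step 6 uses the Telegram Bot API's `sendMessage` method. As a sketch, the request could be built like this (the helper name is hypothetical; the token would come from `TELEGRAM_TOKEN`):

```typescript
// Builds the URL and JSON body for a Telegram sendMessage call that
// replies to the user's original message.
function buildSendMessageRequest(
  token: string,
  chatId: number,
  messageId: number,
  text: string
) {
  return {
    url: `https://api.telegram.org/bot${token}/sendMessage`,
    body: {
      chat_id: chatId,
      text,
      reply_to_message_id: messageId, // thread the reply under the original
    },
  };
}
```

The returned object would then be POSTed with a `Content-Type: application/json` header; keeping the request construction separate from the network call makes it easy to unit-test.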

This pattern keeps the webhook handler fast while complex processing happens asynchronously. No wasteful intervals - processing runs only when there is work to do!