This guide covers deploying the Real-Time Data Processing System on various platforms, from local development to production environments.
- Deno: Version 1.40.0 or higher
- Git: For version control
- Node.js: Optional, for package management tools
```bash
git clone https://github.com/yourusername/real-time-data-processing-system.git
cd real-time-data-processing-system
```
```bash
# macOS/Linux
curl -fsSL https://deno.land/install.sh | sh

# Windows (PowerShell)
irm https://deno.land/install.ps1 | iex
```
```bash
# Development mode with auto-reload
deno run --allow-all --watch main.tsx

# Production mode
deno run --allow-all main.tsx
```
Open your browser and navigate to:
```
http://localhost:8000
```
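If everything is wired up, the server responds on that port. As a rough sketch of the kind of request handler `main.tsx` ultimately serves (the function and route names here are illustrative, not the actual module API):

```typescript
// Illustrative handler sketch -- the real routing lives in backend/index.ts.
// Request/Response are web-standard APIs available in Deno.
function handleRequest(req: Request): Response {
  const url = new URL(req.url);
  if (url.pathname === "/api/health") {
    // Health checks return JSON so monitors and load balancers can parse them
    return new Response(JSON.stringify({ status: "ok" }), {
      status: 200,
      headers: { "content-type": "application/json" },
    });
  }
  return new Response("Not Found", { status: 404 });
}

// In Deno, the handler would be attached with:
//   Deno.serve({ port: 8000 }, handleRequest);
```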
Visit Val Town and create an account.
- Click "New Val"
- Choose "HTTP" trigger type
- Copy the contents of `main.tsx` into the editor
Create the following file structure in Val Town:
```
/main.tsx                                  # Entry point
/backend/index.ts                          # Main server
/backend/ingestion/eventIngestion.ts
/backend/processing/streamProcessor.ts
/backend/storage/database.ts
/backend/monitoring/metricsCollector.ts
/backend/monitoring/healthChecker.ts
/frontend/index.html
/frontend/index.tsx
/shared/types.ts
/shared/utils.ts
```
Click "Save" to deploy your val. Val Town will provide a public URL.
Create a `Dockerfile`:

```dockerfile
FROM denoland/deno:1.40.0

WORKDIR /app

# Copy application source
COPY . .

# Cache dependencies
RUN deno cache main.tsx

# Expose port
EXPOSE 8000

# Run the application
CMD ["run", "--allow-all", "main.tsx"]
```
```bash
# Build the image
docker build -t real-time-processor .

# Run the container
docker run -p 8000:8000 real-time-processor
```
Create `docker-compose.yml`:

```yaml
version: '3.8'
services:
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      - NODE_ENV=production
    volumes:
      - ./data:/app/data
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - app
    restart: unless-stopped
```
- Install AWS CLI and configure credentials
- Create deployment package:
```bash
zip -r deployment.zip . -x "*.git*" "node_modules/*"
```
- Deploy using AWS CLI:
```bash
aws lambda create-function \
  --function-name real-time-processor \
  --runtime provided.al2 \
  --role arn:aws:iam::account:role/lambda-role \
  --handler main.handler \
  --zip-file fileb://deployment.zip
```
- Push Docker image to ECR
- Create ECS task definition
- Deploy to ECS cluster
```bash
# Build and push to Container Registry
gcloud builds submit --tag gcr.io/PROJECT_ID/real-time-processor

# Deploy to Cloud Run
gcloud run deploy real-time-processor \
  --image gcr.io/PROJECT_ID/real-time-processor \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
```
```bash
# Create resource group
az group create --name real-time-processor --location eastus

# Deploy container
az container create \
  --resource-group real-time-processor \
  --name real-time-processor \
  --image your-registry/real-time-processor \
  --ports 8000 \
  --dns-name-label real-time-processor
```
`deployment.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: real-time-processor
spec:
  replicas: 3
  selector:
    matchLabels:
      app: real-time-processor
  template:
    metadata:
      labels:
        app: real-time-processor
    spec:
      containers:
        - name: app
          image: real-time-processor:latest
          ports:
            - containerPort: 8000
          env:
            - name: NODE_ENV
              value: "production"
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
```
`service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: real-time-processor-service
spec:
  selector:
    app: real-time-processor
  ports:
    - port: 80
      targetPort: 8000
  type: LoadBalancer
```
```bash
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
```
```bash
# Server Configuration
PORT=8000
NODE_ENV=production

# Database Configuration
DATABASE_URL=sqlite:./data/events.db
DATABASE_POOL_SIZE=10

# Monitoring Configuration
METRICS_RETENTION_HOURS=24
HEALTH_CHECK_INTERVAL=30

# Security Configuration
RATE_LIMIT_REQUESTS_PER_SECOND=10000
CORS_ORIGIN=*

# Performance Configuration
PROCESSING_INSTANCES=5
BATCH_SIZE=100
WINDOW_SIZE_MS=10000
```
Create `config.json`:

```json
{
  "server": {
    "port": 8000,
    "host": "0.0.0.0"
  },
  "database": {
    "type": "sqlite",
    "path": "./data/events.db",
    "poolSize": 10
  },
  "processing": {
    "instances": 5,
    "windowSizeMs": 10000,
    "batchSize": 100
  },
  "monitoring": {
    "metricsRetentionHours": 24,
    "healthCheckIntervalSeconds": 30
  }
}
```
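To guard against partially filled config files, a loader can merge the file's contents over built-in defaults so that missing keys fall back safely. A minimal sketch, assuming the key names from `config.json` above (the `mergeConfig` helper itself is hypothetical, not part of the codebase):

```typescript
// Defaults mirror the documented config.json values; a partial config
// overrides only the keys it provides.
interface Config {
  server: { port: number; host: string };
  processing: { instances: number; windowSizeMs: number; batchSize: number };
}

const DEFAULTS: Config = {
  server: { port: 8000, host: "0.0.0.0" },
  processing: { instances: 5, windowSizeMs: 10000, batchSize: 100 },
};

function mergeConfig(partial: {
  server?: Partial<Config["server"]>;
  processing?: Partial<Config["processing"]>;
}): Config {
  return {
    server: { ...DEFAULTS.server, ...(partial.server ?? {}) },
    processing: { ...DEFAULTS.processing, ...(partial.processing ?? {}) },
  };
}
```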
```sql
-- Create indexes for better query performance
CREATE INDEX idx_events_timestamp ON events(timestamp DESC);
CREATE INDEX idx_events_type_timestamp ON events(type, timestamp DESC);
CREATE INDEX idx_metrics_timestamp ON metrics(timestamp DESC);

-- Configure SQLite for better performance
PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL;
PRAGMA cache_size = 10000;
PRAGMA temp_store = memory;
```
```bash
# Increase file descriptor limits
ulimit -n 65536

# Optimize network settings
echo 'net.core.somaxconn = 65536' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_max_syn_backlog = 65536' >> /etc/sysctl.conf

# Apply changes
sysctl -p
```
The system provides several health check endpoints:
```bash
# Basic health check
curl http://localhost:8000/api/health

# Detailed metrics
curl http://localhost:8000/api/metrics

# System status
curl http://localhost:8000/api/system/status
```
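A monitoring script can act on these endpoints programmatically. Below is a sketch of a client-side helper that interprets a health payload; note that the `{ status, checks }` shape is an assumption for illustration, not the documented response schema:

```typescript
// Decide whether a health-check payload indicates a healthy service.
// The payload shape here is assumed, not taken from /api/health's real output.
interface HealthPayload {
  status: string;
  checks?: Record<string, boolean>;
}

function isHealthy(payload: HealthPayload): boolean {
  if (payload.status !== "ok") return false;
  // Every individual subsystem check must also pass
  return Object.values(payload.checks ?? {}).every(Boolean);
}
```

A monitoring cron could `fetch` `/api/health`, parse the JSON, and alert an operator whenever `isHealthy` returns `false`.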
```typescript
// Configure structured logging (Deno reads env vars via Deno.env)
const logger = {
  level: Deno.env.get('LOG_LEVEL') ?? 'info',
  format: 'json',
  transports: [
    { type: 'console' },
    { type: 'file', filename: 'app.log' }
  ]
};
```
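A minimal implementation of that configuration might emit one JSON line per entry and drop messages below the configured level. A sketch only; the `formatEntry` helper is illustrative, not part of the codebase:

```typescript
// Map levels to numeric severity so they can be compared for filtering
type Level = "debug" | "info" | "warn" | "error";
const LEVELS: Record<Level, number> = { debug: 0, info: 1, warn: 2, error: 3 };

// Returns a single JSON log line, or null when the message is filtered out.
// Transport wiring (console vs. file) is omitted for brevity.
function formatEntry(
  minLevel: Level,
  level: Level,
  msg: string,
  fields: Record<string, unknown> = {},
): string | null {
  if (LEVELS[level] < LEVELS[minLevel]) return null; // below threshold
  return JSON.stringify({ ts: new Date().toISOString(), level, msg, ...fields });
}
```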
Add Prometheus endpoint:
```typescript
app.get('/metrics', (c) => {
  // Return Prometheus-formatted metrics
  return c.text(prometheusMetrics);
});
```
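Building the `prometheusMetrics` string amounts to emitting the Prometheus text exposition format: a `# TYPE` comment line followed by `name value` pairs. A sketch with example metric names (the names are placeholders, not the system's actual metric set):

```typescript
// Render a counter map in the Prometheus text exposition format.
function toPrometheus(counters: Record<string, number>): string {
  const lines: string[] = [];
  for (const [name, value] of Object.entries(counters)) {
    lines.push(`# TYPE ${name} counter`); // metadata line for each metric
    lines.push(`${name} ${value}`);       // sample line: name then value
  }
  return lines.join("\n") + "\n";
}
```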
Import the provided Grafana dashboard configuration from monitoring/grafana-dashboard.json.
```nginx
server {
    listen 443 ssl http2;
    server_name your-domain.com;

    ssl_certificate /path/to/certificate.crt;
    ssl_certificate_key /path/to/private.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Required for the system's WebSocket connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```
```bash
# Allow only necessary ports
ufw allow 22/tcp   # SSH
ufw allow 80/tcp   # HTTP
ufw allow 443/tcp  # HTTPS
ufw enable
```
Create a backup script (e.g. `backup-script.sh`):

```bash
#!/bin/bash
# Back up the SQLite database with a timestamped filename
DATE=$(date +%Y%m%d_%H%M%S)
sqlite3 ./data/events.db ".backup ./backups/events_$DATE.db"
```

Schedule it with cron to run daily at 02:00:

```
0 2 * * * /path/to/backup-script.sh
```
- Regular backups: Automated daily backups
- Multi-region deployment: Deploy across multiple regions
- Data replication: Real-time data replication
- Recovery procedures: Documented recovery steps
- **High Memory Usage**
  - Check for memory leaks in processing instances
  - Adjust batch sizes and window configurations
  - Monitor garbage collection
- **High Latency**
  - Check database query performance
  - Monitor network connectivity
  - Review processing instance load
- **Connection Issues**
  - Verify WebSocket configuration
  - Check firewall settings
  - Monitor connection pool usage
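For the latency case, a small rolling average over recent processing times makes drift easy to spot without external tooling. A generic sketch (not part of the shipped `metricsCollector`):

```typescript
// Rolling mean over the last N samples, dropping the oldest as new ones arrive.
class RollingMean {
  private samples: number[] = [];
  constructor(private readonly capacity: number) {}

  add(value: number): void {
    this.samples.push(value);
    // Evict the oldest sample once capacity is exceeded
    if (this.samples.length > this.capacity) this.samples.shift();
  }

  mean(): number {
    if (this.samples.length === 0) return 0;
    return this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
  }
}
```

Feeding per-event processing times into `add()` and logging `mean()` periodically gives a cheap latency trend line to compare against the troubleshooting steps above.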
```bash
# Run with debug logging
DEBUG=* deno run --allow-all main.tsx

# Enable performance profiling via the V8 inspector
deno run --allow-all --inspect main.tsx
```
```bash
# Monitor real-time logs
tail -f app.log | grep ERROR

# Analyze performance metrics
grep "processing_time" app.log | awk '{sum+=$3; count++} END {print "Average:", sum/count}'
```