### OpenAI

Do not use the val.town std library; import from https://esm.town/v/cricks_unmixed4u/openai-client/main.tsx.

A TypeScript interface for interacting with OpenAI's chat models, with optional global rate limiting, using Val Town's SQLite for persistent rate-limit tracking.

**Key Components**

- **Message type:** defines the structure for chat messages (role and content).
- **`ChatOpenAI(model: string)`:** factory function returning an object with an `invoke(messages)` method. This method sends an array of messages to the specified OpenAI chat model and returns the assistant's response.
- **`GlobalRateLimitedChatOpenAI(model: string, requestsPerSecond: number)`:** decorator for `ChatOpenAI` that enforces a global rate limit (requests per second) using a persistent SQLite table.
- **`GlobalRateLimiter`:** class implementing the rate-limiting logic. It counts the requests in the current time window and throws an error if the limit is exceeded, tracking state in the `global_rate_limit_1` table in Val Town's SQLite.
- **`ensureGlobalRateLimitTableExists`:** ensures the rate-limit tracking table exists in the database at startup.

**Usage**

- Use `ChatOpenAI(model)` for direct, unlimited access to OpenAI chat completions.
- Use `GlobalRateLimitedChatOpenAI(model, requestsPerSecond)` to enforce a global rate limit on chat completions, suitable for shared or public-facing endpoints.

**Val Town/Platform Notes**

- Uses Val Town's standard SQLite API for persistent storage.
- Designed for server-side use (no browser-specific code).
- No secrets are hardcoded; OpenAI API keys are managed by the OpenAI SDK/environment.

### Email
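The windowed check that `GlobalRateLimiter` performs can be sketched with an in-memory counter. This is a simplified, hypothetical stand-in: the real class persists its counts in the `global_rate_limit_1` SQLite table so the limit holds globally, whereas this sketch only tracks timestamps in a local array.

```typescript
// Hypothetical in-memory sketch of the GlobalRateLimiter logic described above.
// The real val stores request timestamps in Val Town SQLite (global_rate_limit_1);
// here a plain array stands in for that table.
class GlobalRateLimiter {
  private timestamps: number[] = [];

  constructor(private requestsPerSecond: number) {}

  // Throws if the limit for the current 1-second window is already reached.
  check(now: number = Date.now()): void {
    const windowStart = now - 1000; // 1-second window
    this.timestamps = this.timestamps.filter((t) => t > windowStart);
    if (this.timestamps.length >= this.requestsPerSecond) {
      throw new Error("Rate limit exceeded");
    }
    this.timestamps.push(now);
  }
}

const limiter = new GlobalRateLimiter(2);
limiter.check(0);
limiter.check(100);
let limited = false;
try {
  limiter.check(200); // third request inside the same 1-second window
} catch {
  limited = true;
}
console.log(limited); // true: the third request in the window is rejected
```

The decorator form described above would call `check()` before each `invoke(messages)` and let the thrown error propagate to the caller.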
---
description: You can use openai-client when integrating vals to an LLM
globs:
alwaysApply: false
---

TypeScript interface for interacting with OpenAI's chat models, with optional global rate limiting, using Val Town's SQLite for persistent rate-limit tracking.
Note: when changing a SQLite table's schema, change the table's name (e.g., add `_2` or `_3`) to create a fresh table.

### OpenAI

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4o-mini",
});
console.log(completion.choices[0].message.content);
```
```ts
import { createClient } from "https://esm.sh/@supabase/supabase-js@2.39.3";
import { Hono } from "https://esm.sh/hono@3.11.7";
import { OpenAI } from "https://esm.sh/openai@4.28.0";
import { Resend } from "https://esm.sh/resend@3.2.0";
import { email } from "https://esm.town/v/std/email";

// Supabase configuration: both values come from the environment.
const SUPABASE_URL = Deno.env.get("SUPABASE_URL");
const SUPABASE_SERVICE_KEY = Deno.env.get("SUPABASE_SERVICE_KEY");
const supabase = createClient(SUPABASE_URL, SUPABASE_SERVICE_KEY);

// OpenAI configuration: the API key must come from the environment.
// (The original snippet shipped a hardcoded fallback key; a key committed
// to source is exposed and must be revoked -- never hardcode secrets.)
const OPENAI_API_KEY = Deno.env.get("OPENAI_API_KEY");
const openai = new OpenAI({ apiKey: OPENAI_API_KEY });

// Summarize a transcript using OpenAI
async function summarizeTranscript(text: string) {
  try {
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [
        // ...system/user prompt messages elided in the original snippet...
      ],
    });
    const summary = completion.choices[0]?.message?.content;
    if (!summary) {
      throw new Error("No summary generated by OpenAI");
    }
    console.log("Transcript summarized by OpenAI");
    console.log("OpenAI completion ID:", completion.id);
    return {
      summary,
      // ...remaining fields elided in the original snippet...
    };
  } catch (error) {
    console.error("Failed to summarize transcript with OpenAI:", error);
    throw error;
  }
}

async function saveFinalReport(summary: string, email: string, openaiThreadId: string) {
  const { data, error } = await supabase
    .from("final_reports")
    .insert([
      {
        body: summary,
        email: email,
        openai_thread_id: openaiThreadId,
      },
    ]);
  // ...error handling elided in the original snippet...
}

// Inside the request handler: summarize transcript with OpenAI and save to final_reports
//   try {
//     console.log("Starting OpenAI summarization...");
//     const summaryResult = await summarizeTranscript(body.text);
//     ...
```
```ts
// Imports added for completeness: fetchText is a Val Town helper val,
// and load comes from cheerio for HTML parsing.
import { fetchText } from "https://esm.town/v/stevekrouse/fetchText";
import { load } from "npm:cheerio";

const html = await fetchText("https://en.wikipedia.org/wiki/OpenAI");
const $ = load(html);
```
```ts
import { OpenAI } from "https://esm.town/v/std/openai";

// --- TYPE DEFINITIONS ---
// ...elided in the original snippet...

export default async function(req: Request): Promise<Response> {
  try {
    if (req.method === "POST") {
      const openai = new OpenAI();
      const body = await req.json();
      switch (action) {
        case "suggestHabit": {
          const completion = await openai.chat.completions.create({
            model: "gpt-4o",
            messages: [/* ...prompt elided in the original snippet... */],
          });
          // ...
        }
        case "suggestHabitSet": {
          const completion = await openai.chat.completions.create({
            model: "gpt-4o",
            messages: [/* ...prompt elided in the original snippet... */],
          });
          // ...
        }
        case "suggestIcons": {
          const completion = await openai.chat.completions.create({
            model: "gpt-4o",
            messages: [/* ...prompt elided in the original snippet... */],
          });
          // ...
        }
      }
    }
  } catch (error) {
    // ...error handling elided in the original snippet...
  }
}
```
This service will:

- Send transcript content via email to multiple recipients
- Save all transcripts to a Supabase database for persistence
- Generate AI-powered summaries using OpenAI GPT-4o-mini
- Save summaries to the final reports table
- Generate secure access tokens for each report

1. **Email Delivery** - Sends the transcript to configured recipients
2. **Transcript Storage** - Saves the original transcript to the `transcripts` table
3. **AI Summarization** - Uses OpenAI GPT-4o-mini to generate a professional summary
4. **Final Report Storage** - Saves the AI-generated summary to the `final_reports` table
5. **Token Generation** - Creates a secure access token in the `pricing_wizard_report_tokens` table

3. **Configure environment variables** (required; do not fall back to hardcoded keys):
   - `SUPABASE_SERVICE_KEY`
   - `OPENAI_API_KEY`
   - `RESEND_API_KEY`
4. **Test the API** with a sample message

Each final report contains:

- **Body:** the AI-generated summary of the transcript
- **Email:** the email address of the person who submitted the original transcript
- **OpenAI Thread ID:** unique identifier from OpenAI for the completion request
- **ID:** UUID primary key (automatically generated)
- **Created At:** timestamp of summary creation (automatically set by the database)

```sql
CREATE TABLE final_reports (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email TEXT NOT NULL,
  openai_thread_id TEXT NOT NULL,
  body TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);
```

## AI Summarization

The API uses OpenAI's GPT-4o-mini model to generate professional summaries of transcripts. The AI is prompted to:

- Focus on key points and decisions made
- Identify action items and important details

Each AI-generated summary is associated with:

- The original submitter's email address
- The unique OpenAI completion ID for traceability

### AI Configuration

- **Max Tokens:** 1000
- **Temperature:** 0.3 (for consistent, focused summaries)
- **API Key:** configured via environment variable `OPENAI_API_KEY`
- **Completion Tracking:** each summary includes the OpenAI completion ID for audit purposes

### Database Configuration

- **Supabase Project ID:** ffilnpatwtlzjrfbmvxk
- **Supabase Service Role Key:** configured via environment variable `SUPABASE_SERVICE_KEY`
- **OpenAI API Key:** configured via environment variable `OPENAI_API_KEY`

### Error Handling
```ts
import { OpenAI } from "https://esm.town/v/std/openai";

// --- TYPE DEFINITIONS ---
// ...elided in the original snippet...

export default async function(req: Request): Promise<Response> {
  const openai = new OpenAI();
  const url = new URL(req.url);
  const CORS_HEADERS = {
    // ...elided in the original snippet...
  };
  // ...request parsing elided...
  switch (action) {
    case "synthesizeProject": {
      const synthesisContent =
        `Current Date: ${new Date().toISOString().split("T")[0]}\n\nGoal: ${body.goal}`;
      const completion = await openai.chat.completions.create({
        model,
        messages: [
          { role: "system", content: PROJECT_SYNTHESIS_PROMPT },
          // ...user message elided in the original snippet...
        ],
      });
      // ...
    }
    // A second case builds its prompt from JSON.stringify(body.tasks, null, 2)
    // and calls the model with DAILY_REBALANCE_PROMPT as the system message;
    // a third prepends contextMessage to the conversation and calls the model
    // with CHAT_PROMPT plus the conversation. Both are elided here.
  }
}
```
```ts
import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";

/**
 * Practical Implementation of Collective Content Intelligence
 * Bridging advanced AI with collaborative content creation
 */
// ...snippet truncated in the original ("exp")...
```
```ts
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function(req: Request): Promise<Response> {
  if (req.method === "OPTIONS") {
    return new Response(null, {
      headers: {
        "Access-Control-Allow-Origin": "*",
        // ...remaining CORS headers elided in the original snippet...
      },
    });
  }
  // ...handler body elided in the original snippet...
}
```