Search results for "openai": 3,377 results (3,282 in code).

### OpenAI
Client library; import from https://esm.town/v/cricks_unmixed4u/openai-client/main.tsx.
A TypeScript interface for interacting with OpenAI's chat models, with optional global rate limiting.
Key Components
- Message Type: Defines the structure for chat messages (role and content).
- ChatOpenAI(model: string): Factory function returning an object with an invoke(messages) method.
- GlobalRateLimitedChatOpenAI(model: string, requestsPerSecond: number): Decorator for ChatOpenAI that enforces a global rate limit.
- GlobalRateLimiter: Class that implements the rate-limiting logic by checking the number of requests made against the configured per-second limit.
- ensureGlobalRateLimitTableExists: Ensures the rate-limit tracking table exists in the database, creating it if necessary.

Usage
- Use ChatOpenAI(model) for direct, unlimited access to OpenAI chat completions.
- Use GlobalRateLimitedChatOpenAI(model, requestsPerSecond) to enforce a global rate limit on chat completions.

Val Town/Platform Notes
- Uses Val Town's standard SQLite API for persistent storage.
- Designed for server-side use (no browser-specific code).
- No secrets are hardcoded; OpenAI API keys are managed by the OpenAI SDK/environment.
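A minimal usage sketch based on the description above; the exported names, the Message type, and the exact return shape of invoke() are assumptions drawn from this summary rather than confirmed against the val's source:

```ts
// Hypothetical usage of the openai-client val described above.
import {
  ChatOpenAI,
  GlobalRateLimitedChatOpenAI,
  type Message,
} from "https://esm.town/v/cricks_unmixed4u/openai-client/main.tsx";

const messages: Message[] = [
  { role: "user", content: "Summarize this meeting in one sentence." },
];

// Direct, unlimited access to chat completions.
const chat = ChatOpenAI("gpt-4o-mini");
const reply = await chat.invoke(messages);

// Same interface, but limited to 1 request per second across all callers,
// tracked in a shared SQLite table.
const limitedChat = GlobalRateLimitedChatOpenAI("gpt-4o-mini", 1);
const limitedReply = await limitedChat.invoke(messages);

console.log(reply, limitedReply);
```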
### Email
---
description: You can use openai-client when integrating vals with an LLM
globs:
alwaysApply: false
---
Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a new table with the new schema.
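A short sketch of that convention using Val Town's std/sqlite module; the table and column names here are hypothetical:

```ts
import { sqlite } from "https://esm.town/v/std/sqlite";

// Hypothetical example: instead of ALTERing an existing `reports` table,
// create a renamed `reports_2` table that carries the new schema.
await sqlite.execute(`
  CREATE TABLE IF NOT EXISTS reports_2 (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    email TEXT NOT NULL,
    body TEXT NOT NULL,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
  )
`);
```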
### OpenAI
```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4o-mini",
  max_tokens: 30,
});
console.log(completion.choices[0].message.content);
```
import { createClient } from "https://esm.sh/@supabase/supabase-js@2.39.3";
import { Hono } from "https://esm.sh/hono@3.11.7";
import { OpenAI } from "https://esm.sh/openai@4.28.0";
import { Resend } from "https://esm.sh/resend@3.2.0";
import { email } from "https://esm.town/v/std/email";
const supabase = createClient(SUPABASE_URL, SUPABASE_SERVICE_KEY);
// OpenAI configuration
const OPENAI_API_KEY = Deno.env.get("OPENAI_API_KEY")
|| "sk-proj-mvFlpqW-sHsNARvC5w8ZwDVLbSXoqXSjmYdndyvySuw5ieRu7K3FrFOtgs9JubvlwOk7ETk8VeT3BlbkFJ
const openai = new OpenAI({
apiKey: OPENAI_API_KEY,
});
}
// Function to summarize transcript using OpenAI
async function summarizeTranscript(text: string) {
try {
const completion = await openai.chat.completions.create({
model: "gpt-4o-mini",
messages: [
const summary = completion.choices[0]?.message?.content;
if (!summary) {
throw new Error("No summary generated by OpenAI");
}
console.log("Transcript summarized by OpenAI");
console.log("OpenAI completion ID:", completion.id);
return {
  // Field names inferred from saveFinalReport() and the final_reports schema below.
  summary,
  openaiThreadId: completion.id,
};
} catch (error) {
console.error("Failed to summarize transcript with OpenAI:", error);
throw error;
}
}
}
async function saveFinalReport(summary: string, email: string, openaiThreadId: string) {
try {
const { data, error } = await supabase
  .from("final_reports") // table name taken from the pipeline description below
  .insert([
    {
      body: summary,
      email: email,
      openai_thread_id: openaiThreadId,
    },
  ]);
}
// Summarize transcript with OpenAI and save to final_reports
try {
console.log("Starting OpenAI summarization...");
const summaryResult = await summarizeTranscript(body.text);
const html = await fetchText(
"https://en.wikipedia.org/wiki/OpenAI",
);
const $ = load(html);
import { OpenAI } from "https://esm.town/v/std/openai";
// --- TYPE DEFINITIONS ---
try {
if (req.method === "POST") {
const openai = new OpenAI();
const body = await req.json();
switch (action) {
case "suggestHabit": {
const completion = await openai.chat.completions.create({
model: "gpt-4o",
messages: [
}
case "suggestHabitSet": {
const completion = await openai.chat.completions.create({
model: "gpt-4o",
messages: [
}
case "suggestIcons": {
const completion = await openai.chat.completions.create({
model: "gpt-4o",
messages: [
- Send transcript content via email to multiple recipients
- Save all transcripts to Supabase database for persistence
- Generate AI-powered summaries using OpenAI GPT-4o-mini
- Save summaries to final reports table
- Generate secure access tokens for each report
1. **Email Delivery** - Sends the transcript to configured recipients
2. **Transcript Storage** - Saves the original transcript to the `transcripts` table
3. **AI Summarization** - Uses OpenAI GPT-4o-mini to generate a professional summary
4. **Final Report Storage** - Saves the AI-generated summary to the `final_reports` table
5. **Token Generation** - Creates a secure access token in the `pricing_wizard_report_tokens` table (sketched below)
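A condensed sketch of those five steps; it assumes the `resend`, `supabase`, `summarizeTranscript`, and `saveFinalReport` names from the code above are in scope, and the sender address, recipient list, and token-table columns are hypothetical:

```ts
// Hypothetical end-to-end flow for one submitted transcript.
async function handleTranscript(body: { text: string; email: string }) {
  // 1. Email delivery to configured recipients.
  await resend.emails.send({
    from: "reports@example.com", // placeholder sender
    to: RECIPIENTS,              // placeholder recipient list
    subject: "New transcript",
    text: body.text,
  });

  // 2. Transcript storage.
  await supabase.from("transcripts").insert([{ body: body.text, email: body.email }]);

  // 3. AI summarization with GPT-4o-mini.
  const { summary, openaiThreadId } = await summarizeTranscript(body.text);

  // 4. Final report storage.
  await saveFinalReport(summary, body.email, openaiThreadId);

  // 5. Secure access token for the report.
  const token = crypto.randomUUID();
  await supabase.from("pricing_wizard_report_tokens").insert([{ token, email: body.email }]);

  return token;
}
```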
3. **Configure environment variables** (optional - falls back to hardcoded keys):
- `SUPABASE_SERVICE_KEY`
- `OPENAI_API_KEY`
- `RESEND_API_KEY`
4. **Test the API** with a sample message
- **Body:** The AI-generated summary of the transcript
- **Email:** The email address of the person who submitted the original transcript
- **OpenAI Thread ID:** Unique identifier from OpenAI for the completion request
- **ID:** UUID primary key (automatically generated)
- **Created At:** Timestamp of summary creation (automatically set by database)
CREATE TABLE final_reports (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email TEXT NOT NULL,
  openai_thread_id TEXT NOT NULL,
  body TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);
## AI Summarization
The API uses OpenAI's GPT-4o-mini model to generate professional summaries of transcripts. The AI is instructed to:
- Focus on key points and decisions made
- Identify action items and important details
Each AI-generated summary is associated with:
- The original submitter's email address
- The unique OpenAI completion ID for traceability
### AI Configuration
- **Max Tokens:** 1000
- **Temperature:** 0.3 (for consistent, focused summaries)
- **API Key:** Configured via environment variable `OPENAI_API_KEY`
- **Completion Tracking:** Each summary includes the OpenAI completion ID for audit purposes
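Put together, the summarization call likely looks like the sketch below; the system prompt wording is an assumption based on the focus points above, while the model, max_tokens, and temperature values are taken from this configuration:

```ts
import { OpenAI } from "https://esm.sh/openai@4.28.0";

const openai = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY") });
const transcriptText = "…full transcript text…"; // placeholder input

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  max_tokens: 1000,
  temperature: 0.3,
  messages: [
    {
      role: "system",
      content:
        "Summarize the transcript professionally. Focus on key points and decisions made; identify action items and important details.",
    },
    { role: "user", content: transcriptText },
  ],
});

// The completion ID is stored alongside the summary for audit purposes.
console.log("OpenAI completion ID:", completion.id);
```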
### Database Configuration
- **Supabase Project ID:** ffilnpatwtlzjrfbmvxk
- **Supabase Service Role Key:** Configured via environment variable `SUPABASE_SERVICE_KEY`
- **OpenAI API Key:** Configured via environment variable `OPENAI_API_KEY`
### Error Handling
import { OpenAI } from "https://esm.town/v/std/openai";
// --- TYPE DEFINITIONS ---
}
export default async function(req: Request): Promise<Response> {
const openai = new OpenAI();
const url = new URL(req.url);
const CORS_HEADERS = {
case "synthesizeProject": {
const synthesisContent = `Current Date: ${new Date().toISOString().split("T")[0]}\n\nG
const completion = await openai.chat.completions.create({
model,
messages: [{ role: "system", content: PROJECT_SYNTHESIS_PROMPT }, {
JSON.stringify(body.tasks, null, 2)
}`;
const completion = await openai.chat.completions.create({
model,
messages: [{ role: "system", content: DAILY_REBALANCE_PROMPT }, {
conversation.unshift(contextMessage);
}
const completion = await openai.chat.completions.create({
model,
messages: [{ role: "system", content: CHAT_PROMPT }, ...conversation],
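The fragments above all follow one pattern: an HTTP handler that switches on an action field and routes each case to a chat completion with a different system prompt. A self-contained sketch of that pattern, with hypothetical prompt text, CORS headers, and response shape:

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

// Placeholder prompt; the real val defines PROJECT_SYNTHESIS_PROMPT,
// DAILY_REBALANCE_PROMPT, and CHAT_PROMPT separately.
const CHAT_PROMPT = "You are a helpful project assistant.";
const CORS_HEADERS = {
  "Access-Control-Allow-Origin": "*",
  "Content-Type": "application/json",
};

export default async function (req: Request): Promise<Response> {
  const openai = new OpenAI();
  const body = await req.json();
  const model = "gpt-4o";

  switch (body.action) {
    case "chat": {
      const completion = await openai.chat.completions.create({
        model,
        messages: [{ role: "system", content: CHAT_PROMPT }, ...body.conversation],
      });
      return new Response(
        JSON.stringify({ reply: completion.choices[0]?.message?.content }),
        { headers: CORS_HEADERS },
      );
    }
    default:
      return new Response(JSON.stringify({ error: "Unknown action" }), {
        status: 400,
        headers: CORS_HEADERS,
      });
  }
}
```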