```ts
import { email } from "https://esm.town/v/std/email";
import { extractValInfo } from "https://esm.town/v/stevekrouse/extractValInfo";
import { OpenAI } from "npm:openai";

function stripHtmlBackticks(html: string): string {
  // … (body elided in the search preview)
}

export default async function(e: Email) {
  const openai = new OpenAI();
  console.log(`from: ${e.from} to: ${e.to} subject: ${e.subject}, cc: ${e.cc}, bcc: ${e.bcc}`);

  const summary = await openai.chat.completions.create({
    messages: [
      {
        // … (rest of the call elided in the search preview)
```
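The snippet above is truncated in the search preview. Purely as an illustration, an email-summarizing val along these lines might finish as follows; the model name, prompt wording, and reply-by-email step are assumptions, not the original author's code:

```ts
import { email } from "https://esm.town/v/std/email";
import { OpenAI } from "npm:openai";

// Illustrative sketch only, not the original val's implementation.
export default async function(e: Email) {
  const openai = new OpenAI();

  const summary = await openai.chat.completions.create({
    model: "gpt-4o-mini", // assumption: any chat model would do here
    messages: [
      { role: "system", content: "Summarize the following email in a few sentences." },
      { role: "user", content: `Subject: ${e.subject}\n\n${e.text ?? ""}` },
    ],
  });

  // std/email sends to the val's owner when no recipient is specified.
  await email({
    subject: `Summary: ${e.subject}`,
    text: summary.choices[0].message.content ?? "",
  });
}
```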
Note: When changing a SQLite table's schema, change the table's name (e.g., add `_2` or `_3`) to create a fresh table.

### OpenAI

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
```
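The example above is cut off in the preview; a complete call might look like this, where the model name and token limit are assumptions rather than part of the original snippet:

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4o-mini", // assumption: pick any chat model the std client supports
  max_tokens: 30,
});

// The assistant's reply lives on the first choice.
console.log(completion.choices[0].message.content);
```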
```ts
import { OpenAI } from "https://esm.sh/openai@4.85.1";
import { GlobalRateLimiter } from "./GlobalRateLimiter.tsx";

// Chat message shape: role and content (described in the notes below).
interface Message {
  role: string;
  content: string;
}

interface ChatOpenAI {
  invoke(messages: Message[]): Promise<string>;
}

// Factory: a thin wrapper around the OpenAI SDK for one model.
export function ChatOpenAI(model: string): ChatOpenAI {
  const openai = new OpenAI();
  return {
    invoke: async (messages: Message[]): Promise<string> => {
      const completion = await openai.chat.completions.create({
        messages: messages.map(message => ({
          role: message.role as "user" | "assistant" | "system",
          content: message.content,
        })),
        model,
      });
      return completion.choices[0].message.content ?? "";
    },
  };
}

// Decorator: same interface, but every call first passes a global rate-limit check.
export function GlobalRateLimitedChatOpenAI(model: string, requestsPerSecond: number): ChatOpenAI {
  const openAi = ChatOpenAI(model);
  const rateLimiter = new GlobalRateLimiter(requestsPerSecond);
  return {
    invoke: async (messages: Message[]): Promise<string> => {
      await rateLimiter.check();
      return openAi.invoke(messages);
    },
  };
}
```
---
description: You can use openai-client when integrating vals with an LLM
globs:
alwaysApply: false
---

A TypeScript interface for interacting with OpenAI's chat models, with optional global rate limiting, using Val Town's SQLite for persistent rate-limit tracking.

**Key Components**

- **Message type**: defines the structure for chat messages (role and content).
- **ChatOpenAI(model: string)**: factory function returning an object with an `invoke(messages)` method. This method sends an array of messages to the specified OpenAI chat model and returns the assistant's response.
- **GlobalRateLimitedChatOpenAI(model: string, requestsPerSecond: number)**: decorator for `ChatOpenAI` that enforces a global rate limit (requests per second) using a persistent SQLite table.
- **GlobalRateLimiter**: class that implements the rate-limiting logic. It checks the number of requests in the current time window and throws an error if the limit is exceeded. It uses a table (`global_rate_limit_1`) in Val Town's SQLite.
- **ensureGlobalRateLimitTableExists**: ensures the rate-limit tracking table exists in the database at startup.

**Usage**

- Use `ChatOpenAI(model)` for direct, unlimited access to OpenAI chat completions.
- Use `GlobalRateLimitedChatOpenAI(model, requestsPerSecond)` to enforce a global rate limit on chat completions, suitable for shared or public-facing endpoints. A usage sketch follows below.

**Val Town / Platform Notes**

- Uses Val Town's standard SQLite API for persistent storage.
- Designed for server-side use (no browser-specific code).
- No secrets are hardcoded; OpenAI API keys are managed by the OpenAI SDK/environment.
- Do not use the val.town std library; import from https://esm.town/v/cricks_unmixed4u/openai-client/main.tsx instead.
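A minimal usage sketch based on the description above, not the val's own documentation; the export names and message shape come from the earlier snippet, and the model name is an assumption:

```ts
import {
  ChatOpenAI,
  GlobalRateLimitedChatOpenAI,
} from "https://esm.town/v/cricks_unmixed4u/openai-client/main.tsx";

// Direct, unlimited access to chat completions.
const chat = ChatOpenAI("gpt-4o-mini"); // model name is an assumption
const reply = await chat.invoke([
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Summarize Val Town in one sentence." },
]);
console.log(reply);

// Globally rate-limited variant for shared or public-facing endpoints:
// at most 2 requests per second across all callers, tracked in SQLite.
const limitedChat = GlobalRateLimitedChatOpenAI("gpt-4o-mini", 2);
console.log(await limitedChat.invoke([{ role: "user", content: "Hello!" }]));
```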
```ts
const keywords = ["GPT", "OpenAI", "Transformer", "multimodal", /* … */];
```

```js
module.exports = ["GPT", "OpenAI", "Transformer", "multimodal", /* … */];
```
```ts
// … (earlier interfaces elided in the search preview)

export interface OpenAIServiceConfig {
  API_KEY?: string;
  BASE_URL: string;
}

// Fields of the enclosing services-config interface (its name is elided):
  TAVILY: TavilyServiceConfig;
  FIRECRAWL: FirecrawlServiceConfig;
  OPENAI: OpenAIServiceConfig;
  RESEND: ResendServiceConfig;
}
```
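For illustration only, a value satisfying `OpenAIServiceConfig` might look like this; the environment-variable name and base URL are assumptions:

```ts
// Hypothetical example value for the interface above.
const openAIConfig: OpenAIServiceConfig = {
  API_KEY: Deno.env.get("OPENAI_API_KEY"), // optional; may instead be handled by the SDK
  BASE_URL: "https://api.openai.com/v1",
};
```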
Configure the following variables in your environment:

- `AGENT_API_KEY` (a token of your choosing, used to secure the agent.tsx POST endpoint)
- `OPENAI_API_KEY` (an OpenAI API key)
- `EXA_API_KEY` (optional, though needed if you use the web search tool)
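Inside a val, these variables can be read through the Deno environment API; a small sketch (the error handling is illustrative, not prescribed by the original):

```ts
// Read the variables described above (Val Town exposes them via Deno.env).
const AGENT_API_KEY = Deno.env.get("AGENT_API_KEY");
const OPENAI_API_KEY = Deno.env.get("OPENAI_API_KEY");
const EXA_API_KEY = Deno.env.get("EXA_API_KEY"); // optional: only for the web search tool

if (!AGENT_API_KEY || !OPENAI_API_KEY) {
  throw new Error("AGENT_API_KEY and OPENAI_API_KEY must be set");
}
```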
```ts
import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";

/**
 * Practical Implementation of Collective Content Intelligence
 * Bridging advanced AI with collaborative content creation
 */
exp // … (truncated in the search preview)
```
```ts
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function(req: Request): Promise<Response> {
  if (req.method === "OPTIONS") {
    return new Response(null, {
      headers: {
        "Access-Control-Allow-Origin": "*",
```
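The handler above is truncated. A sketch of how such a CORS-enabled endpoint might be completed, assuming a JSON POST body with a `prompt` field and a model name that are not part of the original snippet:

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function(req: Request): Promise<Response> {
  const corsHeaders = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
  };

  // Answer the CORS preflight request.
  if (req.method === "OPTIONS") {
    return new Response(null, { headers: corsHeaders });
  }

  // Assumption: the client POSTs JSON like { "prompt": "..." }.
  const { prompt } = await req.json();

  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // model name is an assumption
    messages: [{ role: "user", content: prompt }],
  });

  return new Response(
    JSON.stringify({ reply: completion.choices[0].message.content }),
    { headers: { ...corsHeaders, "Content-Type": "application/json" } },
  );
}
```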