3,374 results found for openai
```ts
// Reconstructed from a fragmented excerpt of a val that proxies chat completions.
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function (req: Request): Promise<Response> {
  const body = await req.json();
  const openai = new OpenAI();
  try {
    const stream = await openai.chat.completions.create(body);
    if (!body.stream) {
      return Response.json(stream); // non-streaming: return the completion as JSON
    }
    // ... (streaming branch not included in the excerpt)
  } catch (e) {
    return new Response(String(e), { status: 500 });
  }
}
```
```ts
// Excerpt from a request handler; wikiData is fetched earlier in the val.
const openCageKey = Deno.env.get("OPENCAGE_API_KEY");
const openAIKey = Deno.env.get("OPENAI_API_KEY");
if (!openCageKey || !openAIKey) {
  return new Response(
    JSON.stringify({ error: "API keys not set in environment variables." }),
    { status: 500, headers: { "Content-Type": "application/json" } },
  );
}

const summary = wikiData.extract || "A place of indescribable intrigue.";

// Prepare prompt for OpenAI (continuation reconstructed; the excerpt was cut off)
const prompt =
  `You are Richard Ayoade from Travel Man. Write a short, dry, funny fact about this place:\n${summary}`;

// Call OpenAI API for witty fact
const openaiResp = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${openAIKey}`,
    "Content-Type": "application/json",
  },
  // The request body was missing from the excerpt; the model name is assumed.
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
  }),
});
if (!openaiResp.ok) {
  const errorText = await openaiResp.text();
  return new Response(
    JSON.stringify({ error: `OpenAI API error: ${errorText}` }),
    { status: 500, headers: { "Content-Type": "application/json" } },
  );
}
const aiData = await openaiResp.json();
const wittyFact = aiData.choices?.[0]?.message?.content
  || "I couldn’t think of anything witty.";
```
### OpenAI

To use the `openai-client` library, import from https://esm.town/v/cricks_unmixed4u/openai-client/main.tsx and use the exported clients.

## GreenPTClient

```tsx
import { GreenPTClient } from "https://esm.town/v/cricks_unmixed4u/openai-client/main.tsx";

const client = GreenPTClient("green-l");
```
---
description: You can use openai-client when integrating vals with an LLM
globs:
alwaysApply: false
---
TypeScript interface for interacting with OpenAI's chat models, with optional global rate limiting.

Key Components

- Message Type: defines the structure of chat messages (role and content).
- ChatOpenAI(model: string): factory function returning an object with an invoke(messages) method.
- GlobalRateLimitedChatOpenAI(model: string, requestsPerSecond: number): decorator around ChatOpenAI that applies the global rate limit.
- GlobalRateLimiter: class implementing the rate-limiting logic; it checks the number of requests recorded for the current interval before allowing another call.
- ensureGlobalRateLimitTableExists: ensures the rate-limit tracking table exists in the database, creating it if necessary.

Usage

- Use ChatOpenAI(model) for direct, unlimited access to OpenAI chat completions.
- Use GlobalRateLimitedChatOpenAI(model, requestsPerSecond) to enforce a global rate limit on chat completions.
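The factory/decorator shape described above can be sketched as follows. This is a hedged illustration: the names and types are assumptions based on this description, not the library's actual source.

```typescript
// Assumed message shape, mirroring the "role and content" description above.
type Message = { role: "system" | "user" | "assistant"; content: string };

// Assumed client shape: whatever ChatOpenAI(model) returns exposes invoke().
type Chat = { invoke: (messages: Message[]) => Promise<string> };

// Wraps any Chat so every invoke first waits on a shared acquire() gate,
// leaving the call shape unchanged -- the decorator idea behind
// GlobalRateLimitedChatOpenAI.
function withRateLimit(inner: Chat, acquire: () => Promise<void>): Chat {
  return {
    async invoke(messages: Message[]): Promise<string> {
      await acquire(); // block (or throw) until the global limit allows a call
      return inner.invoke(messages);
    },
  };
}
```

Because the wrapper preserves invoke's signature, rate-limited and direct clients are interchangeable at call sites.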
Val Town/Platform Notes

- Uses Val Town’s standard SQLite API for persistent storage.
- Designed for server-side use (no browser-specific code).
- No secrets are hardcoded; OpenAI API keys are managed by the OpenAI SDK/environment.
- Note: when changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a fresh table.
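As an illustration of the per-second window check GlobalRateLimiter performs, here is a minimal fixed-window sketch. It substitutes an in-memory Map for the library's SQLite table, so unlike the real class it is not global across runs; only the window arithmetic is shown.

```typescript
// Fixed-window limiter sketch (assumed semantics): each one-second window
// admits at most requestsPerSecond calls, and further calls are rejected
// until the clock rolls into the next window.
class GlobalRateLimiter {
  private counts = new Map<number, number>();
  constructor(private requestsPerSecond: number) {}

  // Returns true and records the request if the current one-second window
  // still has capacity; returns false once the window is full.
  tryAcquire(nowMs: number = Date.now()): boolean {
    const windowId = Math.floor(nowMs / 1000);
    const used = this.counts.get(windowId) ?? 0;
    if (used >= this.requestsPerSecond) return false;
    this.counts.set(windowId, used + 1);
    return true;
  }
}
```

The persistent version would replace the Map lookup and increment with reads and writes against the tracking table that ensureGlobalRateLimitTableExists creates.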
### OpenAI

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  // The excerpt cut off here; a model must be specified (name assumed).
  model: "gpt-4o-mini",
});
```

A separate fragment in the same result shows a Hono wrapper around the client:

```ts
import { Hono } from "npm:hono";

const app = new Hono();
const GPT_WRAPPER_URL = "https://esm.town/v/cricks_unmixed4u/openai-client/main.tsx";

export default async (req: Request) => {
  // ... (handler body not included in the excerpt)
};
```