# hello-realtime

**Hello Realtime** is an OpenAI Realtime app that supports both WebRTC and SIP (telephone) users. You can access the app via WebRTC at [hello-realtime.val.run](https://hello-realtime.val.run), or via SIP by calling 425-800-0042. Both kinds of session are monitored through a server-side websocket interface.

If you remix the app, you'll just need to pop in your own `OPENAI_API_KEY` (from [platform.openai.com](https://platform.openai.com)), and if you want SIP, the `OPENAI_SIGNING_SECRET`.

## Architecture

1. **WebRTC Flow**:
   - Browser connects to the frontend and creates a WebRTC offer
   - `/rtc` endpoint handles SDP negotiation with OpenAI
   - Observer established to monitor the session
2. **SIP Flow**:
   - OpenAI posts a webhook to the SIP endpoint when a call comes in
   - Webhook signature is verified with `OPENAI_SIGNING_SECRET`
   - Call is accepted via the Realtime API, then observed like a WebRTC session
```ts
// Observer: attach a websocket to an in-progress Realtime call to monitor it.
observer.post("/:callId", async (c) => {
  const callId = c.req.param("callId");
  const url = `wss://api.openai.com/v1/realtime?call_id=${callId}`;
  const ws = new WebSocket(url, { headers: makeHeaders() });
  ws.on("open", () => {
    // ...
  });
});
```
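Once open, the observer socket receives the JSON server events the Realtime API emits for the session. A minimal sketch of a message handler — the `describeEvent` helper is an illustrative assumption, not part of the app, though the dotted event-type strings follow the Realtime API convention:

```typescript
// Hypothetical sketch: route incoming Realtime server events by their "type" field.
type RealtimeEvent = { type: string; [key: string]: unknown };

export function describeEvent(raw: string): string {
  const event = JSON.parse(raw) as RealtimeEvent;
  switch (event.type) {
    case "session.created":
      return "session is up";
    case "response.done":
      return "model finished a response";
    case "error":
      return `error event: ${JSON.stringify(event)}`;
    default:
      return `unhandled event: ${event.type}`;
  }
}

// Wire-up (assumes a ws-style socket like the one in the route above):
// ws.on("message", (data) => console.log(describeEvent(String(data))));
```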
```html
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>OpenAI Realtime API Voice Agent</title>
<style>
  :root {
```
```js
// Persist the selected model; browser-search and code-interpreter tools
// are only available for openai/gpt-oss models, so disable them otherwise.
export function onModelChange(ctx) {
  try {
    localStorage.setItem('groq_model', ctx.selectedModel || 'openai/gpt-oss-120b');
  } catch (_) { /* ignore */ }
  const isOss = !!(ctx.selectedModel && ctx.selectedModel.startsWith('openai/gpt-oss'));
  if (!isOss) {
    ctx.useBrowserSearch = false;
    ctx.useCodeInterpreter = false;
  }
  ctx.settingsChanged = true;
}
```
### OpenAI

Do not use the val.town std library; import from https://esm.town/v/cricks_unmixed4u/openai-client/main.tsx and use:

## GreenPTClient

```tsx
import { GreenPTClient } from "https://esm.town/v/cricks_unmixed4u/openai-client/main.tsx";

const client = GreenPTClient("green-l");
```
---
description: You can use openai-client when integrating vals to an LLM
globs:
alwaysApply: false
---

TypeScript interface for interacting with OpenAI's chat models, with optional global rate limiting; uses Val Town's SQLite for persistent rate-limit tracking.

**Key Components**

- **Message Type**: Defines the structure for chat messages (role and content).
- **ChatOpenAI(model: string)**: Factory function returning an object with an `invoke(messages)` method. This method sends an array of messages to the specified OpenAI chat model and returns the assistant's response.
- **GlobalRateLimitedChatOpenAI(model: string, requestsPerSecond: number)**: Decorator for `ChatOpenAI` that enforces a global rate limit (requests per second) using a persistent SQLite table.
- **GlobalRateLimiter**: Class that implements the rate-limiting logic. It checks the number of requests in the current time window and throws an error if the limit is exceeded. It uses a table (`global_rate_limit_1`) in Val Town's SQLite.
- **ensureGlobalRateLimitTableExists**: Ensures the rate-limit tracking table exists in the database at startup.

**Usage**

- Use `ChatOpenAI(model)` for direct, unlimited access to OpenAI chat completions.
- Use `GlobalRateLimitedChatOpenAI(model, requestsPerSecond)` to enforce a global rate limit on chat completions, suitable for shared or public-facing endpoints.

**Val Town / Platform Notes**

- Uses Val Town's standard SQLite API for persistent storage.
- Designed for server-side use (no browser-specific code).
- No secrets are hardcoded; OpenAI API keys are managed by the OpenAI SDK/environment.
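The rate-limiting scheme described above can be sketched with an in-memory fixed-window limiter. This stand-in uses a plain counter instead of the SQLite table `global_rate_limit_1`, and the class name and injectable clock are illustrative, not the library's actual API:

```typescript
// Minimal sketch of a fixed-window global rate limiter with a one-second
// window, like GlobalRateLimitedChatOpenAI. The real implementation
// persists counts in Val Town SQLite; this version keeps them in memory.
export class InMemoryRateLimiter {
  private windowStart = 0;
  private count = 0;

  constructor(
    private readonly requestsPerSecond: number,
    private readonly now: () => number = Date.now, // injectable for testing
  ) {}

  /** Throws if the limit for the current one-second window is exceeded. */
  check(): void {
    const t = this.now();
    if (t - this.windowStart >= 1000) {
      // A new window has begun: reset the counter.
      this.windowStart = t;
      this.count = 0;
    }
    if (this.count >= this.requestsPerSecond) {
      throw new Error("Rate limit exceeded");
    }
    this.count++;
  }
}
```

A rate-limited `invoke(messages)` would call `check()` before forwarding the messages to the underlying chat model.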
Note: When changing a SQLite table's schema, change the table's name (e.g., add `_2` or `_3`) to create a fresh table.

### OpenAI

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
});
```
```ts
const MODEL = "gpt-realtime";

const INSTRUCTIONS = `
  Greet the user in English, and thank them for trying the new OpenAI Realtime API.
  Give them a brief summary based on the list below, and then ask if they have any
  questions. Answer questions using the information below. For questions outside
  this scope,
  - higher audio quality
  - improved handling of alphanumerics (eg, properly understanding credit card and phone numbers)
  - support for the OpenAI Prompts API
  - support for MCP-based tools
  - auto-truncation to reduce context size
`;

const VOICE = "marin";

const OPENAI_API_KEY = Deno.env.get("OPENAI_API_KEY");
if (!OPENAI_API_KEY) {
  throw new Error("🔴 OpenAI API key not configured");
}

export function makeHeaders(contentType?: string) {
  const obj: Record<string, string> = {
    Authorization: `Bearer ${OPENAI_API_KEY}`,
  };
  if (contentType) obj["Content-Type"] = contentType;
  return obj;
}
```
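For reference, a standalone version of the header builder behaves like this. The key is passed in explicitly so the sketch can run outside Deno; `buildHeaders` is an illustrative name, not the app's function:

```typescript
// Hypothetical standalone equivalent of makeHeaders, with the API key
// taken as a parameter instead of read from Deno.env.
export function buildHeaders(apiKey: string, contentType?: string) {
  const headers: Record<string, string> = {
    Authorization: `Bearer ${apiKey}`,
  };
  if (contentType) headers["Content-Type"] = contentType;
  return headers;
}
```

The bare form is what `/rtc` sends with its multipart body; the `"application/json"` form is what `/sip` uses to accept a call.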
```ts
sip.post("/", async (c) => {
  // Verify the webhook.
  const OPENAI_SIGNING_SECRET = Deno.env.get("OPENAI_SIGNING_SECRET");
  if (!OPENAI_SIGNING_SECRET) {
    console.error("🔴 webhook secret not configured");
    return c.text("Internal error", 500);
  }
  const webhook = new Webhook(OPENAI_SIGNING_SECRET);
  const bodyStr = await c.req.text();
  let callId: string | undefined;
  // ... (signature verification elided; callId comes from the verified payload)

  // Accept the call.
  const url = `https://api.openai.com/v1/realtime/calls/${callId}/accept`;
  const headers = makeHeaders("application/json");
  const body = JSON.stringify(makeSession());
```
```ts
rtc.post("/", async (c) => {
  // Create the call.
  const url = "https://api.openai.com/v1/realtime/calls";
  const headers = makeHeaders();
  const fd = new FormData();
```
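The rest of the handler presumably appends the browser's offer SDP and the session config to the form, then forwards it to OpenAI. A hedged sketch of the request-building step — the `sdp` and `session` field names are assumptions about the Realtime calls API, and `buildCallForm` is illustrative, not the app's code:

```typescript
// Sketch: assemble the multipart body for POST /v1/realtime/calls.
// Field names "sdp" and "session" are assumptions, not verified
// against this app's actual handler.
export function buildCallForm(offerSdp: string, session: object): FormData {
  const fd = new FormData();
  fd.set("sdp", offerSdp);                    // the browser's WebRTC offer
  fd.set("session", JSON.stringify(session)); // session config (model, voice, ...)
  return fd;
}

// The handler would then do roughly:
//   const resp = await fetch(url, { method: "POST", headers, body: fd });
//   const answerSdp = await resp.text(); // relayed back to the browser
```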
```ts
import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";

/**
 * Practical Implementation of Collective Content Intelligence
 * Bridging advanced AI with collaborative content creation
 */
exp
```
```ts
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function(req: Request): Promise<Response> {
  if (req.method === "OPTIONS") {
    return new Response(null, {
      headers: {
        "Access-Control-Allow-Origin": "*",
```