# hello-realtime

Hello Realtime is an OpenAI Realtime app that supports both WebRTC and SIP (telephone) users. You can access the app via WebRTC at https://hello-realtime.val.run, or via SIP by calling 425-800-0042. Session activity is observed over a server-side websocket interface.

If you remix the app, you'll just need to pop in your own OPENAI_API_KEY (from platform.openai.com) and, if you want SIP, the OPENAI_SIGNING_SECRET.

## Architecture

1. **WebRTC Flow**: Browser connects to frontend → creates WebRTC offer → `/rtc` endpoint handles SDP negotiation with OpenAI → observer monitors session (see the client-side sketch below)
2. **SIP Flow**: Phone call triggers webhook → `/sip` endpoint verifies and accepts the call → observer monitors session
3. **Monitoring**: Observer establishes a WebSocket connection to OpenAI for real-time session logging

## Project Structure

- Main HTTP entrypoint - handles routing and serves the frontend
- [`routes/observer.ts`](./routes/observer.ts) - WebSocket observer that connects to OpenAI's realtime API for session monitoring and logging
- [`routes/rtc.ts`](./routes/rtc.ts) - WebRTC endpoint that handles SDP offer/answer negotiation with OpenAI's realtime API
- [`routes/sip.ts`](./routes/sip.ts) - SIP webhook endpoint that verifies and accepts incoming phone calls via OpenAI's telephony integration
- [`routes/utils.ts`](./routes/utils.ts) - Shared utilities including OpenAI API configuration, session setup, and voice agent instructions
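For orientation, here is a minimal sketch of the client side of the WebRTC flow. This is not the app's actual frontend code: the `/rtc` endpoint comes from the README above, while the function name, media handling, and content type are illustrative assumptions.

```ts
// Minimal browser-side sketch of the WebRTC flow described above.
// Assumes /rtc accepts a raw SDP offer and responds with the SDP answer.
async function connectRealtime(): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection();

  // Send microphone audio to the model and play back whatever it returns.
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  mic.getTracks().forEach((track) => pc.addTrack(track, mic));
  pc.ontrack = (e) => {
    const audio = new Audio();
    audio.srcObject = e.streams[0];
    audio.play();
  };

  // Create the SDP offer and hand it to the server for negotiation with OpenAI.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const resp = await fetch("/rtc", {
    method: "POST",
    headers: { "Content-Type": "application/sdp" },
    body: offer.sdp,
  });

  // Apply the SDP answer returned by the /rtc endpoint.
  await pc.setRemoteDescription({ type: "answer", sdp: await resp.text() });
  return pc;
}
```

The server's `/rtc` route (see `routes/rtc.ts` below) relays this offer to OpenAI and returns the answer SDP to the browser.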
```ts
// routes/observer.ts (excerpt): attach a logging observer to an active call.
observer.post("/:callId", async (c) => {
  const callId = c.req.param("callId");
  const url = `wss://api.openai.com/v1/realtime?call_id=${callId}`;
  // Node-style "ws" WebSocket, so auth headers can be supplied on connect.
  const ws = new WebSocket(url, { headers: makeHeaders() });
  ws.on("open", () => {
    console.log(`observer connected for call ${callId}`);
  });
  // ...rest of the handler (message logging, cleanup) elided...
});
```
```html
<!-- index.html (excerpt): frontend page head -->
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>OpenAI Realtime API Voice Agent</title>
<style>
  :root {
    /* ...theme variables and the rest of the stylesheet... */
  }
</style>
```
```ts
// routes/utils.ts (excerpt): shared OpenAI API configuration.
const OPENAI_API_KEY = Deno.env.get("OPENAI_API_KEY");
if (!OPENAI_API_KEY) {
  throw new Error("🔴 OpenAI API key not configured");
}

// Build request headers for calls to the OpenAI API.
export function makeHeaders(contentType?: string) {
  const obj: Record<string, string> = {
    Authorization: `Bearer ${OPENAI_API_KEY}`,
  };
  if (contentType) obj["Content-Type"] = contentType;
  return obj;
}
```
```ts
// routes/rtc.ts (excerpt): forward the browser's SDP offer to OpenAI.
const realtimeUrl = "https://api.openai.com/v1/realtime/calls";
const formData = new FormData();
formData.set("sdp", await c.req.text());
// (The offer is then POSTed to realtimeUrl and the SDP answer is returned to the browser.)
```
```ts
// Create the call.
const url = "https://api.openai.com/v1/realtime/calls";
const headers = makeHeaders();
const fd = new FormData();
// (The SDP and session config are appended to fd and POSTed to url, as in routes/rtc.ts.)
```
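The SIP flow works in the opposite direction: OpenAI calls the `/sip` webhook when a phone call arrives. The following is a hypothetical sketch only, assuming a Hono-style router, OpenAI's documented `realtime.call.incoming` webhook event, and an accept endpoint at `/v1/realtime/calls/{call_id}/accept`; the model name and session body are placeholders, and signature verification against `OPENAI_SIGNING_SECRET` is indicated but omitted.

```ts
// Hypothetical sketch of the SIP webhook handler described in the README.
// Assumes a Hono-style router (`sip`) and the makeHeaders() helper from routes/utils.ts.
sip.post("/", async (c) => {
  const payload = await c.req.text();

  // TODO: verify the webhook signature headers against OPENAI_SIGNING_SECRET
  // before trusting the payload (omitted in this sketch).

  const event = JSON.parse(payload);
  if (event.type !== "realtime.call.incoming") return c.text("ignored", 200);

  // Accept the call; OpenAI then bridges the phone audio into a realtime session.
  const callId = event.data.call_id;
  await fetch(`https://api.openai.com/v1/realtime/calls/${callId}/accept`, {
    method: "POST",
    headers: makeHeaders("application/json"),
    body: JSON.stringify({ type: "realtime", model: "gpt-realtime" }),
  });

  return c.text("ok", 200);
});
```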