Search

3,266 results found for "openai" (2383ms)

Code (3,171)

// Create the call.
const url = "https://api.openai.com/v1/realtime/calls";
const headers = makeHeaders();
const fd = new FormData();
sip.post("/", async (c) => {
// Verify the webhook.
const OPENAI_SIGNING_SECRET = Deno.env.get("OPENAI_SIGNING_SECRET");
if (!OPENAI_SIGNING_SECRET) {
console.error("🔓 webhook secret not configured");
return c.text("Internal error", 500);
}
const webhook = new Webhook(OPENAI_SIGNING_SECRET);
const bodyStr = await c.req.text();
let callId: string | undefined;
// Accept the call.
const url = `https://api.openai.com/v1/realtime/calls/${callId}/accept`;
const headers = makeHeaders("application/json");
const body = JSON.stringify(makeSession());
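Read together, these fragments outline the SIP webhook flow: verify the signature on the incoming webhook, pull the call ID out of the incoming-call event, then POST a session configuration to the accept endpoint. Below is a rough sketch of how the pieces could fit, reusing the `sip` router, `Webhook`, `makeHeaders`, and `makeSession` from the excerpts; the `webhook.verify(body, headers)` signature and the `data.call_id` field on the event payload are assumptions, not taken from the code above.

```ts
// Sketch only: verify the webhook, extract the call ID, accept the call.
// The verify(payload, headers) signature and event.data.call_id shape are
// assumptions; sip, Webhook, makeHeaders, and makeSession come from the
// surrounding excerpts.
sip.post("/", async (c) => {
  const OPENAI_SIGNING_SECRET = Deno.env.get("OPENAI_SIGNING_SECRET");
  if (!OPENAI_SIGNING_SECRET) {
    console.error("🔓 webhook secret not configured");
    return c.text("Internal error", 500);
  }
  const webhook = new Webhook(OPENAI_SIGNING_SECRET);
  const bodyStr = await c.req.text();

  let callId: string | undefined;
  try {
    webhook.verify(bodyStr, c.req.header()); // throws if the signature is bad
  } catch {
    return c.text("Invalid signature", 400);
  }
  const event = JSON.parse(bodyStr);
  callId = event?.data?.call_id;
  if (!callId) return c.text("Missing call_id", 400);

  // Accept the call with a session configuration.
  const res = await fetch(
    `https://api.openai.com/v1/realtime/calls/${callId}/accept`,
    {
      method: "POST",
      headers: makeHeaders("application/json"),
      body: JSON.stringify(makeSession()),
    },
  );
  return c.text(res.ok ? "OK" : "Accept failed", res.ok ? 200 : 500);
});
```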
# hello-mcp
**Hello MCP** is a remix of OpenAI's hello-realtime demo that adds Model Context Protocol (MCP) support.
This demo showcases:
- WebRTC-based voice conversations with OpenAI's Realtime API
- **MCP (Model Context Protocol)** for secure server-side tool execution
- A demo `getFavoriteFood` tool to demonstrate MCP functionality
- Toggle between standard and MCP-enabled modes
If you remix the app, you'll just need to pop in your own `OPENAI_API_KEY` (from
[platform.openai.com](https://platform.openai.com)).
## What's New: MCP Support
1. **Standard Flow**:
   - Browser connects to frontend
   - Creates WebRTC offer
   - `/rtc` endpoint handles SDP negotiation with OpenAI
   - Observer established to monitor session
2. **MCP-Enhanced Flow** (when enabled):
   - Same WebRTC setup as above
   - OpenAI discovers tools via MCP protocol (`/mcp`)
   - Tools execute server-side when invoked
   - Results returned through MCP protocol
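To make the MCP leg of that flow concrete, here is a rough sketch of what a `/mcp` endpoint answering the two JSON-RPC methods involved (`tools/list` and `tools/call`) could look like for the demo's `getFavoriteFood` tool. It assumes a Hono router, omits the MCP handshake and error handling, and is an illustration, not this app's actual implementation.

```ts
import { Hono } from "npm:hono";

// Illustrative /mcp endpoint: answers the two MCP JSON-RPC methods the
// Realtime API needs for tool use. Handshake, sessions, and error handling
// are omitted; the tool's answer is made up for the demo.
const mcp = new Hono();

mcp.post("/", async (c) => {
  const rpc = await c.req.json();

  if (rpc.method === "tools/list") {
    return c.json({
      jsonrpc: "2.0",
      id: rpc.id,
      result: {
        tools: [{
          name: "getFavoriteFood",
          description: "Returns the user's favorite food.",
          inputSchema: { type: "object", properties: {} },
        }],
      },
    });
  }

  if (rpc.method === "tools/call" && rpc.params?.name === "getFavoriteFood") {
    return c.json({
      jsonrpc: "2.0",
      id: rpc.id,
      result: { content: [{ type: "text", text: "Pizza, apparently." }] },
    });
  }

  return c.json({
    jsonrpc: "2.0",
    id: rpc.id ?? null,
    error: { code: -32601, message: "Method not found" },
  });
});

export default mcp;
```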
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>OpenAI Realtime API Voice Agent</title>
<style>
:root {
import { fetchText } from "https://esm.town/v/stevekrouse/fetchText?v=6";
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
export default async function (req: Request) {
);
const completion = await openai.chat.completions.create({
model: "gpt-5-nano",
response_format: {
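For context, a complete minimal val along the lines of that excerpt might look like the sketch below. The query handling, prompt, and `json_object` response format are illustrative assumptions; only the `std/openai` import, client setup, and `gpt-5-nano` call shape come from the excerpt.

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

// Minimal HTTP val sketch: ask the model for a JSON answer and return it.
// The prompt and request handling are illustrative, not the original val's.
export default async function (req: Request) {
  const q = new URL(req.url).searchParams.get("q") ?? "Say hello as JSON.";

  const completion = await openai.chat.completions.create({
    model: "gpt-5-nano",
    response_format: { type: "json_object" },
    messages: [
      { role: "system", content: "Reply with a single JSON object." },
      { role: "user", content: q },
    ],
  });

  return new Response(completion.choices[0].message.content ?? "{}", {
    headers: { "Content-Type": "application/json" },
  });
}
```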
const REALTIME_BASE_URL = "https://api.openai.com/v1/realtime";
const OPENAI_API_KEY = Deno.env.get("OPENAI_API_KEY");
if (!OPENAI_API_KEY) {
throw new Error("🔓 OpenAI API key not configured");
}
export function makeHeaders(contentType?: string) {
const obj: Record<string, string> = {
Authorization: `Bearer ${OPENAI_API_KEY}`,
};
if (contentType) obj["Content-Type"] = contentType;
return obj;
}
const VOICE = "marin";
# hello-realtime-video
Hello Realtime is a complete OpenAI Realtime application that supports WebRTC
users. You can access the app via WebRTC at
https://hello-realtime-video.val.run, or via the websocket interface.
If you remix the app, you'll just need to pop in your own OPENAI_API_KEY (from
platform.openai.com).
observer.post("/:callId", async (c) => {
const callId = c.req.param("callId");
const url = `wss://api.openai.com/v1/realtime?call_id=${callId}`;
const ws = new WebSocket(url, { headers: makeHeaders() });
ws.on("open", () => {
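The observer handler is cut off above. A sketch of how it might continue is shown below, assuming the npm `ws` client (which accepts a `headers` option and exposes `.on(...)`); the `observer` router and `makeHeaders` helper are the ones from the surrounding excerpts, and the logging is purely illustrative.

```ts
import WebSocket from "npm:ws";

// Illustrative continuation of the observer: attach to the call over the
// realtime WebSocket and log whatever server events arrive. What the real
// val does with these events is not shown in the excerpt.
observer.post("/:callId", async (c) => {
  const callId = c.req.param("callId");
  const url = `wss://api.openai.com/v1/realtime?call_id=${callId}`;
  const ws = new WebSocket(url, { headers: makeHeaders() });

  ws.on("open", () => console.log(`observer attached to call ${callId}`));
  ws.on("message", (data) => {
    const event = JSON.parse(data.toString());
    console.log("realtime event:", event.type);
  });
  ws.on("close", () => console.log(`observer detached from call ${callId}`));

  return c.text("observer started");
});
```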