Connect to the Realtime API using WebSockets on a server.

[WebSockets](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) are a broadly supported API for realtime data transfer, and a great choice for connecting to the OpenAI Realtime API in server-to-server applications. For browser and mobile clients, we recommend connecting via [WebRTC](/docs/guides/realtime-webrtc).

In a server-to-server integration with Realtime, your backend system connects via WebSocket directly to the Realtime API. You can use a [standard API key](/settings/organization/api-keys) to authenticate this connection, since the token is only available on your secure backend server.

Connect via WebSocket
---------------------

Below are several examples of connecting via WebSocket to the Realtime API. In addition to using the WebSocket URL below, you will also need to pass an authentication header using your OpenAI API key.

It is possible to use WebSocket in browsers with an ephemeral API token as shown in the [WebRTC connection guide](/docs/guides/realtime-webrtc), but if you are connecting from a client like a browser or mobile app, WebRTC will be a more robust solution in most cases.

```javascript
import WebSocket from "ws";

const url = "wss://api.openai.com/v1/realtime?model=gpt-realtime";
const ws = new WebSocket(url, {
  headers: {
    Authorization: "Bearer " + process.env.OPENAI_API_KEY,
  },
});
```

```python
import os
import websocket

OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")

url = "wss://api.openai.com/v1/realtime?model=gpt-realtime"
headers = ["Authorization: Bearer " + OPENAI_API_KEY]

def on_open(ws):
    print("Connected to server.")

ws = websocket.WebSocketApp(url, header=headers, on_open=on_open)
ws.run_forever()
```

In browsers, where custom headers are unavailable, authentication details can be passed as WebSocket subprotocols instead:

```javascript
const ws = new WebSocket(
  "wss://api.openai.com/v1/realtime?model=gpt-realtime",
  [
    "realtime",
    // Auth
    "openai-insecure-api-key." + OPENAI_API_KEY,
    // Optional
    "openai-organization." + OPENAI_ORG_ID,
    "openai-project." + OPENAI_PROJECT_ID,
  ]
);
```
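Once connected, you exchange JSON-encoded events with the API over the socket. Below is a minimal sketch of that exchange; the `response.create` event type is from the Realtime API, while the instructions text and handler wiring are illustrative assumptions:

```javascript
// Ask the model to generate a response once the socket is open.
const event = {
  type: "response.create",
  response: {
    instructions: "Say hello to the user.", // illustrative prompt
  },
};

// Server events also arrive as JSON strings.
function handleMessage(data) {
  const serverEvent = JSON.parse(data);
  console.log("received:", serverEvent.type);
}

// On a live connection (using the `ws` object from the examples above),
// you would wire these up roughly like this:
// ws.on("open", () => ws.send(JSON.stringify(event)));
// ws.on("message", (data) => handleMessage(data));
```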
Connect to the Realtime API using WebRTC.

[WebRTC](https://webrtc.org/) is a powerful set of standard interfaces for building real-time applications. The OpenAI Realtime API supports connecting to realtime models through a WebRTC peer connection.

For browser-based speech-to-speech voice applications, we recommend starting with the [Agents SDK for TypeScript](https://openai.github.io/openai-agents-js/guides/voice-agents/quickstart/), which provides higher-level helpers and APIs for managing Realtime sessions. The WebRTC interface is powerful and flexible, but lower level than the Agents SDK.

When connecting to a Realtime model from the client (like a web browser or mobile device), we recommend using WebRTC rather than WebSocket for more consistent performance. The connection flow looks like this:

1. A browser makes a request to a developer-controlled server to mint an ephemeral API key.
2. The developer's server uses a [standard API key](/settings/organization/api-keys) to request an ephemeral key from the [OpenAI REST API](/docs/api-reference/realtime-sessions), and returns that new key to the browser.
3. The browser uses the ephemeral key to authenticate a session directly with the OpenAI Realtime API as a [WebRTC peer connection](https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection).

Initializing a peer connection
------------------------------

```javascript
// Get a session token for OpenAI Realtime API
const tokenResponse = await fetch("/token");
const data = await tokenResponse.json();
const EPHEMERAL_KEY = data.value;

// Create a peer connection and start the session using an SDP offer
const pc = new RTCPeerConnection();
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);

const baseUrl = "https://api.openai.com/v1/realtime/calls";
const model = "gpt-realtime";
const sdpResponse = await fetch(`${baseUrl}?model=${model}`, {
  method: "POST",
  body: offer.sdp,
  headers: {
    Authorization: `Bearer ${EPHEMERAL_KEY}`,
    "Content-Type": "application/sdp",
  },
});

// Apply the answer SDP returned by the API
const answer = { type: "answer", sdp: await sdpResponse.text() };
await pc.setRemoteDescription(answer);
```

Creating an ephemeral token
---------------------------

To create an ephemeral token to use on the client side, you will need to build a small server-side application (or integrate with an existing one) to make an [OpenAI REST API](/docs/api-reference/realtime-sessions) request for an ephemeral key. You will use a [standard API key](/settings/organization/api-keys) to authenticate this request on your backend server.

Below is an example of a simple Node.js [express](https://expressjs.com/) server which mints an ephemeral API key using the REST API:

```javascript
import express from "express";

const app = express();

// An endpoint which returns an ephemeral API key for the client to use
app.get("/token", async (req, res) => {
  try {
    const response = await fetch(
      "https://api.openai.com/v1/realtime/client_secrets",
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          session: { type: "realtime", model: "gpt-realtime" },
        }),
      }
    );
    const data = await response.json();
    res.json(data);
  } catch (error) {
    res.status(500).json({ error: "Failed to generate token" });
  }
});

app.listen(3000);
```

You can create a server endpoint like this one on any platform that can send and receive HTTP requests. Just ensure that **you only use standard OpenAI API keys on the server, not in the browser.**

Sending and receiving events
----------------------------

Check out the WebRTC Realtime API in this lightweight example app: [openai-realtime-console](https://github.com/openai/openai-realtime-console).
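Once the peer connection is established, Realtime API events are exchanged as JSON over a WebRTC data channel. Below is a minimal sketch of that wiring; the `oai-events` channel name and `response.create` event type are from the Realtime API, while the wrapper function and logging are illustrative:

```javascript
// A client event asking the model to respond.
const clientEvent = {
  type: "response.create",
};

// Wire up event exchange on an existing RTCPeerConnection (browser API).
function wireEvents(pc) {
  const dc = pc.createDataChannel("oai-events");
  dc.addEventListener("open", () => {
    // Client events are sent as JSON strings over the channel.
    dc.send(JSON.stringify(clientEvent));
  });
  dc.addEventListener("message", (e) => {
    // Server events arrive the same way.
    const serverEvent = JSON.parse(e.data);
    console.log("received:", serverEvent.type);
  });
  return dc;
}
```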
In addition to tools you make available to the model with [function calling](/docs/guides/function-calling), you can give models new capabilities using **connectors** and **remote MCP servers**. These tools give the model the ability to connect to and control external services when needed to respond to a user's prompt. Tool calls can either be allowed automatically, or restricted with explicit approval required by you as the developer.

* **Connectors** are OpenAI-maintained MCP wrappers for popular services like Google Workspace or Dropbox, similar to the connectors available in [ChatGPT](https://chatgpt.com).
* **Remote MCP servers** can be any server on the public Internet that implements a remote [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) server.

#### Approvals

By default, OpenAI will request your approval before any data is shared with a connector or remote MCP server.
Approvals help you maintain control and visibility over what data is being sent to an MCP server. We highly recommend that you carefully review (and optionally log) all data being shared with a remote MCP server. A request for approval to make an MCP tool call creates an `mcp_approval_request` item in the Response's output.

#### Connectors

The Responses API has built-in support for a limited set of connectors to third-party services. These connectors let you pull in context from popular applications, like Dropbox and Gmail, so the model can interact with those services.

Connectors can be used in the same way as remote MCP servers. Both let an OpenAI model access additional third-party tools in an API request.
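To make the shapes above concrete, here is a hedged sketch of a Responses API request that attaches a remote MCP server as a tool, together with the approval handshake: the `mcp_approval_request` item the API emits, and the `mcp_approval_response` item you send back to allow the call. The server label, URL, and IDs are hypothetical placeholders, not real services:

```javascript
// A request body attaching a remote MCP server as a tool.
// server_label and server_url are hypothetical placeholders.
const request = {
  model: "gpt-5",
  tools: [
    {
      type: "mcp",
      server_label: "example_mcp",
      server_url: "https://example.com/mcp",
      require_approval: "always", // ask before every tool call
    },
  ],
  input: "What tools does this MCP server expose?",
};

// When approval is required, the response output contains an item like:
const approvalRequest = {
  type: "mcp_approval_request",
  id: "mcpr_123", // placeholder id
  server_label: "example_mcp",
  name: "some_tool",
  arguments: "{}",
};

// To allow the call, include an item like this in the input of a
// follow-up request:
const approvalResponse = {
  type: "mcp_approval_response",
  approval_request_id: approvalRequest.id,
  approve: true,
};
```

With the OpenAI SDK, `client.responses.create(request)` would send the first request; the follow-up request carries `approvalResponse` in its `input`, chained to the original via `previous_response_id`.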
However, instead of passing a `server_url` as you would to call a remote MCP server, you pass a `connector_id`, which uniquely identifies a connector available in the API.

### Available connectors

Risks and safety
----------------

The MCP tool permits you to connect OpenAI models to external services. This is a powerful feature that comes with some risks.

For connectors, there is a risk of sending sensitive data to OpenAI, or allowing models read access to potentially sensitive data in those services.

Remote MCP servers carry those same risks, but have not been verified by OpenAI. These servers can allow models to access, send, and receive data, and take action in these services. All MCP servers are third-party services that are subject to their own terms and conditions.

If you come across a malicious MCP server, please report it to `security@openai.com`.

Below are some best practices to consider when integrating connectors and remote MCP servers.

We also recommend logging any data sent to MCP servers. If you're using the Responses API with `store=true`, this data is already logged via the API for 30 days unless Zero Data Retention is enabled for your organization. You may also want to log this data in your own systems and perform periodic reviews to ensure data is being shared according to your expectations.

Malicious MCP servers may include hidden instructions (prompt injections) designed to make OpenAI models behave unexpectedly.
While OpenAI has implemented built-in safeguards to help detect and block these threats, it's essential to carefully review inputs and outputs, and to establish connections only with trusted servers.

MCP servers may update tool behavior unexpectedly, potentially leading to unintended or malicious behavior.

The MCP tool is compatible with Zero Data Retention and Data Residency, but note that MCP servers are third-party services, and data sent to an MCP server is subject to their data retention and data residency policies.

In other words, if you're an organization with Data Residency in Europe, OpenAI will limit inference and storage of Customer Content to Europe up until the point communication or data is sent to the MCP server. It is your responsibility to ensure that the MCP server also adheres to any Zero Data Retention or Data Residency requirements you may have. Learn more about Zero Data Retention and Data Residency [here](/docs/guides/your-data).

Usage notes
-----------
```typescript
const VOICE = "marin";

const OPENAI_API_KEY = Deno.env.get("OPENAI_API_KEY");
if (!OPENAI_API_KEY) {
  throw new Error("🔴 OpenAI API key not configured");
}

export function makeHeaders(contentType?: string) {
  const obj: Record<string, string> = {
    Authorization: `Bearer ${OPENAI_API_KEY}`,
  };
  if (contentType) obj["Content-Type"] = contentType;
  return obj;
}
```
# hello-realtime-video

Hello Realtime is a complete OpenAI Realtime application that supports WebRTC users. You can access the app via WebRTC at https://hello-realtime-video.val.run.

If you remix the app, you'll just need to pop in your own OPENAI_API_KEY (from platform.openai.com).
```typescript
observer.post("/:callId", async (c) => {
  const callId = c.req.param("callId");
  const url = `wss://api.openai.com/v1/realtime?call_id=${callId}`;
  const ws = new WebSocket(url, { headers: makeHeaders() });
  ws.on("open", () => {
    // Attached to the call; from here the observer can log session events.
    console.log("observer connected", callId);
  });
});
```
```typescript
const MODEL = "gpt-realtime";

const INSTRUCTIONS = `
Greet the user in English, and thank them for trying the new OpenAI Realtime
API. Give them a brief summary based on the list below, and then ask if they
have any questions. Answer questions using the information below. For
questions outside this scope,

- higher audio quality
- improved handling of alphanumerics (eg, properly understanding credit card
  and phone numbers)
- support for the OpenAI Prompts API
- support for MCP-based tools
- auto-truncation to reduce context size
`;
```
```typescript
sip.post("/", async (c) => {
  // Verify the webhook.
  const OPENAI_SIGNING_SECRET = Deno.env.get("OPENAI_SIGNING_SECRET");
  if (!OPENAI_SIGNING_SECRET) {
    console.error("🔴 webhook secret not configured");
    return c.text("Internal error", 500);
  }
  const webhook = new Webhook(OPENAI_SIGNING_SECRET);
  const bodyStr = await c.req.text();
  let callId: string | undefined;

  // Accept the call.
  const url = `https://api.openai.com/v1/realtime/calls/${callId}/accept`;
  const headers = makeHeaders("application/json");
  const body = JSON.stringify(makeSession());
```
```typescript
rtc.post("/", async (c) => {
  // Create the call.
  const url = "https://api.openai.com/v1/realtime/calls";
  const headers = makeHeaders();
  const fd = new FormData();
```