Connect to the Realtime API using WebSockets on a server.
WebSockets are a broadly supported API for realtime data transfer, and a great choice for connecting to the OpenAI Realtime API in server-to-server applications. In a server-to-server integration with Realtime, your backend system will connect via WebSocket directly to the Realtime API.
*Figure: connect directly to the Realtime API.*
Connect via WebSocket
---------------------
You will also need to pass an authentication header using your OpenAI API key. It is possible to use WebSocket in browsers with an ephemeral API token, as shown in the WebRTC connection guide.
```javascript
import WebSocket from "ws";

const url = "wss://api.openai.com/v1/realtime?model=gpt-realtime";
const ws = new WebSocket(url, {
  headers: {
    Authorization: "Bearer " + process.env.OPENAI_API_KEY,
  },
});
```
```python
import os
import websocket

OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")

url = "wss://api.openai.com/v1/realtime?model=gpt-realtime"
headers = ["Authorization: Bearer " + OPENAI_API_KEY]

def on_open(ws):
    print("Connected to server.")

ws = websocket.WebSocketApp(url, header=headers, on_open=on_open)
ws.run_forever()
```
In browsers, you can instead pass authentication details via WebSocket subprotocols:

```javascript
const ws = new WebSocket(
  "wss://api.openai.com/v1/realtime?model=gpt-realtime",
  [
    "realtime",
    // Auth
    "openai-insecure-api-key." + OPENAI_API_KEY,
    // Optional
    "openai-organization." + OPENAI_ORG_ID,
    "openai-project." + OPENAI_PROJECT_ID,
  ]
);
```
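Once connected, events are exchanged as JSON strings over the socket. A minimal sketch using the Node `ws` client from the first example (the `response.create` event shown here is just one example of a client event):

```javascript
ws.on("open", () => {
  // Ask the model to generate a response once the session is live
  ws.send(JSON.stringify({ type: "response.create" }));
});

ws.on("message", (data) => {
  const event = JSON.parse(data.toString());
  console.log("server event:", event.type);
});
```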
Connect to the Realtime API using WebRTC.
WebRTC is a powerful set of standard interfaces for building real-time applications. The OpenAI Realtime API supports connecting to Realtime models from browser and mobile clients over WebRTC. For browser-based voice applications, we recommend starting with the [Agents SDK for TypeScript](https://openai.github.io/openai-agents-js/). When connecting to a Realtime model from the client (like a web browser or mobile device), we recommend the following flow:
1. A browser makes a request to a developer-controlled server to mint an ephemeral API key.
2. The developer's server uses a standard OpenAI API key to request an ephemeral key from the OpenAI REST API, and returns that key to the browser.
3. The browser uses the ephemeral key to authenticate a session directly with the OpenAI Realtime API as a WebRTC peer connection.
*Figure: connect to Realtime via WebRTC.*
Initializing a peer connection
------------------------------
```javascript
// Get an ephemeral session token for the OpenAI Realtime API
const tokenResponse = await fetch("/token");
const data = await tokenResponse.json();
const EPHEMERAL_KEY = data.value; // ephemeral key minted by your server

// Create a peer connection with a data channel for events, then send an SDP offer
const pc = new RTCPeerConnection();
const dc = pc.createDataChannel("oai-events");
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);

const baseUrl = "https://api.openai.com/v1/realtime/calls";
const model = "gpt-realtime";
const sdpResponse = await fetch(`${baseUrl}?model=${model}`, {
  method: "POST",
  body: offer.sdp,
  headers: { Authorization: `Bearer ${EPHEMERAL_KEY}`, "Content-Type": "application/sdp" },
});
```
Mint an ephemeral API key
-------------------------
To create an ephemeral token to use on the client side, you will need to build a small server-side application (or integrate with an existing one) to make an OpenAI REST API request for an ephemeral key. Below is an example of a simple Node.js [express](https://expressjs.com/) server which mints an ephemeral API key using the REST API:
```javascript
// Inside an Express route handler, e.g. app.get("/token", ...)
try {
  const response = await fetch(
    "https://api.openai.com/v1/realtime/client_secrets",
    {
      method: "POST",
      headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, "Content-Type": "application/json" },
      body: JSON.stringify({ session: { type: "realtime", model: "gpt-realtime" } }),
    }
  );
  res.json(await response.json()); // return the ephemeral key payload to the browser
} catch (error) {
  res.status(500).json({ error: "Failed to generate ephemeral key" });
}
```
You can use any web server framework or language capable of making HTTP requests. Just ensure that **you only use standard OpenAI API keys on the server, not in the browser.**
Sending and receiving events
----------------------------
Check out the WebRTC Realtime API in this lightweight example app.
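Using the `dc` data channel created during peer connection setup above, client events are sent as JSON strings and server events arrive on the channel's `message` event. A minimal sketch (`response.create` is one example of a client event):

```javascript
// Listen for server events arriving over the data channel
dc.addEventListener("message", (e) => {
  const event = JSON.parse(e.data);
  console.log("server event:", event.type);
});

// Once the channel opens, send a client event asking the model to respond
dc.addEventListener("open", () => {
  dc.send(JSON.stringify({ type: "response.create" }));
});
```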
In addition to tools you make available to the model with function calling, the Responses API can give models access to tools hosted by third parties in two ways:

* **Connectors** are OpenAI-maintained MCP wrappers for popular services like Google Workspace.
* **Remote MCP servers** can be any server on the public Internet that implements the remote [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) specification.
For example, to give the model access to a remote MCP server (the public DeepWiki MCP server is used here for illustration):

```bash
curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-5",
    "tools": [{"type": "mcp", "server_label": "deepwiki", "server_url": "https://mcp.deepwiki.com/mcp", "require_approval": "never"}],
    "input": "What transport protocols are supported in the MCP spec?"
  }'
```

```javascript
import OpenAI from "openai";
const client = new OpenAI();

const resp = await client.responses.create({
  model: "gpt-5",
  tools: [{ type: "mcp", server_label: "deepwiki", server_url: "https://mcp.deepwiki.com/mcp", require_approval: "never" }],
  input: "What transport protocols are supported in the MCP spec?",
});
console.log(resp.output_text);
```

```python
from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="gpt-5",
    tools=[{"type": "mcp", "server_label": "deepwiki", "server_url": "https://mcp.deepwiki.com/mcp", "require_approval": "never"}],
    input="What transport protocols are supported in the MCP spec?",
)
print(resp.output_text)
```
#### Approvals
By default, OpenAI will request your approval before any data is shared with a connector or remote MCP server. Approvals help you maintain control over what data is sent to these servers; for servers you trust, you can skip them by setting `require_approval` to `"never"` on the tool.
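A minimal sketch of that round trip with the Node SDK, reusing the DeepWiki server from the example above: the model emits an `mcp_approval_request` output item, and you approve it by sending back an `mcp_approval_response` input item that references the request's `id`:

```javascript
import OpenAI from "openai";
const client = new OpenAI();

const mcpTool = {
  type: "mcp",
  server_label: "deepwiki",
  server_url: "https://mcp.deepwiki.com/mcp",
  require_approval: "always",
};

// First turn: the model pauses and asks for approval before calling the MCP tool
const first = await client.responses.create({
  model: "gpt-5",
  tools: [mcpTool],
  input: "What transport protocols are supported in the MCP spec?",
});
const approvalRequest = first.output.find((item) => item.type === "mcp_approval_request");

// Second turn: approve the request so the tool call can proceed
if (approvalRequest) {
  const second = await client.responses.create({
    model: "gpt-5",
    previous_response_id: first.id,
    tools: [mcpTool],
    input: [{ type: "mcp_approval_response", approve: true, approval_request_id: approvalRequest.id }],
  });
  console.log(second.output_text);
}
```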
The Responses API has built-in support for a limited set of connectors to third-party services.
These connectors can be used in the same way as remote MCP servers. Both let an OpenAI model access additional third-party tools and data in the Responses API.
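A rough sketch of configuring a connector on the `mcp` tool: instead of a `server_url`, you pass a `connector_id` plus an OAuth access token for the underlying service (the Google Drive connector ID and the token variable below are assumptions for illustration):

```javascript
import OpenAI from "openai";
const client = new OpenAI();

// OAuth access token for the user's Google Drive account, obtained by your app
const GOOGLE_DRIVE_OAUTH_TOKEN = process.env.GOOGLE_DRIVE_OAUTH_TOKEN;

const resp = await client.responses.create({
  model: "gpt-5",
  tools: [{
    type: "mcp",
    server_label: "google_drive",
    connector_id: "connector_googledrive", // assumed connector ID, for illustration
    authorization: GOOGLE_DRIVE_OAUTH_TOKEN,
    require_approval: "never",
  }],
  input: "Summarize the latest design doc in my Drive.",
});
console.log(resp.output_text);
```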
### Available connectors
Risks and safety
----------------
The MCP tool lets you connect OpenAI models to external services. This is a powerful feature that comes with some risks. For connectors, there is a risk of potentially sending sensitive data to OpenAI, or allowing models read access to potentially sensitive data in those services. Remote MCP servers carry those same risks, but also have not been verified by OpenAI. These servers can allow models to access, send, and receive data from services you may not fully trust.
If you come across a malicious MCP server, please report it to `security@openai.com`.
Below are some best practices to consider when integrating connectors and remote MCP servers.
We also recommend logging any data sent to MCP servers. If you're using the Responses API with `store=true`, this data is already logged via the API for 30 days unless Zero Data Retention is enabled for your organization.
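As a sketch of what that logging could look like with the SDK, assuming `resp` is a response created as in the examples above (MCP tool calls appear in the output as `mcp_call` items):

```javascript
// Log every MCP tool call the model made in this response
for (const item of resp.output) {
  if (item.type === "mcp_call") {
    console.log(`[mcp] server=${item.server_label} tool=${item.name} args=${item.arguments}`);
  }
}
```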
A malicious MCP server could include hidden instructions (prompt injections) designed to make OpenAI models behave unexpectedly. While models are trained to resist these attacks, you should carefully review the tools and data a third-party server exposes before connecting to it. MCP servers may also update tool behavior unexpectedly, potentially leading to unintended or malicious behavior.
The MCP tool is compatible with Zero Data Retention and Data Residency, but it's important to note that MCP servers are third-party services, and any data sent to them is subject to their own data retention and data residency policies. For example, if you're an organization with Data Residency in Europe, OpenAI will limit inference and storage of customer content to Europe up until the point that data is sent to the MCP server.
Usage notes
```typescript
const VOICE = "marin";
const OPENAI_API_KEY = Deno.env.get("OPENAI_API_KEY");
if (!OPENAI_API_KEY) {
  throw new Error("🔴 OpenAI API key not configured");
}

// Build auth headers for calls to the OpenAI API
export function makeHeaders(contentType?: string) {
  const obj: Record<string, string> = {
    Authorization: `Bearer ${OPENAI_API_KEY}`,
  };
  if (contentType) obj["Content-Type"] = contentType;
  return obj;
}
```
# hello-realtime-video
Hello Realtime is a complete OpenAI Realtime application that supports WebRTC users. You can access the app via WebRTC at https://hello-realtime-video.val.run; active calls can also be observed over a websocket interface.

If you remix the app, you'll just need to pop in your own OPENAI_API_KEY (from platform.openai.com).
```typescript
observer.post("/:callId", async (c) => {
  const callId = c.req.param("callId");
  const url = `wss://api.openai.com/v1/realtime?call_id=${callId}`;
  const ws = new WebSocket(url, { headers: makeHeaders() });
  ws.on("open", () => {
    console.log(`observer attached to call ${callId}`);
  });
  // ... relay events to the observing client ...
  return c.text("ok");
});
```
```typescript
const MODEL = "gpt-realtime";
const INSTRUCTIONS = `
Greet the user in English, and thank them for trying the new OpenAI Realtime API.
Give them a brief summary based on the list below, and then ask if they have any questions.
Answer questions using the information below. For questions outside this scope,
politely say that it's outside the scope of this demo.
- higher audio quality
- improved handling of alphanumerics (eg, properly understanding credit card and phone numbers)
- support for the OpenAI Prompts API
- support for MCP-based tools
- auto-truncation to reduce context size
`;
```
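The call-handling routes below reference a `makeSession()` helper that isn't shown in these snippets. A minimal sketch of what it might return, assuming the GA Realtime session shape (`type`, `model`, `instructions`, and a voice under `audio.output`):

```javascript
// Hypothetical helper: session config used when creating or accepting a call
export function makeSession() {
  return {
    type: "realtime",
    model: MODEL,
    instructions: INSTRUCTIONS,
    audio: { output: { voice: VOICE } },
  };
}
```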
```typescript
sip.post("/", async (c) => {
  // Verify the webhook.
  const OPENAI_SIGNING_SECRET = Deno.env.get("OPENAI_SIGNING_SECRET");
  if (!OPENAI_SIGNING_SECRET) {
    console.error("🔴 webhook secret not configured");
    return c.text("Internal error", 500);
  }
  const webhook = new Webhook(OPENAI_SIGNING_SECRET);
  const bodyStr = await c.req.text();
  let callId: string | undefined;
  try {
    // Verify the signature (standardwebhooks-style API assumed) and pull out the call ID
    const event = webhook.verify(bodyStr, c.req.header()) as { data?: { call_id?: string } };
    callId = event.data?.call_id;
  } catch {
    return c.text("Invalid signature", 400);
  }

  // Accept the call.
  const url = `https://api.openai.com/v1/realtime/calls/${callId}/accept`;
  const headers = makeHeaders("application/json");
  const body = JSON.stringify(makeSession());
  await fetch(url, { method: "POST", headers, body });
  return c.text("ok");
});
```
```typescript
rtc.post("/", async (c) => {
  // Create the call.
  const url = "https://api.openai.com/v1/realtime/calls";
  const headers = makeHeaders();
  const fd = new FormData();
  fd.set("sdp", await c.req.text()); // client's SDP offer
  fd.set("session", JSON.stringify(makeSession()));
  const resp = await fetch(url, { method: "POST", headers, body: fd });
  return c.text(await resp.text(), 201); // SDP answer for the client
});
```