Search

3,267 results found for "openai" (1504ms)

# hello-realtime
**Hello Realtime** is an OpenAI Realtime app that supports both WebRTC and SIP
(telephone) users. You can access the app via WebRTC at
[hello-realtime.val.run](https://hello-realtime.val.run), or via SIP by calling 425-800-0042. An observer monitors each session through a
server-side websocket interface.
If you remix the app, you'll just need to pop in your own `OPENAI_API_KEY` (from
[platform.openai.com](https://platform.openai.com)), and if you want SIP, the `OPENAI_SIGNING_SE
## Architecture
1. **WebRTC Flow**:
   - Browser connects to frontend
   - Creates WebRTC offer
   - `/rtc` endpoint handles SDP negotiation with OpenAI
   - Observer established to monitor session
2. **SIP Flow**:

```typescript
observer.post("/:callId", async (c) => {
  const callId = c.req.param("callId");
  const url = `wss://api.openai.com/v1/realtime?call_id=${callId}`;
  const ws = new WebSocket(url, { headers: makeHeaders() });
  ws.on("open", () => {
    console.log(`observer connected for call ${callId}`);
  });
  return c.text("ok");
});
```
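The `makeHeaders()` helper used above isn't shown in this excerpt; a minimal sketch, assuming it simply attaches the API key as a bearer token (the actual body is not confirmed by the source):

```typescript
// Hypothetical makeHeaders(): the excerpt never shows its body, so this
// assumes it just builds an Authorization header from the API key.
function makeHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
  };
}
// e.g. makeHeaders(Deno.env.get("OPENAI_API_KEY")!)
```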
```typescript
// @ts-ignore
import { OpenAI } from "https://esm.town/v/std/openai?v=4";

// --- AI BEHAVIORAL GUIDELINES ---
if (req.method === "POST") {
  try {
    const openai = new OpenAI();
    const { image } = await req.json();
    const completion = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{
        role: "user",
        content: [
          { type: "text", text: "Describe this image." },
          { type: "image_url", image_url: { url: image } },
        ],
      }],
    });
    return Response.json(completion.choices[0].message);
  } catch (err) {
    return new Response(String(err), { status: 500 });
  }
}
```
```typescript
console.log(`📡 Starting observer for transcription call: ${callId}`);
// Connect to OpenAI WebSocket for this call
const wsUrl = `wss://api.openai.com/v1/realtime?call_id=${callId}`;
const ws = new WebSocket(wsUrl, {
  headers: makeHeaders(),
});
```
# hello-transcription
Real-time speech transcription using OpenAI's Realtime API - a demonstration of transcription-only mode.
## Features
## How It Works
This app uses OpenAI's Realtime API in transcription-only mode:
1. Your voice is captured via WebRTC
2. Audio is streamed to OpenAI's transcription service
3. Transcriptions are returned in real-time
4. No AI responses are generated (transcription only)
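The transcription-only behavior in step 4 comes from the session configuration the client sends once connected. A minimal sketch, assuming Realtime-API-style field names (the exact shape is not shown in this excerpt):

```typescript
// Hypothetical transcription-session configuration; field names follow
// the Realtime API's transcription-session shape but are assumptions here.
const sessionUpdate = {
  type: "transcription_session.update",
  session: {
    input_audio_format: "pcm16", // matches the PCM16 format noted under Technical Details
    input_audio_transcription: { model: "gpt-4o-transcribe" },
    turn_detection: { type: "server_vad" }, // server-side voice activity detection
  },
};
// Once the WebRTC data channel is open:
//   dataChannel.send(JSON.stringify(sessionUpdate));
```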
## Environment Variables
Set in your Val Town environment:
- `OPENAI_API_KEY` - Your OpenAI API key (required)
## Local Development
1. Fork/remix this val on Val Town
2. Add your `OPENAI_API_KEY` to Val Town secrets
3. Your app will be available at `https://[your-val-name].val.run`
## Technical Details
The app uses OpenAI's Realtime API in transcription mode:
- Session type: `transcription` (not `realtime`)
- Audio format: PCM16
## Credits
Built with OpenAI's Realtime API for transcription-only use cases.
# Hello-Transcription - OpenAI Realtime API Transcription Demo
## 🎯 Project Overview
Hello-Transcription demonstrates the transcription-only mode of OpenAI's Realtime API. Unlike the conversational mode, it generates no AI responses: audio in, text out.
**Created:** September 2, 2025
**Platform:** Val Town
**API:** OpenAI Realtime API (Transcription Mode)
**Key Feature:** Real-time streaming transcription with multiple model support
- **Runtime:** Deno (Val Town platform)
- **Framework:** Hono (lightweight web framework)
- **Transcription:** OpenAI Realtime API in transcription mode
- **Connection:** WebRTC with data channel for events
- **Frontend:** Vanilla JavaScript with split-view interface
1. **Audio Input**
```
User speaks → Microphone → WebRTC → OpenAI
```
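On the return path, transcriptions arrive as JSON events over the WebRTC data channel. A sketch of handling the streaming deltas, assuming Realtime-API-style event names (`conversation.item.input_audio_transcription.delta` / `.completed`), which this excerpt does not show:

```typescript
// Accumulates streaming transcription deltas until the final transcript
// arrives. Event type strings are assumptions based on the Realtime API.
type RealtimeEvent = { type: string; delta?: string; transcript?: string };

function handleTranscriptionEvent(ev: RealtimeEvent, buffer: string[]): string | null {
  if (ev.type === "conversation.item.input_audio_transcription.delta" && ev.delta) {
    buffer.push(ev.delta); // partial text, shown live in the UI
    return null;
  }
  if (ev.type === "conversation.item.input_audio_transcription.completed") {
    buffer.length = 0; // reset for the next utterance
    return ev.transcript ?? "";
  }
  return null;
}
```

The UI can render `buffer.join("")` as the live partial transcript and swap it for the completed one when it arrives.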
```bash
# Create .env file
echo "OPENAI_API_KEY=sk-..." > .env

# Install Deno
```
**Solutions:**
- Check microphone permissions
- Verify OPENAI_API_KEY is set
- Check browser console for errors
- Ensure WebRTC connection established
2. **Set Environment**
- Add `OPENAI_API_KEY` in Val Town secrets
3. **Deploy**
### Environment Variables
- `OPENAI_API_KEY` - Required for OpenAI API access
## 📝 Future Enhancements
### Documentation
- [OpenAI Realtime Transcription Guide](https://platform.openai.com/docs/guides/realtime-transcr
- [Realtime API Reference](https://platform.openai.com/docs/api-reference/realtime)
- [Voice Activity Detection Guide](https://platform.openai.com/docs/guides/realtime-vad)
- [Val Town Documentation](https://docs.val.town)
## 🎯 Summary
Hello-Transcription fully demonstrates the transcription-only capabilities of OpenAI's Realtime API. Key achievements:
1. **Pure Transcription**: No AI responses, focused solely on speech-to-text
# hello-mcp
**Hello MCP** is a remix of OpenAI's hello-realtime demo that adds Model Context Protocol (MCP) support.
This demo showcases:
- WebRTC-based voice conversations with OpenAI's Realtime API
- **MCP (Model Context Protocol)** for secure server-side tool execution
- A demo `getFavoriteFood` tool to demonstrate MCP functionality
- Toggle between standard and MCP-enabled modes
If you remix the app, you'll just need to pop in your own `OPENAI_API_KEY` (from
[platform.openai.com](https://platform.openai.com)).
## What's New: MCP Support
1. **Standard Flow**:
   - Browser connects to frontend
   - Creates WebRTC offer
   - `/rtc` endpoint handles SDP negotiation with OpenAI
   - Observer established to monitor session
2. **MCP-Enhanced Flow** (when enabled):
- Same WebRTC setup as above
- OpenAI discovers tools via MCP protocol (`/mcp`)
- Tools execute server-side when invoked
- Results returned through MCP protocol
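The demo `getFavoriteFood` tool would be advertised by the `/mcp` endpoint during tool discovery and executed server-side when invoked. A hedged sketch; only the tool name comes from the source, and the schema fields and handler body are illustrative assumptions:

```typescript
// Illustrative MCP tool description, as a tools/list response entry.
const getFavoriteFoodTool = {
  name: "getFavoriteFood",
  description: "Demo tool: returns a person's favorite food.",
  inputSchema: {
    type: "object",
    properties: { person: { type: "string" } },
    required: ["person"],
  },
};

// Hypothetical server-side handler dispatched when the tool is invoked.
function getFavoriteFood(person: string): string {
  return `${person}'s favorite food is pizza`; // canned demo data
}
```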
```typescript
// Create the call.
const url = "https://api.openai.com/v1/realtime/calls";
const headers = makeHeaders();
const fd = new FormData();
```
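The snippet is cut off here; one plausible continuation attaches the browser's SDP offer to the form data before POSTing it. A sketch under assumptions: the `sdp` field name and text-body answer are not confirmed by this excerpt:

```typescript
// Hypothetical continuation of the call-creation snippet above.
// The "sdp" form-field name is an assumption.
function buildCallRequest(offerSdp: string): { url: string; body: FormData } {
  const url = "https://api.openai.com/v1/realtime/calls";
  const body = new FormData();
  body.set("sdp", offerSdp); // the client's SDP offer
  return { url, body };
}
// The server would then POST it and relay the SDP answer back:
//   const resp = await fetch(url, { method: "POST", headers: makeHeaders(), body });
//   const answerSdp = await resp.text();
```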