# hello-realtime

**Hello Realtime** is an OpenAI Realtime app that supports both WebRTC and SIP (telephone) users. You can access the app via WebRTC at [hello-realtime.val.run](https://hello-realtime.val.run), or via SIP by calling 425-800-0042. Sessions are monitored through a server-side websocket interface.

If you remix the app, you'll just need to pop in your own `OPENAI_API_KEY` (from [platform.openai.com](https://platform.openai.com)), and if you want SIP, the `OPENAI_SIGNING_SECRET`.

## Architecture

1. **WebRTC Flow**:
   - Browser connects to frontend
   - Creates WebRTC offer
   - `/rtc` endpoint handles SDP negotiation with OpenAI
   - Observer established to monitor session
2. **SIP Flow**:
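The `/rtc` SDP-negotiation step in the WebRTC flow can be pictured as forwarding the browser's SDP offer to OpenAI and relaying the SDP answer back. Below is a minimal sketch, not this repo's actual code: the helper name `buildSdpRequest`, the exact endpoint path, and the model name are assumptions.

```typescript
// Hypothetical sketch of the /rtc step: package the browser's SDP offer as an
// HTTP request to OpenAI's Realtime endpoint. The endpoint URL and model are
// illustrative assumptions, not confirmed by this repo.
function buildSdpRequest(offerSdp: string, apiKey: string) {
  return {
    url: "https://api.openai.com/v1/realtime/calls?model=gpt-realtime",
    init: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/sdp", // body is raw SDP, not JSON
      },
      body: offerSdp,
    },
  };
}

const req = buildSdpRequest("v=0...", "sk-test");
console.log(req.init.method); // "POST"
```

The server would then `fetch(req.url, req.init)` and return the response body (the SDP answer) to the browser to complete negotiation.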
```ts
observer.post("/:callId", async (c) => {
  const callId = c.req.param("callId");
  const url = `wss://api.openai.com/v1/realtime?call_id=${callId}`;
  const ws = new WebSocket(url, { headers: makeHeaders() });
  ws.on("open", () => {
```
```html
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>OpenAI Realtime API Voice Agent</title>
<style>
  :root {
```
```ts
// @ts-ignore
import { OpenAI } from "https://esm.town/v/std/openai?v=4";

// --- AI BEHAVIORAL GUIDELINES ---
if (req.method === "POST") {
  try {
    const openai = new OpenAI();
    const { image } = await req.json();
    const completion = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [
```
```ts
console.log(`📡 Starting observer for transcription call: ${callId}`);

// Connect to OpenAI WebSocket for this call
const wsUrl = `wss://api.openai.com/v1/realtime?call_id=${callId}`;
const ws = new WebSocket(wsUrl, {
  headers: makeHeaders(),
```
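Once the observer socket is connected, incoming events can be filtered down to transcript text. A minimal sketch, assuming the Realtime API's `conversation.item.input_audio_transcription.*` event names; `extractTranscript` is a hypothetical helper, not part of this repo:

```typescript
// Hypothetical helper: pull transcript text out of a raw Realtime event
// message. Returns null for events that carry no transcript payload.
function extractTranscript(raw: string): string | null {
  const event = JSON.parse(raw);
  switch (event.type) {
    case "conversation.item.input_audio_transcription.delta":
      return event.delta ?? null; // incremental transcript chunk
    case "conversation.item.input_audio_transcription.completed":
      return event.transcript ?? null; // final transcript for the item
    default:
      return null;
  }
}

const sample = JSON.stringify({
  type: "conversation.item.input_audio_transcription.completed",
  transcript: "hello world",
});
console.log(extractTranscript(sample)); // "hello world"
```

In the observer, this would be called from the socket's message handler, e.g. `ws.on("message", (data) => { const text = extractTranscript(String(data)); ... })`.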
# hello-transcription

Real-time speech transcription using OpenAI's Realtime API - a demonstration of transcription-only mode without AI responses.

## How It Works

This app uses OpenAI's Realtime API in transcription-only mode:

1. Your voice is captured via WebRTC
2. Audio is streamed to OpenAI's transcription service
3. Transcriptions are returned in real-time
4. No AI responses are generated (transcription only)

Set in your Val Town environment:

- `OPENAI_API_KEY` - Your OpenAI API key (required)

## Local Development

1. Fork/remix this val on Val Town
2. Add your `OPENAI_API_KEY` to Val Town secrets
3. Your app will be available at `https://[your-val-name].val.run`

## Technical Details

The app uses OpenAI's Realtime API in transcription mode:

- Session type: `transcription` (not `realtime`)
- Audio format: PCM16

## Credits

Built with OpenAI's Realtime API for transcription-only use cases.
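The transcription-only session described above can be expressed as a small config payload. This is a rough sketch built only from the README's stated facts (session type `transcription`, PCM16 audio); the nesting of the `audio` fields and the model name are illustrative assumptions:

```typescript
// Sketch of a transcription-mode session config, per the README's notes
// (type "transcription", PCM16 audio). Field layout and model name are
// assumptions for illustration, not this app's actual payload.
function buildTranscriptionSession(model: string) {
  return {
    type: "transcription", // not "realtime": no AI responses are generated
    audio: {
      input: {
        format: "pcm16",
        transcription: { model },
      },
    },
  };
}

const session = buildTranscriptionSession("gpt-4o-transcribe");
console.log(session.type); // "transcription"
```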
# Hello-Transcription - OpenAI Realtime API Transcription Demo

## 🎯 Project Overview

Hello-Transcription demonstrates the transcription-only mode of OpenAI's Realtime API. Unlike the conversational mode, this implementation focuses purely on speech-to-text conversion without generating AI responses, making it ideal for subtitles, live captions, meeting transcriptions, and other transcription-focused use cases.

**Created:** September 2, 2025
**Platform:** Val Town
**API:** OpenAI Realtime API (Transcription Mode)
**Key Feature:** Real-time streaming transcription with multiple model support

- **Runtime:** Deno (Val Town platform)
- **Framework:** Hono (lightweight web framework)
- **Transcription:** OpenAI Realtime API in transcription mode
- **Connection:** WebRTC with data channel for events
- **Frontend:** Vanilla JavaScript with split-view interface

1. **Audio Input**

   ```
   User speaks → Microphone → WebRTC → OpenAI
   ```

```bash
# Create .env file
echo "OPENAI_API_KEY=sk-..." > .env
# Install Deno
```

**Solutions:**

- Check microphone permissions
- Verify OPENAI_API_KEY is set
- Check browser console for errors
- Ensure WebRTC connection established

2. **Set Environment** - Add `OPENAI_API_KEY` in Val Town secrets
3. **Deploy**

### Environment Variables

- `OPENAI_API_KEY` - Required for OpenAI API access

## Future Enhancements

### Documentation

- [OpenAI Realtime Transcription Guide](https://platform.openai.com/docs/guides/realtime-transcription)
- [Realtime API Reference](https://platform.openai.com/docs/api-reference/realtime)
- [Voice Activity Detection Guide](https://platform.openai.com/docs/guides/realtime-vad)
- [Val Town Documentation](https://docs.val.town)

## 🎯 Summary

Hello-Transcription successfully demonstrates the transcription-only capabilities of OpenAI's Realtime API. Key achievements:

1. **Pure Transcription**: No AI responses, focused solely on speech-to-text
# hello-mcp

**Hello MCP** is a remix of OpenAI's hello-realtime demo that adds Model Context Protocol (MCP) support for server-side tool execution.

This demo showcases:

- WebRTC-based voice conversations with OpenAI's Realtime API
- **MCP (Model Context Protocol)** for secure server-side tool execution
- A demo `getFavoriteFood` tool to demonstrate MCP functionality
- Toggle between standard and MCP-enabled modes

If you remix the app, you'll just need to pop in your own `OPENAI_API_KEY` (from [platform.openai.com](https://platform.openai.com)).

## What's New: MCP Support

1. **Standard Flow**:
   - Browser connects to frontend
   - Creates WebRTC offer
   - `/rtc` endpoint handles SDP negotiation with OpenAI
   - Observer established to monitor session
2. **MCP-Enhanced Flow** (when enabled):
   - Same WebRTC setup as above
   - OpenAI discovers tools via MCP protocol (`/mcp`)
   - Tools execute server-side when invoked
   - Results returned through MCP protocol
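The server-side execution behind the `/mcp` endpoint can be pictured as a tiny tool dispatcher. This is a hedged sketch, not the repo's actual implementation: only the tool name `getFavoriteFood` comes from the README; the tool table, `callTool` helper, and return value are invented for illustration.

```typescript
// Hypothetical server-side tool table for the /mcp endpoint. When OpenAI
// invokes a tool over MCP, the server looks it up by name and runs it here,
// so the tool's logic never leaves the server.
type ToolHandler = (args: Record<string, unknown>) => string;

const tools: Record<string, ToolHandler> = {
  getFavoriteFood: () => "pizza", // demo value, invented for illustration
};

function callTool(name: string, args: Record<string, unknown> = {}): string {
  const handler = tools[name];
  if (!handler) throw new Error(`unknown tool: ${name}`);
  return handler(args);
}

console.log(callTool("getFavoriteFood")); // "pizza"
```

Tool discovery would then amount to serving the keys of `tools` (with their schemas) when OpenAI queries `/mcp`, and dispatching through `callTool` when a tool is invoked.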
```ts
import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";

/**
 * Practical Implementation of Collective Content Intelligence
 * Bridging advanced AI with collaborative content creation
 */
exp
```
```ts
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function(req: Request): Promise<Response> {
  // Handle CORS preflight requests before any other routing
  if (req.method === "OPTIONS") {
    return new Response(null, {
      headers: {
        "Access-Control-Allow-Origin": "*",
      },
    });
  }
  // ...rest of the request handling not shown
}
```