Note: When changing a SQLite table's schema, rename the table (e.g., append `_2` or `_3`) so a fresh table is created.

### OpenAI

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4",
  max_tokens: 30,
});
```
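The rename-on-schema-change convention above can be sketched as a small helper. The base name `voice_notes` and the column set are hypothetical, for illustration only:

```typescript
// Sketch of the table-versioning convention: bump SCHEMA_VERSION whenever
// the schema changes, so CREATE TABLE IF NOT EXISTS targets a fresh table.
// (Table and column names here are assumptions, not the app's real schema.)
const SCHEMA_VERSION = 2;
const TABLE = `voice_notes_${SCHEMA_VERSION}`;

function createTableSQL(table: string): string {
  return `CREATE TABLE IF NOT EXISTS ${table} (
  id TEXT PRIMARY KEY,
  audio_key TEXT NOT NULL,
  transcript TEXT,
  created_at INTEGER NOT NULL
)`;
}
```

Because the old table is left untouched, a bad migration never corrupts existing rows; you simply bump the suffix again.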
```ts
import { Hono } from "https://esm.sh/hono@3.11.7";
import { blob } from "https://esm.town/v/std/blob";
import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";

const app = new Hono();
const openai = new OpenAI();

// Get all voice notes (for admin/dashboard)

async function transcribeAudio(voiceNoteId: string, audioBuffer: ArrayBuffer) {
  try {
    // Convert ArrayBuffer to File for OpenAI
    const audioFile = new File([audioBuffer], "audio.webm", { type: "audio/webm" });
    const transcription = await openai.audio.transcriptions.create({
      file: audioFile,
      model: "whisper-1",
    });
    // ...
  } catch (error) {
    // ...
  }
}
```
- 🎙️ Record voice notes directly in the browser
- 🤖 AI-powered transcription using OpenAI Whisper
- 🔗 Share voice notes via unique URLs
- ⏰ Set expiration by max listens or date
- **Database**: SQLite for voice note metadata
- **Storage**: Val Town Blob storage for audio files
- **AI**: OpenAI Whisper for transcription
- **Frontend**: React with TypeScript
- **Styling**: TailwindCSS
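The "expiration by max listens or date" feature above can be sketched as a pure check. The metadata shape and field names are assumptions for illustration, not the app's actual schema:

```typescript
// Hypothetical voice-note metadata; field names are illustrative assumptions.
interface VoiceNoteMeta {
  listens: number;
  maxListens?: number; // undefined = unlimited listens
  expiresAt?: number;  // epoch ms; undefined = never expires by date
}

// A note is expired once it reaches its listen cap or its expiry time passes.
function isExpired(note: VoiceNoteMeta, now: number = Date.now()): boolean {
  if (note.maxListens !== undefined && note.listens >= note.maxListens) return true;
  if (note.expiresAt !== undefined && now >= note.expiresAt) return true;
  return false;
}
```

Keeping the check pure (time passed in as a parameter) makes both expiry paths easy to test without waiting on a real clock.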
```ts
export async function projectIdea(topic = "الصحة" /* "health" */) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      // ...
    }),
  });
  // ...
}
```
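When calling the raw HTTP endpoint as above, the reply text lives at `choices[0].message.content` in the parsed JSON. A defensive extractor might look like this sketch (not part of the original val):

```typescript
// Minimal shape of the fields we read from a chat completions response.
interface ChatCompletionLike {
  choices?: { message?: { content?: string | null } }[];
}

// Pull out the assistant text, falling back to an empty string
// when the response is missing any of the expected fields.
function extractContent(data: ChatCompletionLike): string {
  return data.choices?.[0]?.message?.content ?? "";
}
```

Optional chaining keeps a malformed or error-shaped response from throwing mid-handler; the caller can treat `""` as "no usable reply".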
**reconsumeralization**

```ts
import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";

/**
 * Practical Implementation of Collective Content Intelligence
 * Bridging advanced AI with collaborative content creation
 */
```
**kwhinnery_openai**

**lost1991**

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function(req: Request): Promise<Response> {
  if (req.method === "OPTIONS") {
    return new Response(null, {
      headers: {
        "Access-Control-Allow-Origin": "*",
      },
    });
  }
  // ...
}
```
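The preflight handling in the snippet above can be factored into a small standalone helper. The header set beyond `Access-Control-Allow-Origin` is an assumption about a typical permissive configuration, not taken from the original val:

```typescript
// Build a permissive CORS header set (an assumed, typical configuration).
function corsHeaders(origin: string = "*"): Record<string, string> {
  return {
    "Access-Control-Allow-Origin": origin,
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
  };
}

// Answer OPTIONS preflights with 204 + CORS headers; return null otherwise
// so the caller can fall through to its normal request handling.
function handlePreflight(req: Request): Response | null {
  if (req.method !== "OPTIONS") return null;
  return new Response(null, { status: 204, headers: corsHeaders() });
}
```

Returning `null` for non-OPTIONS requests keeps the helper composable: the main handler checks it first and only continues when it gets `null` back.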