```ts
import { Hono } from "npm:hono@4.4.12";
import { streamSSE } from "npm:hono@4.4.12/streaming";
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
import { blob } from "https://esm.town/v/std/blob?v=11";

const app = new Hono();
const openai = new OpenAI();

async function getSim(run_id) {
  // ...
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: messages,
    // ...
  });
  // ...
}
```
```ts
import { getIssueContentAsMarkdown } from "../../api/index.tsx";
import { GreenPTClient } from "https://esm.town/v/cricks_unmixed4u/openai-client@45-main/main.tsx";
import { fetchIssuesUpdatedSince } from "../../shared/github-client.ts";
```
Note: When changing a SQLite table's schema, change the table's name (e.g., add `_2` or `_3`) to create a fresh table.

### OpenAI

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4o-mini",
});
console.log(completion.choices[0].message.content);
```
```ts
// @ts-ignore
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
// For parsing HTML content from web requests
import * as cheerio from "npm:cheerio@1.0.0-rc.12";

if (!prompt) throw new Error("Prompt is required.");
const openai = new OpenAI();

const stream = new ReadableStream({
  // ...
});

const workerResponseStream = await openai.chat.completions.create({
  model: "gpt-4o",
  // ...
});

const verifierResponse = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    // ...
  ],
});

const formatterResponse = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    // ...
  ],
});
```
```ts
/**
 * Stage 2: Synthesize consensus across all individual analyses
 *
 * Uses actual OpenAI Chat Completions API with proper error handling
 */
import { OpenAI } from "https://esm.town/v/std/openai";

/*** Types ***/
// ...

async function analyzeIndividualTranscript(
  openai: OpenAI,
  model: string,
  question: string,
  // ...
) {
  try {
    const completion = await openai.chat.completions.create({
      model,
      messages: [
        // ...
      ],
    });
    const content = completion.choices[0]?.message?.content;
    if (!content) {
      throw new Error("No response content from OpenAI");
    }
    // ...
  } // ...
}

async function synthesizeConsensus(
  openai: OpenAI,
  model: string,
  question: string,
  // ...
) {
  try {
    const completion = await openai.chat.completions.create({
      model,
      messages: [
        // ...
      ],
    });
    const content = completion.choices[0]?.message?.content;
    if (!content) {
      throw new Error("No response content from OpenAI");
    }
    // ...
  } // ...
}

// Validation
if (!request?.apiKey) {
  return { success: false, error: "Missing OpenAI API key", timestamp };
}
if (!request?.question?.trim()) {
  // ...
}

const config = { ...DEFAULTS, ...options };
const openai = new OpenAI({ apiKey: request.apiKey });

try {
  const analysis = await analyzeIndividualTranscript(
    openai,
    config.extractModel,
    request.question,
    // ...
  );

  console.log("Stage 2: Consensus synthesis...");
  const consensus = await synthesizeConsensus(
    openai,
    config.consensusModel,
    request.question,
    // ...
  );
  // ...
} // ...
```
```ts
/**
 * Test endpoint for OpenAI consensus analysis
 * Tests the simplified two-stage approach with timeout protection
 */
import { analyzeTranscriptsWithConsensus } from "../services/openai.service.ts";

// Sample test data - small, controlled transcripts
// ...

function estimateCost(inputTokens: number, outputTokens: number): number {
  // GPT-5 pricing (approximate - check current OpenAI pricing)
  const INPUT_COST_PER_1K = 0.01; // $0.01 per 1K input tokens
  const OUTPUT_COST_PER_1K = 0.03; // $0.03 per 1K output tokens
  // ...
}

export default async function(req: Request): Promise<Response> {
  const startTime = Date.now();
  console.log("Starting OpenAI service test...");
  try {
    // Get API key from environment
    const apiKey = Deno.env.get("OPENAI_API_KEY");
    if (!apiKey) {
      return new Response(JSON.stringify({
        success: false,
        error: "Missing OPENAI_API_KEY environment variable",
        timestamp: new Date().toISOString(),
      }), { /* ... */ });
    }
    // ...
  } catch (err) {
    // ...
  }
}
```
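The cost estimate described in that excerpt can be completed as a small pure function. The per-1K rates mirror the constants in the excerpt and are approximate placeholders, not current OpenAI pricing:

```typescript
// Token-cost estimate using the approximate rates from the excerpt above.
// These constants are placeholders; check current OpenAI pricing before relying on them.
const INPUT_COST_PER_1K = 0.01; // $0.01 per 1K input tokens
const OUTPUT_COST_PER_1K = 0.03; // $0.03 per 1K output tokens

function estimateCost(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1000) * INPUT_COST_PER_1K
    + (outputTokens / 1000) * OUTPUT_COST_PER_1K;
}
```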
You have a cron job that processes research questions against multiple transcripts using OpenAI, then updates a Notion report page. The original implementation had two problems:

1. Used non-existent OpenAI API endpoints
2. Processed all questions in memory, then updated Notion once at the end (bad for 30+ questions)

Rough notes: does the OpenAI service first ask a question of all transcripts, generate an answer per transcript, and then look for consensus across all 10 answers from all 10 transcripts? Do not write code, just answer the question.

Yes, exactly! The OpenAI service uses a two-stage approach:

**Stage 1 (Individual Analysis):** each transcript is analyzed independently, producing:
- Supporting quotes
- Rationale
- Confidence score

**Stage 2 (Consensus Synthesis):** takes all the individual analyses from Stage 1 and feeds them to a second OpenAI call that acts as a "senior research synthesizer". This second call looks across all the individual answers to determine:
- Overall consensus level (high/medium/low)
- A direct answer that synthesizes all individual responses
- Key points most transcripts agree on
- Notable disagreements or outliers
- Overall confidence score

So if you have 10 transcripts, it makes 10 individual OpenAI calls in Stage 1, then 1 synthesis call in Stage 2 (a total of 11 API calls per question).
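A minimal sketch of that two-stage flow, assuming an injected `complete` callback in place of the real chat-completions client (all names here are illustrative, not the actual service code):

```typescript
// Sketch of the two-stage consensus flow. `complete` stands in for a
// chat-completions call; function and type names are hypothetical.
type Analysis = { transcriptKey: string; answer: string };

async function analyzeWithConsensus(
  question: string,
  transcripts: Record<string, string>,
  complete: (system: string, user: string) => Promise<string>,
): Promise<{ analyses: Analysis[]; consensus: string; apiCalls: number }> {
  // Stage 1: one call per transcript, run in parallel
  const analyses: Analysis[] = await Promise.all(
    Object.entries(transcripts).map(async ([transcriptKey, text]) => ({
      transcriptKey,
      answer: await complete(
        "You are a meticulous research analyst.",
        `Question: ${question}\n\nTranscript:\n${text}`,
      ),
    })),
  );

  // Stage 2: a single synthesis call across all individual analyses
  const consensus = await complete(
    "You are a senior research synthesizer.",
    `Question: ${question}\n\nIndividual analyses:\n${JSON.stringify(analyses)}`,
  );

  // 10 transcripts -> 10 + 1 = 11 API calls per question
  return { analyses, consensus, apiCalls: analyses.length + 1 };
}
```

Injecting `complete` keeps the orchestration testable without a live API key.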
The final answer is based on finding patterns and consensus across the individual analyses.

## Goal

Implement a 2-stage OpenAI analysis with incremental Notion updates for real-time progress visibility and fault tolerance.

- **Controller** (`findings.controller.ts`): Business logic orchestration, environment variables, error handling
- **Services** (`openai.service.ts`, `notion.service.ts`): Pure API calls with consistent response format
- **Types** (`findings.types.ts`): Shared interfaces

## Step 1: Rewrite OpenAI Service (`/backend/services/openai.service.ts`)

**Replace entire file with 2-stage implementation.**

### Stage 1: Individual Analysis

**Function**: `analyzeIndividualTranscript(openai, model, question, transcript, transcriptKey, maxChars)`

**Implementation details:**
- Use `openai.chat.completions.create()` with the actual OpenAI client
- Model: `gpt-4o-mini` (cost efficient)
- System prompt: "You are a meticulous research analyst. Analyze the provided…"

### Stage 2: Consensus Synthesis

**Function**: `synthesizeConsensus(openai, model, question, individualAnalyses)`

**Implementation details:**
- Use `openai.chat.completions.create()` with the actual OpenAI client
- Model: `gpt-4o` (high-quality reasoning)
- System prompt: "You are a senior research synthesizer. You will receive…"

**Main flow:**
1. Validate inputs (apiKey, question, transcripts)
2. Create OpenAI client with timeout
3. Stage 1: Process all transcripts in parallel (with limit) → individual analyses

**Controller responsibilities:**
- Handle environment variables (`OPENAI_API_KEY`, database IDs)
- Orchestrate multiple service calls
- Implement business rules (incremental updates, error recovery)
- Error recovery (continue processing other questions if one fails)
- Progress tracking and logging
- Data transformation between OpenAI and Notion formats

## Step 4: Update Types (`/backend/types/findings.types.ts`)

**Verification:**
1. No syntax errors
2. Handles "No report pages found" gracefully
3. OpenAI service works with sample data
4. Service response formats are consistent
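The incremental-update and error-recovery rules above can be sketched as a controller loop. The `analyze` and `updateNotion` callbacks are injected stand-ins for the real `openai.service.ts` / `notion.service.ts` calls, so the names and shapes here are assumptions:

```typescript
// Hedged sketch of the controller loop: update Notion after each question
// instead of once at the end, and keep going when a single question fails.
type Finding = { question: string; answer?: string; error?: string };

async function processQuestions(
  questions: string[],
  analyze: (q: string) => Promise<string>,
  updateNotion: (f: Finding) => Promise<void>,
): Promise<Finding[]> {
  const findings: Finding[] = [];
  for (const question of questions) {
    try {
      const answer = await analyze(question);
      const finding: Finding = { question, answer };
      // Incremental update: progress is visible in Notion per question
      await updateNotion(finding);
      findings.push(finding);
    } catch (err) {
      // Error recovery: record the failure and continue with the rest
      findings.push({ question, error: String(err) });
    }
  }
  return findings;
}
```

With 30+ questions, this bounds the damage of any single failure to one question and one Notion update.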
```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

function parseMarkdown(text: string) {
  // ...
}

export async function summarizeEvents(events: string[]) {
  const resp = await openai.chat.completions.create({
    model: "gpt-4.1",
    messages: [{
      // ...
    }],
  });
  // ...
}
```
- **Blob Storage**: `import { blob } from "https://esm.town/v/std/blob";`
- **SQLite**: `import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";`
- **OpenAI**: `import { OpenAI } from "https://esm.town/v/std/openai";`
- **Email**: `import { email } from "https://esm.town/v/std/email";`
- **Utilities**: `import { parseProject, readFile, serveFile } from "https://esm.town/v/std/utils@85-main/index.ts";`
```ts
import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";

/**
 * Practical Implementation of Collective Content Intelligence
 * Bridging advanced AI with collaborative content creation
 */
// ...
```
```ts
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function(req: Request): Promise<Response> {
  // Handle CORS preflight requests
  if (req.method === "OPTIONS") {
    return new Response(null, {
      headers: {
        "Access-Control-Allow-Origin": "*",
        // ...
      },
    });
  }
  // ...
}
```