You have a cron job that processes research questions against multiple
transcripts using OpenAI, then updates a Notion report page. The original
implementation had two problems:
1. Used non-existent OpenAI API endpoints
2. Processed all questions in memory then updated Notion once at the end (bad
for 30+ questions)
Rough notes: does the openai service first ask a question of all transcripts,
generate an answer per question, and then look for consensus across all 10
answers from all 10 transcripts? do not write code, just answer the question.
Yes, exactly! The OpenAI service uses a two-stage approach:

**Stage 1 (Individual Analysis):** each transcript is analyzed separately against the question, producing:
- Supporting quotes
- Rationale
- Confidence score

**Stage 2 (Consensus Synthesis):**
- Takes all the individual analyses from Stage 1
- Feeds them to a second OpenAI call that acts as a "senior research synthesizer"
- This second call looks across all the individual answers to determine:
  - Overall consensus level (high/medium/low)
  - A direct answer that synthesizes all individual responses
  - Key points most transcripts agree on
  - Notable disagreements or outliers
  - Overall confidence score

So if you have 10 transcripts, it makes 10 individual OpenAI calls in Stage 1, then 1 synthesis call in Stage 2 (a total of 11 API calls per question). The final answer is based on finding patterns and consensus across the individual answers.
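The N-plus-one call pattern described above can be sketched as below. This is a hypothetical outline, not the actual service code: `ask` stands in for a real OpenAI request, and all names are illustrative.

```ts
// Hypothetical stand-in for one OpenAI request.
type Ask = (prompt: string) => Promise<string>;

export async function answerQuestion(
  ask: Ask,
  question: string,
  transcripts: string[],
): Promise<{ individual: string[]; consensus: string }> {
  // Stage 1: one call per transcript (N calls), run in parallel.
  const individual = await Promise.all(
    transcripts.map((t) => ask(`Question: ${question}\nTranscript:\n${t}`)),
  );
  // Stage 2: one synthesis call across all Stage 1 answers (N + 1 calls total).
  const consensus = await ask(
    `Question: ${question}\nIndividual answers:\n${individual.join("\n---\n")}`,
  );
  return { individual, consensus };
}
```

With 10 transcripts this makes exactly 11 `ask` invocations per question, matching the count above.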
## Goal
Implement a 2-stage OpenAI analysis with incremental Notion updates for
real-time progress visibility and fault tolerance.
**Architecture:**
- **Controller** (`findings.controller.ts`): Business logic orchestration, environment variables, error handling
- **Services** (`openai.service.ts`, `notion.service.ts`): Pure API calls with
consistent response format
- **Types** (`findings.types.ts`): Shared interfaces
## Step 1: Rewrite OpenAI Service (`/backend/services/openai.service.ts`)
**Replace entire file with 2-stage implementation:**
### Stage 1: Individual Analysis
**Function**:
`analyzeIndividualTranscript(openai, model, question, transcript, transcriptKey, maxChars)`
**Implementation details:**
- Use `openai.chat.completions.create()` with actual OpenAI client
- Model: `gpt-4o-mini` (cost efficient)
- System prompt: "You are a meticulous research analyst. Analyze the provided …"
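A minimal sketch of what this Stage 1 function could look like, assuming a chat-completions-style client. The client type, message layout, and return shape here are placeholders for illustration, not the original implementation.

```ts
// Minimal structural type so the sketch doesn't depend on a specific SDK import.
type ChatClient = {
  chat: { completions: { create: (args: unknown) => Promise<any> } };
};

export async function analyzeIndividualTranscript(
  openai: ChatClient,
  model: string, // e.g. "gpt-4o-mini"
  question: string,
  transcript: string,
  transcriptKey: string,
  maxChars: number, // truncate long transcripts before sending
) {
  const excerpt = transcript.slice(0, maxChars);
  const resp = await openai.chat.completions.create({
    model,
    messages: [
      { role: "system", content: "You are a meticulous research analyst." },
      { role: "user", content: `Question: ${question}\n\nTranscript (${transcriptKey}):\n${excerpt}` },
    ],
  });
  return {
    transcriptKey,
    answer: resp.choices[0]?.message?.content ?? "",
  };
}
```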
### Stage 2: Consensus Synthesis
**Function**: `synthesizeConsensus(openai, model, question, individualAnalyses)`
**Implementation details:**
- Use `openai.chat.completions.create()` with actual OpenAI client
- Model: `gpt-4o` (high quality reasoning)
- System prompt: "You are a senior research synthesizer. You will receive …"
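A hedged sketch of the Stage 2 function, under the same assumptions as Stage 1: a chat-completions-style client, with the prompt layout and joined-analyses format invented for illustration.

```ts
// Minimal structural type so the sketch doesn't depend on a specific SDK import.
type ChatClient = {
  chat: { completions: { create: (args: unknown) => Promise<any> } };
};

export async function synthesizeConsensus(
  openai: ChatClient,
  model: string, // e.g. "gpt-4o"
  question: string,
  individualAnalyses: { transcriptKey: string; answer: string }[],
): Promise<string> {
  // Join the Stage 1 answers into one block for the synthesis prompt.
  const joined = individualAnalyses
    .map((a) => `--- ${a.transcriptKey} ---\n${a.answer}`)
    .join("\n\n");
  const resp = await openai.chat.completions.create({
    model,
    messages: [
      { role: "system", content: "You are a senior research synthesizer." },
      { role: "user", content: `Question: ${question}\n\nIndividual analyses:\n${joined}` },
    ],
  });
  return resp.choices[0]?.message?.content ?? "";
}
```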
1. Validate inputs (apiKey, question, transcripts)
2. Create OpenAI client with timeout
3. Stage 1: Process all transcripts in parallel (with limit) → individual
analyses
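Step 3's "in parallel (with limit)" can be implemented with a small concurrency helper. This `mapWithLimit` is a hypothetical sketch, not code from the repo:

```ts
// Run `fn` over `items` with at most `limit` calls in flight at once,
// preserving input order in the results.
export async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // synchronous read-and-increment, so no two workers share an index
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker),
  );
  return results;
}
```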
**Controller responsibilities:**
- Handle environment variables (`OPENAI_API_KEY`, database IDs)
- Orchestrate multiple service calls
- Implement business rules (incremental Notion updates after each question)
- Error recovery (continue processing other questions if one fails)
- Progress tracking and logging
- Data transformation between OpenAI and Notion formats
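The incremental-update and continue-on-failure rules can be sketched as a loop over questions. `analyze` and `updateNotion` here are hypothetical stand-ins for the real service calls:

```ts
// Push each result to Notion as soon as it is ready, and keep going
// when a single question fails.
export async function processQuestions(
  questions: string[],
  analyze: (q: string) => Promise<string>,
  updateNotion: (q: string, answer: string) => Promise<void>,
): Promise<{ succeeded: number; failed: number }> {
  let succeeded = 0, failed = 0;
  for (const q of questions) {
    try {
      const answer = await analyze(q);
      await updateNotion(q, answer); // incremental: one Notion write per question
      succeeded++;
    } catch (err) {
      console.error(`Question failed, continuing: ${q}`, err);
      failed++;
    }
  }
  return { succeeded, failed };
}
```

Because each question is written out before the next one starts, a crash midway still leaves the completed answers visible in Notion.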
## Step 4: Update Types (`/backend/types/findings.types.ts`)
**Verification checklist:**
1. No syntax errors
2. Handles "No report pages found" gracefully
3. OpenAI service works with sample data
4. Service response formats are consistent
```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

function parseMarkdown(text: string) { /* body truncated in the snippet */ }

export async function summarizeEvents(events: string[]) {
  const resp = await openai.chat.completions.create({
    model: "gpt-4.1",
    // The original prompt was cut off; this message body is illustrative.
    messages: [{ role: "user", content: `Summarize these events:\n${events.join("\n")}` }],
  });
  return resp.choices[0]?.message?.content ?? "";
}
```
- **Blob Storage**: `import { blob } from "https://esm.town/v/std/blob";`
- **SQLite**: `import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";`
- **OpenAI**: `import { OpenAI } from "https://esm.town/v/std/openai";`
- **Email**: `import { email } from "https://esm.town/v/std/email";`
- **Utilities**: `import { parseProject, readFile, serveFile } from "https://esm.town/v/std/util
- `main.ts` - HTTP server and API endpoints
- `index.html` - UI
- `openai.ts` - AI content generation
- `database.ts` - Blob storage to store and process the .txt files
Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a new table with the new schema.
### OpenAI
```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4o-mini", // model line truncated in the source; gpt-4o-mini is used elsewhere in this doc
});
console.log(completion.choices[0].message.content);
```
```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

export interface BlogPost { /* fields truncated in the snippet */ }

export async function generateBlogPost(transcript: string): Promise<BlogPost> {
  const completion = await openai.chat.completions.create({
    messages: [{ role: "user" /* prompt truncated in the snippet */ }],
  });
  // … rest truncated
}

export async function generateNewsletter(transcript: string): Promise<Newsletter> {
  const completion = await openai.chat.completions.create({
    messages: [{ role: "user" /* prompt truncated in the snippet */ }],
  });
  // … rest truncated
}
```
```ts
import { serveFile } from "https://esm.town/v/std/utils@85-main/index.ts";
import { uploadTranscript, getTranscripts, deleteTranscript, clearAllTranscripts } from "./datab
import { generateBlogPost, generateNewsletter } from "./openai.ts";

const app = new Hono();
```

```ts
import { Hono } from "npm:hono";
import { slack } from "./slack.ts";
import { icp } from "./openai.ts";

const app = new Hono();
```
```ts
import { OpenAI } from "https://esm.town/v/std/openai";
import { z } from "npm:zod@3.23.8";
import { zodResponseFormat } from "npm:openai@5.12.2/helpers/zod";

const openai = new OpenAI();

const ICPResult = z.object({ /* schema fields truncated in the snippet */ });

const messages = [{ /* prompt truncated in the snippet */ }];

const resp = await openai.chat.completions.parse({
  model: "gpt-5-mini",
  messages,
  response_format: zodResponseFormat(ICPResult, "icp_result"), // implied by the import; the name is a guess
});
```
```ts
import OpenAI from "npm:openai";

const client = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY") });

const extractFeaturedImage = (html: string) =>
  // function body truncated in the snippet
```