import { OpenAI } from "https://esm.town/v/std/openai";
// --- TYPE DEFINITIONS ---
}
export default async function(req: Request): Promise<Response> {
const openai = new OpenAI();
const url = new URL(req.url);
const CORS_HEADERS = {
case "synthesizeProject": {
const synthesisContent = `Current Date: ${new Date().toISOString().split("T")[0]}\n\nG
const completion = await openai.chat.completions.create({
model,
messages: [{ role: "system", content: PROJECT_SYNTHESIS_PROMPT }, {
JSON.stringify(body.tasks, null, 2)
}`;
const completion = await openai.chat.completions.create({
model,
messages: [{ role: "system", content: DAILY_REBALANCE_PROMPT }, {
conversation.unshift(contextMessage);
}
const completion = await openai.chat.completions.create({
model,
messages: [{ role: "system", content: CHAT_PROMPT }, ...conversation],
import process from "node:process";
import { marked } from "npm:marked";
import { OpenAI } from "npm:openai";
function pm(...lines: string[]): string {
);
const client = new OpenAI({ apiKey: process.env.PERPLEXITY_API_KEY, baseURL: "https://api.perplexity.ai" });
const response = await client.chat.completions.create({
model: "sonar",
1. **Examine `/backend/services/pdf.processing.service.ts` completely**
- Confirm it contains both `processPdfWithOpenAI()` and `processEmailContentWithOpenAI()` functions
- Note the exported types: `PdfExtractionResult` and `EmailExtractionResult`
- Verify the file's primary context is email processing (both content and attachments)
3. **Update import in `/backend/emails/transaction.attachments.email.ts`**
- Change: `import { processPdfWithOpenAI, processEmailContentWithOpenAI } from "../services/pdf.processing.service.ts";`
- To: `import { processPdfWithOpenAI, processEmailContentWithOpenAI } from "../services/email.processing.service.ts";`
4. **Delete `/backend/services/pdf.processing.service.ts`**
- [ ] New file `/backend/services/email.processing.service.ts` created with updated header
- [ ] Functions `processPdfWithOpenAI` and `processEmailContentWithOpenAI` remain identical
- [ ] Types `PdfExtractionResult` and `EmailExtractionResult` remain identical
- [ ] Import updated in `/backend/services/notion.transactions.service.ts`
* Read `/backend/emails/transaction.attachments.email.ts` to understand current PDF processing
* Note that it uses `processPdfWithOpenAI()` which returns `PdfExtractionResult` with structured financial data
* Identify the data being extracted: amount, date, vendor, description, type, confidence
1. Add import: `import { createNotionAttachment } from "../services/notion.transactions.service.ts";`
2. In the PDF processing loop, after each `processPdfWithOpenAI()` call, add:
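The snippet for this step is cut off in this excerpt; a hedged sketch of what it might look like, where `result` and `pdfFile` are the loop variables from the step above and the `createNotionAttachment` signature is assumed (mirroring the `createNotionEmailAttachment(emailProcessingResult, ...)` usage shown later):

```ts
// Hedged sketch; the exact snippet is not shown above and the createNotionAttachment
// signature is an assumption, not documented in this excerpt.
if (result.success) {
  const notionResult = await createNotionAttachment(result, pdfFile.name);
  console.log("Saved PDF extraction to Notion:", notionResult);
} else {
  console.error("Skipping Notion save, PDF processing failed:", result.error);
}
```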
Instructions for Implementing OpenAI PDF Processing in Val Town
===============================================================
User wants to process PDF attachments from emails using OpenAI to extract structured financial data.
Critical Requirements
* **Step 1**: Extract text from PDF locally using a PDF parsing library
* **Step 2**: Send extracted text to OpenAI chat completions (NOT Files API)
* **Do NOT** use OpenAI Files API, file attachments, or file_search tools
* **Do NOT** overcomplicate with file uploads or special OpenAI endpoints
### 2. PDF Text Extraction Library Selection
* **Handle errors gracefully** with multiple fallback approaches
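A minimal sketch of dynamic imports with a fallback, assuming `unpdf` as the primary parser and `pdf-parse` as the fallback (neither library is prescribed by these instructions; their APIs are assumptions):

```ts
// Hedged sketch only: library choices are assumptions, not requirements from this document.
async function extractPdfText(file: File): Promise<string> {
  const bytes = new Uint8Array(await file.arrayBuffer());
  try {
    // Primary attempt: unpdf (imported dynamically to avoid top-level import issues)
    const { getDocumentProxy, extractText } = await import("npm:unpdf");
    const pdf = await getDocumentProxy(bytes);
    const { text } = await extractText(pdf, { mergePages: true });
    return text;
  } catch (primaryError) {
    console.error("unpdf extraction failed, trying fallback:", primaryError);
    // Fallback attempt: pdf-parse (also imported dynamically)
    const { Buffer } = await import("node:buffer");
    const pdfParse = (await import("npm:pdf-parse")).default;
    const parsed = await pdfParse(Buffer.from(bytes));
    return parsed.text;
  }
}
```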
### 3. OpenAI Integration
* Use standard `openai` package from esm.sh: `https://esm.sh/openai@4.67.3`
* Use regular chat completions endpoint
* Include extracted text directly in the prompt
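A minimal sketch of this integration, consistent with the `analyzeTextWithOpenAI` skeleton further down (the prompt wording and client initialization here are placeholders, not the document's actual code):

```ts
import process from "node:process";
import { OpenAI } from "https://esm.sh/openai@4.67.3";

// Hedged sketch: regular chat completions with the extracted text inlined in the prompt.
async function analyzeTextWithOpenAI(openai: OpenAI, text: string): Promise<any> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o", // the model named elsewhere in this document
    messages: [{
      role: "user",
      content: `Extract amount, date, vendor, description, type and confidence as JSON from this document text:\n\n${text}`,
    }],
  });
  return completion.choices[0]?.message?.content;
}

// Usage: the API key comes from the environment (see Environment Variables below).
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
```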
* Create `/backend/services/pdf.processing.service.ts`
* Export `processPdfWithOpenAI(file: File)` function
* Update existing email trigger to import and use the service
* Update documentation in README files
// /backend/services/pdf.processing.service.ts
import { OpenAI } from "https://esm.sh/openai@4.67.3";
// Define interface for results (reconstructed from fields named elsewhere in these instructions;
// the `metadata` grouping name is an assumption, only its three fields appear in this snippet)
interface PdfExtractionResult {
  success: boolean;
  data?: {
    amount: number | null; date: string | null; vendor: string | null; description: string | null;
    type: 'receipt' | 'invoice' | null; confidence: 'high' | 'medium' | 'low';
  };
  error?: string;
  metadata: {
    processingTime: number;
    textLength?: number;
    openaiTokens?: number;
  };
}
// OpenAI analysis function
async function analyzeTextWithOpenAI(openai: OpenAI, text: string): Promise<any> {
// Use detailed extraction prompt
// Send to chat completions (NOT Files API)
// Main export function
export async function processPdfWithOpenAI(file: File): Promise<PdfExtractionResult> {
// Validate file (PDF, size limits)
// Extract text locally
// Initialize OpenAI client
// Analyze text with OpenAI
// Parse and return structured results
// Comprehensive error handling and logging
// /backend/emails/transaction.attachments.email.ts
import { processPdfWithOpenAI } from "../services/pdf.processing.service.ts";
// In the main function, after detecting PDFs:
for (const pdfFile of pdfFiles) {
const result = await processPdfWithOpenAI(pdfFile);
// Log results and handle errors
}
* Update services README with new service
* Update email trigger README with processing capability
* Add OPENAI_API_KEY to environment variables section
Critical Implementation Details
### OpenAI Prompt
const extractionPrompt = `
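The prompt body is cut off in this excerpt; the following is a hypothetical sketch based on the fields named in these instructions, not the author's actual prompt:

```ts
// Hypothetical sketch only; the document's actual extraction prompt is not included in this excerpt.
function buildExtractionPrompt(extractedText: string): string {
  return `
You are an expert financial document analyzer. Analyze the document text below and respond with
ONLY a JSON object in this exact shape:
{
  "amount": number | null,
  "date": "YYYY-MM-DD" | null,
  "vendor": string | null,
  "description": string | null,
  "type": "receipt" | "invoice" | null,
  "confidence": "high" | "medium" | "low"
}

Document text:
${extractedText}
`;
}
```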
### Common Pitfalls to Avoid
1. **Don't use OpenAI Files API** - Use regular chat completions
2. **Don't use file attachments** - Send text directly in prompt
3. **Import libraries dynamically** - Avoid top-level imports that cause Node.js issues
### Environment Variables
* Ensure `OPENAI_API_KEY` is set
* Document in README requirements section
* Log extracted text preview (first 200 chars)
* Log text length for verification
* Log OpenAI response and token usage
* Log processing time and success/failure rates
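A minimal sketch of the logging described above (the parameter names are assumptions):

```ts
// Hedged sketch of the logging checklist; parameter names are assumptions.
function logPdfProcessing(extractedText: string, completion: any, startTime: number, success: boolean) {
  console.log("Extracted text preview:", extractedText.slice(0, 200));
  console.log("Extracted text length:", extractedText.length);
  console.log("OpenAI response:", completion?.choices?.[0]?.message?.content);
  console.log("OpenAI tokens used:", completion?.usage?.total_tokens);
  console.log(`Processing ${success ? "succeeded" : "failed"} in ${Date.now() - startTime}ms`);
}
```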
User wants to process emails without PDF attachments by extracting financial data and "view in browser" URLs from the email content using OpenAI, then saving to the same Notion database.
Key Implementation Strategy
---------------------------
* **Use OpenAI for ALL extraction** (financial data + URLs) - do NOT use regex patterns
* **Maintain existing PDF processing** - only add email content processing as a new branch
* **Use same Notion database schema** - save URLs as external links in the "Files & media" property
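A hedged sketch of how the URL might be saved as an external link; only the "Files & media" property name comes from these instructions, the rest follows the standard Notion files-property shape:

```ts
// Hedged sketch; property names other than "Files & media" and the attachment name are assumptions.
function buildFilesProperty(viewUrl: string) {
  return {
    "Files & media": {
      files: [
        {
          name: "View in browser",
          type: "external" as const,
          external: { url: viewUrl }, // viewUrl is the URL extracted by OpenAI
        },
      ],
    },
  };
}
```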
// Interface name from this document; the fields before `type` and the `metadata`
// grouping name are assumptions, not shown in this snippet
interface EmailExtractionResult {
  success: boolean;
  data?: {
    amount: number | null; date: string | null; vendor: string | null; description: string | null;
    type: 'receipt' | 'invoice' | null;
    confidence: 'high' | 'medium' | 'low';
    viewUrl: string | null; // NEW: URL extracted by OpenAI
  };
  error?: string;
  metadata?: {
    contentLength: number;
    processingTime: number;
    openaiTokens?: number;
  };
}
**File**: `/backend/services/pdf.processing.service.ts`
Add new function that processes email content with OpenAI:
async function analyzeEmailContentWithOpenAI(openai: OpenAI, content: string): Promise<any> {
  const extractionPrompt = `
You are an expert financial document analyzer. I will provide you with email content that contains
financial transaction details and possibly a "view in browser" link.

${content}
`;
  return await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: extractionPrompt }],
  });
}
export async function processEmailContentWithOpenAI(content: string): Promise<EmailExtractionResult> {
// Implementation similar to processPdfWithOpenAI but for email content
// Include viewUrl in the returned data structure
}
Add import for new functions:
import { processPdfWithOpenAI, processEmailContentWithOpenAI } from "../services/pdf.processing.service.ts";
import { createNotionAttachment, uploadPdfToNotionPage, createNotionEmailAttachment } from "../services/notion.transactions.service.ts";
try {
// Process email content with OpenAI (extracts data + URL)
const emailProcessingResult = await processEmailContentWithOpenAI(emailContent);
if (emailProcessingResult.success && emailProcessingResult.data?.viewUrl) {
console.log('🔗 OpenAI extracted view URL:', emailProcessingResult.data.viewUrl);
}
// Save to Notion (URL comes from OpenAI result)
const notionResult = await createNotionEmailAttachment(
emailProcessingResult,
Key points to document:
* **AI-Powered URL Extraction**: OpenAI extracts URLs as part of data extraction
* **Dual Processing Modes**: PDF attachments vs email content
* **Same Database Schema**: Both modes use identical Notion fields
### DO:
* Let OpenAI handle all extraction (data + URLs) in one call
* Add email processing as a new branch, not a replacement
* Use the same field mapping for both PDF and email modes
1. Test existing PDF processing still works
2. Test email content processing with OpenAI
3. Verify URLs are correctly saved to Notion
4. Check error handling for various edge cases
* Emails with PDFs: Use existing PDF processing (unchanged)
* Emails without PDFs: Process email content with OpenAI, extract financial data + URLs, save to Notion (see the sketch after this list)
* Same comprehensive logging and error handling for both modes
* No breaking changes to existing functionality
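A hedged sketch of this dual-mode routing; the email-handler shape (text/html/attachments fields) is assumed, while the helper names come from these instructions:

```ts
// Hedged sketch of the dual-mode routing; the handler signature is an assumption.
import { processPdfWithOpenAI, processEmailContentWithOpenAI } from "../services/pdf.processing.service.ts";

export default async function (email: { text?: string; html?: string; attachments?: File[] }) {
  const pdfFiles = (email.attachments ?? []).filter((f) => f.type === "application/pdf");

  if (pdfFiles.length > 0) {
    // Emails with PDFs: existing PDF processing, unchanged
    for (const pdfFile of pdfFiles) {
      const result = await processPdfWithOpenAI(pdfFile);
      console.log("PDF processed:", result.success);
      // ...save to Notion as before
    }
  } else {
    // Emails without PDFs: process the email content with OpenAI instead
    const result = await processEmailContentWithOpenAI(email.text ?? email.html ?? "");
    console.log("Email content processed:", result.success, "viewUrl:", result.data?.viewUrl);
    // ...save to Notion, including the extracted viewUrl
  }
}
```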
* **No inference**: Do NOT assume or infer document type - only base on explicit keyword presence
### 3. OpenAI Prompt Enhancement
Modify the `analyzeTextWithOpenAI` function's extraction prompt:
**Add new instruction (#5):**
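The wording of the new instruction is not included in this excerpt; a hypothetical sketch consistent with the keyword-only rule above:

```ts
// Hypothetical wording for the new prompt instruction; the actual text is not shown in this excerpt.
const typeInstruction = `
5. "type": return "receipt" only if the word "receipt" appears explicitly in the document text,
   "invoice" only if the word "invoice" appears explicitly, and null otherwise. Do not infer the type.
`;
```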
* Do NOT modify the PDF text extraction logic
* Do NOT change the OpenAI client initialization
* Do NOT alter error handling patterns
* Do NOT modify the existing data fields (amount, date, vendor, description, confidence)
* ✅ Adds type field to TypeScript interface
* ✅ Updates OpenAI prompt with type detection instructions
* ✅ Includes type in JSON response format specification
* ✅ Adds type to console logging output
## 02-openai.md
"z-ai/glm-4.5-air:free",
// "morph/morph-v3-large",
// "openai/gpt-4o-mini",
// "anthropic/claude-3-haiku",
// "meta-llama/llama-3.1-8b-instruct",
Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a fresh table.
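A minimal sketch of that tip, assuming Val Town's standard SQLite helper (the table and column names here are hypothetical):

```ts
// Hedged sketch; assumes the std sqlite helper, table/column names are hypothetical.
import { sqlite } from "https://esm.town/v/std/sqlite";

// After changing the schema, create a fresh table under a new name (_2) instead of altering the old one.
await sqlite.execute(`
  CREATE TABLE IF NOT EXISTS transactions_2 (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    amount REAL,
    vendor TEXT,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
  )
`);
```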
### OpenAI
```ts
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
{ role: "user", content: "Say hello in a creative way" },