Search results for "openai": 3,380 total, 3,285 in code.
```
├── backend/
│   └── index.ts      # Main API server with OpenAI integration
├── frontend/
│   ├── index.html    # Main application interface
```
## Technology Stack
- **Backend**: Hono + OpenAI API
- **Frontend**: React + TailwindCSS
- **AI**: GPT-4o-mini for concept explanations
```ts
import { Hono } from "https://esm.sh/hono@3.11.7";
import { OpenAI } from "https://esm.town/v/std/openai";
import { readFile, serveFile } from "https://esm.town/v/std/utils@85-main/index.ts";
import type { ConceptRequest, ConceptResponse, ErrorResponse } from "../shared/types.ts";

const app = new Hono();
const openai = new OpenAI();

// Serve static files
// ...

// Inside the concept-explanation handler (the route definition and the full
// prompt text are elided in the search snippet):
const prompt = `... Return only valid JSON without any markdown formatting.`;

const completion = await openai.chat.completions.create({
  messages: [{ role: "user", content: prompt }],
  model: "gpt-4o-mini",
});

const responseText = completion.choices[0]?.message?.content;
if (!responseText) {
  throw new Error("No response from OpenAI");
}

let conceptResponse: ConceptResponse;
try {
  conceptResponse = JSON.parse(responseText);
} catch (parseError) {
  console.error("Failed to parse OpenAI response:", responseText);
  throw new Error("Invalid response format from AI");
}
```
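For context, a client call to this handler might look roughly like the sketch below; the `/api/explain` route and the request body shape are assumptions, since neither appears in the search snippet.

```ts
// Hypothetical usage -- route path and body shape are illustrative only.
const res = await fetch("/api/explain", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ concept: "closures in JavaScript" }),
});
const explanation = await res.json();
console.log(explanation);
```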
- **URL Content Extraction**: Fetches and parses HTML content from URLs
- **Text Summarization**: Uses the OpenAI API to generate concise summaries (see the sketch after this list)
- **Error Handling**: Comprehensive error handling for various failure scenarios
- **Static File Serving**: Serves frontend assets and shared utilities
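The fetch-then-summarize flow described above might look roughly like the sketch below. It assumes Val Town's std/openai client; the `summarizeUrl` name, the HTML-stripping regexes, and the length threshold are illustrative, not taken from the project.

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

// Minimal sketch: fetch a page, strip it to plain text, and ask the model for a summary.
async function summarizeUrl(url: string): Promise<string> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Could not fetch ${url}: ${res.status}`);
  const html = await res.text();

  // Crude HTML-to-text step; a real implementation would use a proper parser.
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();

  if (text.length < 200) throw new Error("Content too short to summarize");

  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: `Summarize this page concisely:\n\n${text.slice(0, 8000)}` }],
  });
  return completion.choices[0]?.message?.content ?? "";
}
```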
## Environment Variables
- `OPENAI_API_KEY` - Required for AI summarization
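As a rough illustration, the backend could fail fast when the key is missing; this sketch assumes the key is surfaced to the val as a standard environment variable.

```ts
// Optional startup guard (assumption: the key is readable via Deno.env).
if (!Deno.env.get("OPENAI_API_KEY")) {
  throw new Error("OPENAI_API_KEY is not set; AI summarization will not work.");
}
```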
## Error Handling
The API handles various error scenarios (one possible mapping to HTTP responses is sketched after this list):
- Invalid URLs or unreachable content
- OpenAI API failures
- Malformed requests
- Content too short to summarize
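Illustrative only: one way a Hono route could map those failure modes to HTTP status codes. The `/api/summarize` route, the status choices, and the `summarizeUrl` helper (from the earlier sketch) are assumptions, not taken from the project.

```ts
import { Hono } from "https://esm.sh/hono@3.11.7";

const app = new Hono();

app.post("/api/summarize", async (c) => {
  try {
    const { url } = await c.req.json();
    if (!url || !URL.canParse(url)) {
      return c.json({ error: "Invalid URL" }, 400); // malformed request / bad URL
    }
    const summary = await summarizeUrl(url); // hypothetical helper from the sketch above
    return c.json({ summary });
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    if (message.includes("too short")) {
      return c.json({ error: message }, 422); // content too short to summarize
    }
    return c.json({ error: "Summarization failed", details: message }, 502); // fetch or OpenAI failure
  }
});

export default app.fetch;
```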
```ts
import { Hono } from "https://esm.sh/hono@3.11.7";
import { readFile, serveFile } from "https://esm.town/v/std/utils@85-main/index.ts";
import { OpenAI } from "https://esm.town/v/std/openai";
import type { AITextRequest, AITextResponse, MemeTemplate } from "../shared/types.ts";

// Inside the AI-caption route handler (route definition elided in the snippet):
try {
  const request: AITextRequest = await c.req.json();
  const openai = new OpenAI();
  const prompt = `Generate funny meme text for a "${request.templateName}" meme template.
Return ONLY a JSON object with "topText" and "bottomText" fields. No other text.`;
  const completion = await openai.chat.completions.create({
    messages: [
      // The system message is truncated in the snippet; completed here with an assumed ending.
      { role: "system", content: "You are a hilarious meme generator that creates viral-worthy captions." },
      { role: "user", content: prompt },
    ],
    model: "gpt-4o-mini", // model not shown in the snippet
  });
  // ... parse the completion into an AITextResponse (elided)
} catch (error) {
  // ...
}
```
- **Backend**: Hono (TypeScript API framework)
- **Frontend**: React with TypeScript
- **AI**: OpenAI GPT for funny text generation
- **Styling**: TailwindCSS
- **Canvas**: HTML5 Canvas for meme generation
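As a rough illustration of the Canvas step, drawing the AI-generated captions onto a template image could look like the sketch below; the `drawMeme` name, font, and layout values are assumptions, not taken from the project.

```ts
// Draw top/bottom captions onto a canvas sized to the template image.
function drawMeme(
  canvas: HTMLCanvasElement,
  image: HTMLImageElement,
  topText: string,
  bottomText: string,
): void {
  const ctx = canvas.getContext("2d");
  if (!ctx) return;
  canvas.width = image.width;
  canvas.height = image.height;
  ctx.drawImage(image, 0, 0);

  ctx.font = `bold ${Math.floor(canvas.height / 10)}px Impact, sans-serif`;
  ctx.textAlign = "center";
  ctx.fillStyle = "white";
  ctx.strokeStyle = "black";
  ctx.lineWidth = 4;

  const x = canvas.width / 2;
  ctx.textBaseline = "top";
  ctx.strokeText(topText.toUpperCase(), x, 10);
  ctx.fillText(topText.toUpperCase(), x, 10);
  ctx.textBaseline = "bottom";
  ctx.strokeText(bottomText.toUpperCase(), x, canvas.height - 10);
  ctx.fillText(bottomText.toUpperCase(), x, canvas.height - 10);
}
```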
Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a fresh table.
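For illustration, that rename approach could look roughly like the sketch below with Val Town's std/sqlite; the `messages_2` table name and its columns are made-up examples.

```ts
import { sqlite } from "https://esm.town/v/std/sqlite";

// Rather than ALTER-ing the old table, create a renamed copy with the new schema.
await sqlite.execute(`
  CREATE TABLE IF NOT EXISTS messages_2 (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    content TEXT NOT NULL,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
  )
`);
```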
### OpenAI

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  // The snippet is truncated here; the model choice below is an assumption.
  model: "gpt-4o-mini",
});
console.log(completion.choices[0]?.message?.content);
```
```ts
export default async function(req: Request) {
  // --- Dynamic Imports ---
  const { OpenAI } = await import("https://esm.town/v/std/openai"); // Updated import path
  const { z } = await import("npm:zod"); // For input validation

  // ... (type definitions and the crux system prompt are elided in the search snippet)

  // --- Helper Function: Call OpenAI API ---
  async function callOpenAIForCrux(
    openai: OpenAI, // Instance passed in
    systemPrompt: string,
    userMessage: string,
  ): Promise<object | ErrorResponse> { // Returns parsed JSON object or an ErrorResponse
    try {
      const response = await openai.chat.completions.create({
        model: "gpt-4o", // Or your preferred model
        messages: [
          { role: "system", content: systemPrompt },
          { role: "user", content: userMessage },
        ],
      });
      const content = response.choices[0]?.message?.content ?? "";
      try {
        return JSON.parse(content) as CruxAnalysisResponse; // Assume it's the correct type
      } catch (parseError) {
        console.error("OpenAI JSON Parse Error:", parseError, "Raw Content:", content);
        return { error: `AI response was not valid JSON. Raw: ${content.substring(0, 200)}...` };
      }
    } catch (error) {
      console.error("OpenAI API call failed:", error);
      return { error: "Error communicating with AI model.", details: error.message };
    }
  }

  // Analysis wrapper (the function name is not shown in the snippet; "analyzeInstruction" is a stand-in).
  async function analyzeInstruction(
    userInstruction: string,
  ): Promise<object | ErrorResponse> {
    const openai = new OpenAI(); // Initialize with key
    console.log(`Analyzing instruction: "${userInstruction}"`);
    const result = await callOpenAIForCrux(openai, cruxSystemPrompt, userInstruction);

    // Basic validation of the result structure (can be enhanced with Zod on the server side too)
    if ("error" in result) {
      return result;
    }
    // The second property check is truncated in the snippet; "crux_analysis" is a stand-in name.
    if (!result || typeof result !== "object" || !("original_instruction" in result) || !("crux_analysis" in result)) {
      console.error("Invalid structure from OpenAI:", result);
      return { error: "AI returned an unexpected data structure.", details: result };
    }
    return result;
  }

  // ... (request parsing, Zod validation, and the call that produces `cruxDataOrError` are elided)

  return new Response(JSON.stringify(cruxDataOrError), {
    status: (cruxDataOrError.error.includes("Server configuration error")
        || cruxDataOrError.error.includes("OpenAI API Key"))
      ? 500
      : 400, // Internal or Bad Request
  });
}
```