* Users input their birth details, a sign to focus on, and a life domain.
* The backend then uses the "Astrologer Prompt" (a detailed system prompt)
* to query an OpenAI model, which generates a comprehensive astrological report.
*
* Core Logic:
* 2. Backend (Deno Val Function):
* - Receives these inputs.
* - Constructs a user message for the OpenAI API. This message includes
* the raw birth details, focus sign, domain, etc.
* - Uses the **ENTIRE** "Astrologer Prompt" (with {{sign}} and {{domain}}
* placeholders filled) as the system prompt for an OpenAI API call.
* - Calls a powerful OpenAI model (e.g., gpt-4o).
* - Receives the structured JSON astrological report from OpenAI.
* - Sends this report back to the client for display.
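The backend steps above can be sketched as a single request-assembly helper. This is a minimal sketch only: the template text, the input field names, and `buildChatRequest` itself are hypothetical stand-ins for the Val's actual code.

```typescript
// Hypothetical stand-in for the full Astrologer Prompt template.
const ASTROLOGER_PROMPT_TEMPLATE =
  "You are an expert astrologer. Focus on {{sign}} within the {{domain}} domain.";

// Assemble the chat-completion request body from the client's inputs.
function buildChatRequest(sign: string, domain: string, birthDetails: Record<string, string>) {
  // Fill the {{sign}} and {{domain}} placeholders in the system prompt.
  const systemPrompt = ASTROLOGER_PROMPT_TEMPLATE
    .replace(new RegExp("{{sign}}", "g"), sign)
    .replace(new RegExp("{{domain}}", "g"), domain);
  // The user message carries the raw birth details for the model to interpret.
  const userMessage = JSON.stringify({
    birth_details: birthDetails,
    focus_sign: sign,
    focus_domain: domain,
  });
  return {
    model: "gpt-4o",
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userMessage },
    ],
  };
}
```

The resulting object is what would be handed to `openai.chat.completions.create(...)`.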
*
* - Date context: May 28, 2025. The LLM primed by the prompt is assumed to have
*   access to or knowledge of transit data for this date.
* - OpenAI API Key: An `OPENAI_API_KEY` environment variable must be available
* in the Val Town environment for `std/openai` to work.
*
* Inspired by the structure of the "Goal-Oriented Multi-Agent Stock Analysis Val".
// --- Imports ---
import { OpenAI } from "https://esm.town/v/std/openai";
// NOTE: Deno.env is used directly for environment variables.
// --- THE ASTROLOGER PROMPT (System Prompt for OpenAI) ---
// This will be used by the backend to instruct the AI.
// Placeholders {{sign}} and {{domain}} will be replaced dynamically.
// --- Helper Function: Call OpenAI API (Adapted - Robust error handling retained) ---
async function callOpenAI(
  systemPrompt: string,
  userMessage: string,
  model: string,
  isJsonOutputRequired: boolean,
  log: (msg: string) => void,
) {
  const callId = crypto.randomUUID().slice(0, 8); // short ID for correlating log lines
  // A simple hash of the prompt is less useful when {{placeholders}} change the content significantly;
  // consider logging a snippet of the system prompt if needed for debugging.
  const logPrefix = `OpenAI Call [${callId}] (${model}, JSON: ${isJsonOutputRequired})`;
  log(
    `[INFO] ${logPrefix}: Initiating... System prompt (template used). User message snippet: ${
      userMessage.substring(0, 100)
    }`,
  );
try {
const openai = new OpenAI(); // API Key from environment
const response = await openai.chat.completions.create({
model: model,
      messages: [{ role: "system", content: systemPrompt }, { role: "user", content: userMessage }],
      ...(isJsonOutputRequired ? { response_format: { type: "json_object" } } : {}),
    });
const content = response.choices?.[0]?.message?.content;
if (!content) {
log(`[ERROR] ${logPrefix}: OpenAI API returned unexpected or empty response structure.`);
throw new Error("Received invalid or empty response content from AI model.");
}
log(`[SUCCESS] ${logPrefix}: OpenAI call successful.`);
return { role: "assistant", content: content };
} catch (error) {
    const statusCode = (error as any)?.status;
    const errorResponseData = (error as any)?.response?.data ?? (error as any)?.error;
    let errorMessage = (error as any)?.message || "Unknown OpenAI error.";
    if (errorResponseData?.message) {
      errorMessage = `OpenAI Error (${statusCode || "unknown status"}): ${errorResponseData.message}`;
    } else if (errorResponseData?.error?.message) {
      errorMessage = `OpenAI Error (${statusCode || "unknown status"}): ${errorResponseData.error.message}`;
    }
// ... (retain other specific error message constructions from original Val)
  const populatedSystemPrompt = ASTROLOGER_SYSTEM_PROMPT_TEMPLATE
    .replace(new RegExp("{{sign}}", "g"), inputs.focusSign)
    .replace(new RegExp("{{domain}}", "g"), inputs.focusDomain);
  // 2. Construct the User Message for the OpenAI API call.
  // The Astrologer Prompt expects `birth_chart_data`. We pass the raw birth details
  // and let the LLM (primed with the Astrologer Prompt) handle interpretation.
  // (Field names below are illustrative of the original structure.)
  const userMessageJson = JSON.stringify(
    { birth_details: inputs.birthDetails, focus_sign: inputs.focusSign, focus_domain: inputs.focusDomain },
  );
// 3. Call OpenAI
log("[STEP] Calling OpenAI with Astrologer Prompt...");
// Using gpt-4o as it's capable and the astrological prompt is complex.
// The ASTROLOGER_SYSTEM_PROMPT_TEMPLATE implies the model should generate JSON.
const aiResponse = await callOpenAI(populatedSystemPrompt, userMessageJson, "gpt-4o", true, log);
// 4. Parse and Return Result
// (e.g., if the AI itself couldn't perform the analysis and returned an error structure as
// The ASTROLOGER_SYSTEM_PROMPT_TEMPLATE does not explicitly define an error structure from
// but callOpenAI returns its own {"error": "..."} if the call itself failed.
if (parsedAiResponse?.error && aiResponse.role === "system") { // Error from callOpenAI wrapper
log(`[ERROR] OpenAI call wrapper reported an error: ${parsedAiResponse.error}`);
return { error: "Failed to get report from Astrologer AI.", details: parsedAiResponse.error };
}
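The parse-and-dispatch step described above can be sketched as follows. `parseAiResult` and its return shape are hypothetical, assuming the `callOpenAI` wrapper reports its own failures as `{ "error": "..." }`:

```typescript
// Hypothetical sketch: classify the raw AI output as a report or an error.
function parseAiResult(raw: string): { report?: object; error?: string } {
  try {
    const parsed = JSON.parse(raw);
    // The wrapper's own failures are assumed to arrive as { "error": "..." }.
    if (parsed && typeof parsed === "object" && "error" in parsed) {
      return { error: String(parsed.error) };
    }
    return { report: parsed };
  } catch {
    return { error: "AI response was not valid JSON." };
  }
}
```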
Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a fresh table.
### OpenAI
```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4o", // model choice illustrative
});
console.log(completion.choices[0]?.message?.content);
```
import { OpenAI } from "https://esm.town/v/std/openai";
import type { LLMRequest, LLMResponse, SystemDesignRequest, SOARESystem, SystemComponent } from
export class SOAREBrain {
private openai: OpenAI;
constructor() {
this.openai = new OpenAI();
}
async reason(request: LLMRequest): Promise<LLMResponse> {
try {
const completion = await this.openai.chat.completions.create({
model: "gpt-4o",
messages: [
import { OpenAI } from "https://esm.town/v/std/openai";
import * as cheerio from "https://esm.sh/cheerio@1.0.0-rc.12";
import { EarningOpportunity, WebPageContent, AnalysisResult } from "../shared/types.ts";
const openai = new OpenAI();
export async function analyzeWebPage(url: string, focus?: string): Promise<AnalysisResult> {
try {
const completion = await openai.chat.completions.create({
model: "gpt-4o-mini",
messages: [
Get/Kay/Main.tsx
* Uses 'npm:pdf.js-extract' for PDF extraction.
* Serves HTML UI & API endpoint from the same Val.
* Assumes 'openai' secret is set in Val Town environment variables.
*
* Last Updated: {{current_date}} (Templated Version)
* max_pdf_size_mb: {{max_pdf_size_mb}}, // e.g., 10
* text_truncation_length: {{text_truncation_length}}, // e.g., 25000
* openai_model_name: "{{openai_model_name}}", // e.g., "gpt-4o"
* contact_form_placeholders_en: { name: "Your Name", email: "Your Email", message: "Message" },
* contact_form_placeholders_es: { name: "Tu Nombre", email: "Tu Correo", message: "Mensaje" },
export default async function(req: Request) {
// --- Dynamic Imports (Unchanged) ---
const { OpenAI } = await import("https://esm.town/v/std/openai");
// const { z } = await import("npm:zod"); // Zod might be optional if config is trusted
const { fetch } = await import("https://esm.town/v/std/fetch");
// --- CONFIGURATION (These would be replaced by the template variables at generation time) --
  const APP_CONFIG = {{app_config_json}}; // e.g., { openai_model_name: "gpt-4o", text_truncation_length: 25000 }
  const ANALYSIS_AGENTS = {{analysis_agents_json}}; // Array of agent objects
async function extractPdfTextNative(data: ArrayBuffer, fileName: string, log: LogEntry[]): Pro
// --- Helper Function: Call OpenAI API (Uses APP_CONFIG for model) ---
async function callOpenAI(
openai: OpenAI,
systemPrompt: string,
userMessage: string,
modelFromConfig = APP_CONFIG.openai_model_name || "gpt-4o", // Use configured model
expectJson = false,
): Promise<{ role: "assistant" | "system"; content: string | object }> {
/* ... original logic, but use modelFromConfig ... */
const model = modelFromConfig;
// ... rest of the original callOpenAI function
try {
const response = await openai.chat.completions.create({
model,
        messages: [{ role: "system", content: systemPrompt }, { role: "user", content: userMessage }],
log: LogEntry[],
): Promise<LogEntry[]> {
const openai = new OpenAI();
log.push({ agent: "System", type: "step", message: "Workflow started." });
// ... initial logging of input type ...
// If chaining is needed, {{previous_output}} could be another placeholder in prompts.
const agentResult = await callOpenAI(
openai,
agentSystemPrompt, // The agent's specific prompt
truncText, // User message is the doc text itself, or could be empty if the prompt is self-contained
APP_CONFIG.openai_model_name,
agentConfig.expects_json
);
* 1. Define Application Configuration:
* Fill in the `{{app_config_json}}` placeholder with general settings for your app
* (e.g., OpenAI model, max file size, default language).
*
* 2. Define Analysis Agents:
* - `agent_id`: A unique machine-readable ID.
* - `agent_name_en`/`agent_name_es`: Human-readable names for UI and logs.
* - `system_prompt`: The OpenAI prompt for this agent. Can use `{{document_text}}`.
* - `expects_json`: Boolean, if the prompt asks OpenAI for JSON output.
* - `ui_display_info`: How to render this agent's results:
* - `card_title_en`/`card_title_es`: Title for the results card.
* and `{{app_config.document_format_accepted_label}}` (e.g. "PDF") for UI text.
*
* 5. OpenAI API Key:
* Ensure your environment (e.g., Val Town secrets) has the `OPENAI_API_KEY` (or the appropriate
* environment variable name for the `OpenAI` library) set.
*
* 6. Deployment:
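Putting steps 1–2 together, one entry of the agents array might look like the following. Every value here is illustrative, not taken from a real configuration:

```typescript
// Illustrative example of one entry for the {{analysis_agents_json}} array.
const exampleAgent = {
  agent_id: "summary_extractor",            // unique machine-readable ID
  agent_name_en: "Summary Extractor",       // human-readable name (English)
  agent_name_es: "Extractor de Resúmenes",  // human-readable name (Spanish)
  system_prompt: "Summarize this document:\n\n{{document_text}}",
  expects_json: false,                      // this prompt asks for plain text, not JSON
  ui_display_info: {
    card_title_en: "Summary",
    card_title_es: "Resumen",
  },
};
```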
### Tech Stack
- **Backend**: Hono.js for API routing
- **AI**: OpenAI GPT-4o-mini for content analysis
- **Web Scraping**: Cheerio for HTML parsing
- **Frontend**: Vanilla JavaScript with TailwindCSS
## 🔧 Environment Setup
The analyzer uses OpenAI's API, which is automatically configured in Val Town. No additional setup is required.
## 📊 What It Analyzes
import { Hono } from "https://esm.sh/hono@3.11.7";
import { OpenAI } from "https://esm.town/v/std/openai";
import { readFile } from "https://esm.town/v/std/utils/index.ts";
import { analyzeWebPage } from "./analyzer.ts";
}
if (error.message.includes("OpenAI") || error.message.includes("API key")) {
return c.json({ error: "AI analysis service requires API key configuration. Please contact the administrator." }, 500);
}
Get/select/main.ts
// SERVER-SIDE LOGIC (TypeScript)
// =============================================================================
import { OpenAI } from "https://esm.town/v/std/openai";
// --- Configuration ---
maskSrc?: string;
}
interface OpenAIResponse {
races: RaceInfo[];
}
];
// --- OpenAI Generation Function ---
async function generateRaceDataWithOpenAI(): Promise<RaceInfo[]> {
const openai = new OpenAI();
const numToRequest = Math.max(1, NUM_CARDS_TO_GENERATE);
const prompt =
Return STRICTLY as a single JSON object: { "races": [ { race1 }, { race2 }, ... ] }. No introductory text or commentary.
try {
console.info(`Requesting ${numToRequest} race data generation from OpenAI...`);
const completion = await openai.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: prompt }],
});
const rawContent = completion.choices[0]?.message?.content;
if (!rawContent) throw new Error("OpenAI returned an empty response message.");
let parsedJson;
try {
  parsedJson = JSON.parse(rawContent);
} catch (parseError) {
console.error("Failed to parse OpenAI JSON response:", parseError);
console.error("Raw OpenAI response:", rawContent);
throw new Error(`JSON Parsing Error: ${parseError.message}`);
}
) {
console.warn(
`OpenAI response JSON failed validation for ${numToRequest} races:`,
JSON.stringify(parsedJson, null, 2),
);
throw new Error(
"OpenAI response JSON structure, count, data types, color format, hint value, or mask UR
);
}
const generatedData = (parsedJson as OpenAIResponse).races.map(race => ({
...race,
borderAnimationHint: race.borderAnimationHint || "none",
}));
console.info(`Generated and validated ${generatedData.length} races from OpenAI.`);
return generatedData;
} catch (error) {
console.error("Error fetching or processing data from OpenAI:", error);
console.warn("Using fallback race data due to the error.");
return fallbackRaceData.slice(0, numToRequest).map(race => ({
// --- Main HTTP Handler (Val Town Entry Point) ---
export default async function server(request: Request): Promise<Response> {
const activeRaceData = await generateRaceDataWithOpenAI();
const css = `
import { OpenAI } from "https://esm.town/v/std/openai";
import { type Context, Hono } from "npm:hono";
import { paymentMiddleware } from "npm:x402-hono";
const openai = new OpenAI();
const app = new Hono();
app.get("/jokes", async (c: Context) => {
const completion = await openai.chat.completions.create({
messages: [
      { role: "user", content: "Tell a punny programming joke" },
    ],
    model: "gpt-4o-mini", // model choice illustrative
  });
  return c.json({ joke: completion.choices[0]?.message?.content });
});