Search results for `openai`: 4,002 matches (3,898 in code, 2,427 ms)
```ts
import type { CapsuleOverviewOutput } from "../shared/types.ts";
import { useTheme, useToolOutput } from "./hooks.ts";
import { getOpenAI } from "./openai-types.ts";

function EmptyState({ isDark }: { isDark: boolean }) {
  // component body elided in the search snippet
}

async function handleGenerateList() {
  await getOpenAI()?.sendFollowUpMessage({
    // prompt text is truncated in the search snippet
    prompt: `Run the capsule_generate_list tool for capsule ${capsule.capsuleId} to finalize t…`,
  });
}
```
```ts
// hooks.ts — the leading import list is truncated in the search snippet;
// useState and useSyncExternalStore are both used below.
import { useState, useSyncExternalStore } from "react";
import type { SetStateAction } from "react";
import type { OpenAiGlobals, SetGlobalsEvent } from "./openai-types.ts";
import { getOpenAI, SET_GLOBALS_EVENT_TYPE } from "./openai-types.ts";

type UnknownObject = Record<string, unknown>;

export function useOpenAiGlobal<K extends keyof OpenAiGlobals>(
  key: K,
): OpenAiGlobals[K] {
  return useSyncExternalStore(
    (onChange) => {
      // subscription body elided in the snippet; a real implementation
      // listens for SET_GLOBALS_EVENT_TYPE and returns its unsubscribe
      return () => {};
    },
    () => {
      const api = getOpenAI();
      if (!api) {
        throw new Error("OpenAI widget runtime is not available");
      }
      return api[key];
    },
  );
}

export function useToolOutput<T = UnknownObject>(): T | null {
  return useOpenAiGlobal("toolOutput") as T | null;
}

export function useTheme(): "light" | "dark" {
  return useOpenAiGlobal("theme");
}

// Hook name inferred; the snippet shows only the tail of the signature.
export function useWidgetState<T = UnknownObject>(
  defaultState: T | (() => T),
): readonly [T, (state: SetStateAction<T>) => void] {
  const widgetStateFromWindow = useOpenAiGlobal("widgetState") as T | null;
  const [widgetState, _setWidgetState] = useState<T>(() =>
    widgetStateFromWindow ??
      (typeof defaultState === "function"
        ? (defaultState as () => T)()
        : defaultState)
  );
  const setWidgetState = (stateOrUpdater: SetStateAction<T>) =>
    _setWidgetState((prev) => {
      const newState = typeof stateOrUpdater === "function"
        ? (stateOrUpdater as (prev: T) => T)(prev)
        : stateOrUpdater;
      // persist to the host so state survives widget re-renders
      getOpenAI()?.setWidgetState(newState);
      return newState;
    });
  return [widgetState, setWidgetState] as const;
}
```
# Capsule Wardrobe Shopping List
Plan intentional capsules with persistent Val Town templates and OpenAI-completion-friendly tools.
## Tech Stack
- **Backend**: Hono + MCP Lite + TypeScript capsule templates
- **Widget**: React 19 + TanStack Router + OpenAI Apps SDK hydration
- **Shared Types**: Zod schemas for structured content and widget state
Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a new table with the updated schema.
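The rename pattern from that note can be sketched as follows (the `todos` table and its columns are hypothetical):

```typescript
// Hypothetical table: instead of ALTER TABLE, bump a version suffix so the
// CREATE statement runs against a fresh table with the new schema.
const TODOS_TABLE = "todos_2"; // was "todos"; renamed when the schema changed

const createTodos = `
  CREATE TABLE IF NOT EXISTS ${TODOS_TABLE} (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    done INTEGER DEFAULT 0
  )
`;
```

Old data stays in the previous table, so a one-off copy can migrate it if needed.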
### OpenAI

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4o-mini",
  max_tokens: 30,
});
console.log(completion.choices[0].message.content);
```

```ts
import { OpenAI } from "npm:openai@4";

export default async function (req: Request): Promise<Response> {
  const { image, prompt } = await req.json();
  const openai = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY") });
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{
      // message body completed from the truncated snippet with a standard
      // vision-style payload
      role: "user",
      content: [
        { type: "text", text: prompt },
        { type: "image_url", image_url: { url: image } },
      ],
    }],
  });
  return Response.json(response.choices[0].message);
}
```
"https://console.groq.com/docs/litellm.md",
"https://console.groq.com/docs/livekit.md",
"https://console.groq.com/docs/openai.md",
"https://console.groq.com/docs/tavily.md",
// "https://console.groq.com/docs/toolhouse.md",
// "https://console.groq.com/docs/model/moonshotai/kimi-k2-instruct.md",
// "https://console.groq.com/docs/model/moonshotai/kimi-k2-instruct-0905.md",
// "https://console.groq.com/docs/model/openai/gpt-oss-120b.md",
// "https://console.groq.com/docs/model/openai/gpt-oss-20b.md",
// "https://console.groq.com/docs/model/openai/gpt-oss-safeguard-20b.md",
// "https://console.groq.com/docs/model/playai-tts.md",
// "https://console.groq.com/docs/model/playai-tts-arabic.md",
## Token Counting
Token counts are calculated using [tiktoken](https://github.com/openai/tiktoken) with the `gpt-4` encoding, which covers:
- GPT-4
- GPT-3.5-turbo
- Many other OpenAI models
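Exact counts require tiktoken's encoder; for a dependency-free estimate, OpenAI's usual rule of thumb (roughly 4 characters per token for English text) can be sketched as:

```typescript
// Rough token estimate: ~4 characters per token for English text.
// This is only an approximation; exact counts need tiktoken's gpt-4 encoding.
function approxTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}
```

For example, `approxTokenCount("hello world")` gives 3 (11 characters / 4, rounded up).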
Token counts are:
| Provider | Strategy file | Latency | Cost | Notes |
|----------|------|-------|------|------|
| **Mixedbread** | `mixedbread-embeddings-cosine.ts` | ~50-100ms | Free tier | High quality, 1024 dims |
| **OpenAI** | `openai-cosine.ts` | ~100-200ms | Paid | High quality, reliable |
| **HuggingFace** | `hf-inference-qwen3-cosine.ts` | ~150-300ms | Free tier | Qwen3-8B model |
| **Cloudflare** | `cloudflare-bge-cosine.ts` | ~50-150ms | Free tier | Works on CF Workers |
- `types.ts` - Type definitions for search
- `utils.ts` - Shared utilities (cosine similarity, snippets)
- Multiple strategy files (transformers-local-onnx, mixedbread, openai, etc.)
- **`answer/`** - Answer strategies with pluggable RAG implementations:
- `index.ts` - Main entry point, switches between strategies
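The cosine similarity shared via `utils.ts` can be sketched minimally as follows (assuming dense `number[]` embeddings; this is an illustration, not the repo's exact code):

```typescript
// Cosine similarity between two dense embedding vectors.
// Returns a value in [-1, 1]; higher means more similar.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

The dimension check matters because the strategies below emit vectors of different sizes (384 to 1536 dims), which must never be compared directly.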
```ts
// import { searchStrategy, generateEmbeddings } from "./placeholder.ts";
// import { searchStrategy, generateEmbeddings } from "./jigsawstack-orama.ts"; // ~550ms query embedding
// import { searchStrategy, generateEmbeddings } from "./openai-orama.ts"; // ~100-200ms query embedding
// import { searchStrategy, generateEmbeddings } from "./openai-cosine.ts"; // ~100-200ms query embedding
// import { searchStrategy, generateEmbeddings } from "./mixedbread-embeddings-cosine.ts"; // Mixedbread API + local cosine
// import { searchStrategy, generateEmbeddings } from "./mixedbread.ts"; // Mixedbread Stores (managed)
```
| Strategy | Latency | Cost | Setup | Best for |
|----------|---------|------|-------|----------|
| **transformers-cosine** | ~160-180ms | Free | None (auto-download) | Development |
| **mixedbread-embeddings** | ~50-100ms | Free tier | API key | High accuracy |
| **openai-cosine** | ~100-200ms | Paid | API key | Reliability |
| **hf-inference-qwen3** | ~150-300ms | Free tier | API key | Best accuracy |
| **cloudflare-bge** | ~50-150ms | Free tier | API key | Cloudflare Workers |
- **`transformers-cosine.ts`** - Auto-download ONNX models
- **`mixedbread-embeddings-cosine.ts`** - Mixedbread API + local cosine
- **`openai-cosine.ts`** - OpenAI embeddings + local cosine
- **`hf-inference-qwen3-cosine.ts`** - HuggingFace Qwen3-8B embeddings
- **`cloudflare-bge-cosine.ts`** - Cloudflare Workers AI
├─ Need the best accuracy?
│ └─ YES → Use mixedbread-embeddings-cosine.ts or openai-cosine.ts
└─ Want managed search (no embeddings management)?
| Strategy | Model download | Model load | Query embed | Total | Network |
|----------|----------------|------------|-------------|-------|---------|
| **transformers-cosine** | ~3-5s | ~150ms | ~10-30ms | ~160-180ms | ✅ First run only |
| **mixedbread-embeddings** | N/A | N/A | ~50-100ms | ~50-100ms | ✅ Every query |
| **openai-cosine** | N/A | N/A | ~100-200ms | ~100-200ms | ✅ Every query |
| **hf-inference-qwen3** | N/A | N/A | ~150-300ms | ~150-300ms | ✅ Every query |
| **cloudflare-bge** | N/A | N/A | ~50-150ms | ~50-150ms | ✅ Every query |
| Strategy | Cost | Free quota | Notes |
|----------|------|-----------|-------|
| **transformers-cosine** | $0 | ∞ | 100% free, runs locally |
| **mixedbread-embeddings** | $0-$ | Generous | Free tier: 150 req/min, 100M tokens/mo |
| **openai-cosine** | $$ | Limited | $0.0001/1K tokens (text-embedding-3-small) |
| **hf-inference-qwen3** | $0 | Generous | Free tier: 1000 req/day |
| **cloudflare-bge** | $0 | Generous | Free tier: 10,000 req/day |
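As a sanity check on the openai-cosine pricing above ($0.0001 per 1K tokens for text-embedding-3-small), the cost of embedding a corpus can be estimated as:

```typescript
// Embedding cost at $0.0001 per 1K tokens (text-embedding-3-small,
// per the pricing row above).
function embeddingCostUSD(tokens: number): number {
  return (tokens / 1000) * 0.0001;
}
```

So embedding a 500K-token corpus costs about $0.05, which is why the strategy is marked "$$" only relative to the free tiers.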
| Strategy | Model | Dims | MTEB (approx.) | Notes |
|----------|-------|------|----------------|-------|
| **transformers-cosine** | all-MiniLM-L6-v2 | 384 | ~58 | Same as local |
| **mixedbread-embeddings** | mxbai-embed-large-v1 | 1024 | ~64 | Higher quality |
| **openai-cosine** | text-embedding-3-small | 1536 | ~62 | Reliable, tested |
| **hf-inference-qwen3** | Qwen3-Embedding-8B | 768 | ~65 | Very high quality |
| **cloudflare-bge** | bge-large-en-v1.5 | 1024 | ~64 | Good quality |
```typescript
// Before
// import { searchStrategy, generateEmbeddings } from "./openai-cosine.ts";
// After
```

| Scenario | Recommended strategy |
|----------|----------------------|
| **Best for Cloudflare** | cloudflare-bge-cosine |
| **Best for Val.town** | transformers-cosine |
| **Most Reliable** | openai-cosine |
| **Fully Managed** | mixedbread or jigsawstack |