Search results for "openai": 3,305 results (3,210 in code).
# OpenAI Proxy
This OpenAI API proxy injects Val Town's API keys. For usage documentation, see
https://www.val.town/v/std/openai. Adapted from
https://blog.r0b.io/post/creating-a-proxy-with-deno/.
```ts
import { parseBearerString } from "https://esm.town/v/andreterron/parseBearerString";
import { API_URL } from "https://esm.town/v/std/API_URL?v=5";
import { OpenAIUsage } from "./usage.ts";

const client = new OpenAIUsage();

const allowedPathnames = [
  // (snippet truncated)
];

// Proxy the request
const url = new URL("." + pathname, "https://api.openai.com");
url.search = search;

const headers = new Headers(req.headers);
headers.set("Host", url.hostname);
headers.set("Authorization", `Bearer ${Deno.env.get("OPENAI_API_KEY")}`);
headers.set("OpenAI-Organization", Deno.env.get("OPENAI_API_ORG"));

const modifiedBody = await limitFreeModel(req, user);

const openAIRes = await fetch(url, {
  method: req.method,
  headers,
  // (snippet truncated)
});

// Remove internal header
const res = new Response(openAIRes.body, openAIRes);
res.headers.delete("openai-organization");
return res;
}
```
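The core of the proxy above is URL rewriting: the incoming request's pathname and query string are resolved against the upstream OpenAI origin. A minimal sketch of just that step, runnable without any network access (`buildUpstreamUrl` is a hypothetical helper name, not part of the proxy source):

```ts
// Sketch of the proxy's URL-rewriting step. Prefixing the pathname with "."
// makes it a relative reference, so it resolves against the base origin:
// "/v1/chat/completions" -> "https://api.openai.com/v1/chat/completions".
export function buildUpstreamUrl(pathname: string, search: string): string {
  const url = new URL("." + pathname, "https://api.openai.com");
  url.search = search; // carry the original query string through unchanged
  return url.toString();
}
```

The `"." + pathname` trick keeps the upstream origin fixed even if a caller passes an absolute URL-looking path, which is why the proxy uses it rather than plain string concatenation.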
```ts
// AIMLAPI client (OpenAI-compatible) for Bagoodex Web Search
// Server-only module: do not import in client code
```

## join/Mold/main.tsx (3 matches)
```ts
// This version asks the user for additional context to provide a more comprehensive report.
import { OpenAI } from "https://esm.town/v/std/openai";
import { Hono } from "npm:hono@4.4.12";
import type { Context } from "npm:hono@4.4.12";

// (snippet truncated)
`.trim();

const openai = new OpenAI();
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    // (snippet truncated)
```
```ts
// Settings configuration
const settings = {
  model: 'openai/gpt-oss-120b',
  stream: false,
  reasoningEffort: 'low'
};
```

```ts
// Default parameters (snippet)
language,
offset = 0,
model = 'openai/gpt-oss-120b',
reasoning_effort = 'medium',
tools = [{ type: 'browser_search' }],
```
```ts
start: async (controller) => {
  try {
    const upstream = await fetch("https://api.groq.com/openai/v1/chat/completions", {
      method: 'POST',
      headers: {
        // (snippet truncated)
```
```ts
import "jsr:@std/dotenv/load";

const API_URL = "https://api.groq.com/openai/v1/chat/completions";

function getApiKey() {
  // (snippet truncated)
}
```

```ts
    },
  ],
  model: "openai/gpt-oss-120b",
  stream: true,
  reasoning_effort: "medium",
```
```ts
export async function groqChatCompletion(apiKey, payload) {
  console.log('>>> [groqChatCompletion] Payload:', payload);
  const response = await fetch('https://api.groq.com/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      // (snippet truncated)
```

```ts
try {
  const res = await groqChatCompletion(apiKey, {
    model: 'openai/gpt-oss-120b',
    messages: [
      { role: 'system', content: 'Classify the user request as either links or text. Respond w
```
```ts
// Converts arbitrary text into a strict JSON object and returns the `results` array
export async function extractOrRepairJsonResults(rawText, apiKey, language = 'english') {
  const response = await fetch('https://api.groq.com/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      // (snippet truncated)
    },
    body: JSON.stringify({
      model: 'openai/gpt-oss-120b',
      messages: [
        {
```
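The helper above delegates JSON repair to the model itself. For the common case where the model's reply is valid JSON merely wrapped in prose or code fences, a purely local fallback is enough. An illustrative sketch (`extractResultsLocally` is a hypothetical name, not part of the source above):

```ts
// Local-only fallback for JSON extraction: take the first "{" and the last "}"
// in the raw text, try to parse the span between them, and return its
// `results` array. Anything unparseable yields an empty array, leaving the
// model-backed repair path (as in extractOrRepairJsonResults) as the next resort.
export function extractResultsLocally(rawText: string): unknown[] {
  const start = rawText.indexOf("{");
  const end = rawText.lastIndexOf("}");
  if (start === -1 || end <= start) return [];
  try {
    const parsed = JSON.parse(rawText.slice(start, end + 1));
    return Array.isArray(parsed.results) ? parsed.results : [];
  } catch {
    return [];
  }
}
```

Running the cheap local pass before a second model call saves a round trip whenever the output was already well-formed.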
```ts
// This helper instructs the model to ONLY use URLs that appear in the provided text and never i
export async function extractResultsFromToolOutputs(toolText, apiKey, language = 'english') {
  const response = await fetch('https://api.groq.com/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      // (snippet truncated)
    },
    body: JSON.stringify({
      model: 'openai/gpt-oss-120b',
      messages: [
        {
```
```ts
// Structured summary extraction to eliminate citation artifacts and enforce clean fields
export async function extractStructuredSummary(rawText, apiKey) {
  const response = await fetch('https://api.groq.com/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      // (snippet truncated)
    },
    body: JSON.stringify({
      model: 'openai/gpt-oss-120b',
      messages: [
        {
```
```ts
if (reasoningText && String(reasoningText).trim()) parts.push('Reasoning text:\n' + String(r
const response = await fetch('https://api.groq.com/openai/v1/chat/completions', {
  method: 'POST',
  headers: {
    // (snippet truncated)
  },
  body: JSON.stringify({
    model: 'openai/gpt-oss-120b',
    messages: [
      {
```
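Every fragment above POSTs the same request shape to Groq's OpenAI-compatible endpoint, differing only in messages and streaming. That shared shape can be factored out; a sketch assuming a hypothetical `buildChatRequest` helper (not present in the source snippets), with field names matching the OpenAI chat-completions wire format:

```ts
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Build the URL plus fetch init that the fragments above repeat inline.
// The caller passes the result straight to fetch(req.url, req.init).
export function buildChatRequest(apiKey: string, messages: ChatMessage[]) {
  return {
    url: "https://api.groq.com/openai/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      // Model and reasoning_effort are the values used throughout the snippets.
      body: JSON.stringify({
        model: "openai/gpt-oss-120b",
        messages,
        stream: false,
        reasoning_effort: "medium",
      }),
    },
  };
}
```

Centralizing the payload this way keeps the auth header and model choice in one place instead of five near-identical `fetch` calls.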
## std/openai/README.md (15 matches)
# OpenAI - [Docs ↗](https://docs.val.town/std/openai)
Use OpenAI's chat completion API with
[`std/openai`](https://www.val.town/v/std/openai). This integration enables
access to OpenAI's language models without needing to acquire API keys.
Val Town free users can use any cheap model – where output is less than $1 per
This SDK is powered by
[our openaiproxy](https://www.val.town/x/std/openaiproxy).
## Basic Usage

```ts title="Example" val
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  // (snippet truncated)
});
```
## Limits

While our wrapper simplifies the integration of OpenAI, there are a few
limitations to keep in mind. If these limits don't fit your use case, you can
use your own API key instead:

1. Create your own API key on
   [OpenAI's website](https://platform.openai.com/api-keys)
2. Create an
   [environment variable](https://www.val.town/settings/environment-variables?adding=true)
   named `OPENAI_API_KEY`
3. Use the `OpenAI` client from `npm:openai`:

```ts title="Example" val
import { OpenAI } from "npm:openai";

const openai = new OpenAI();
```