OpenAI - Docs
Use OpenAI's chat completions API with std/openai. This integration enables access to OpenAI's language models without needing to acquire API keys.
For free Val Town users, all calls are sent to gpt-3.5-turbo.
Streaming is not yet supported. Upvote the HTTP response streaming feature request if you need it!
Usage
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
{ role: "user", content: "Say hello in a creative way" },
],
model: "gpt-4",
max_tokens: 30,
});
console.log(completion.choices[0].message.content);
Limits
While our wrapper simplifies the integration of OpenAI, there are a few limitations to keep in mind:
- Usage Quota: We limit each user to 10 requests per minute.
- Features: Chat completions is the only endpoint available.
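If a request trips the per-minute quota, retrying with exponential backoff is a common pattern. Here is a minimal sketch, assuming quota errors surface with an HTTP 429 status on the thrown error (the exact error shape may differ; adjust the check to what you observe):

```typescript
// Sketch: retry an async call with exponential backoff when the
// per-minute quota is exceeded. The 429 status check is an assumption
// about how rate-limit errors surface from the wrapper.
async function withBackoff<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status = (err as { status?: number }).status;
      if (status !== 429 || attempt >= retries) throw err;
      // Wait 1s, 2s, 4s, ... before retrying.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

You could then wrap a call like `await withBackoff(() => openai.chat.completions.create({ ... }))`.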
If these limits are too low, let us know! You can also work around them by using your own API key:
- Create your own API key on OpenAI's website
- Create an environment variable named OPENAI_API_KEY
- Use the OpenAI client from npm:openai:
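For example, once OPENAI_API_KEY is set as an environment variable, the npm client picks it up automatically. A sketch (the model and prompt here mirror the example above):

```typescript
// With your own key in the OPENAI_API_KEY environment variable,
// import the client from npm instead of std/openai. The npm client
// reads OPENAI_API_KEY from the environment by default.
import { OpenAI } from "npm:openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Say hello in a creative way" }],
  model: "gpt-4",
  max_tokens: 30,
});
console.log(completion.choices[0].message.content);
```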
OpenAI
Get started using OpenAI's chat completion without the need to set your own API keys.
Usage
Here's a quick example to get you started with the Val Town OpenAI wrapper:
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const functionExpression = await openai.chat.completions.create({
"messages": [
{ "role": "user", "content": "Say hello in a creative way" },
],
model: "gpt-4",
max_tokens: 30,
});
console.log(functionExpression.choices[0].message.content);
OpenAI ChatGPT helper function
This val uses your OpenAI token if you have one, and falls back to @std/openai if not, so it provides limited OpenAI usage for free.
import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat("Hello, GPT!");
console.log(content);
import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat(
[
{ role: "system", content: "You are Alan Kay" },
{ role: "user", content: "What is the real computer revolution?"}
],
{ max_tokens: 50, model: "gpt-4o" }
);
console.log(content);