Use OpenAI's chat completion API with `std/openai`. This integration enables access to OpenAI's language models without needing to acquire API keys. For free Val Town users, all calls are sent to `gpt-4o-mini`.
```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4",
  max_tokens: 30,
});

console.log(completion.choices[0].message.content);
```
While our wrapper simplifies the integration of OpenAI, there are a few limitations to keep in mind:
- Usage Quota: We limit each user to 10 requests per minute.
- Features: Chat completions is the only endpoint available.
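
Because of the per-minute quota, a burst of calls can fail; waiting and retrying is usually enough. The sketch below assumes the wrapper simply throws on a rejected request (its exact error shape isn't documented here), so it retries on any error:

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

// Retry helper (sketch): retries on any thrown error, pausing between
// attempts to stay under the 10-requests-per-minute quota.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3, delayMs = 7_000): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) await new Promise((r) => setTimeout(r, delayMs));
    }
  }
  throw lastError;
}

const completion = await withRetry(() =>
  openai.chat.completions.create({
    messages: [{ role: "user", content: "Say hello in a creative way" }],
    model: "gpt-4o-mini",
  })
);

console.log(completion.choices[0].message.content);
```
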
If these limits are too low, let us know! You can also get around these limitations by using your own API keys:
- Create your own API key on OpenAI's website
- Create an environment variable named `OPENAI_API_KEY`
- Use the `OpenAI` client from `npm:openai`:
```ts
import { OpenAI } from "npm:openai";

const openai = new OpenAI();
```
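
The `npm:openai` client reads `OPENAI_API_KEY` from the environment by default, so no key needs to appear in your code. As a rough sketch of a full call with your own key (the model name here is just an example; use whichever model your account has access to):

```ts
import { OpenAI } from "npm:openai";

// Picks up the OPENAI_API_KEY environment variable automatically.
const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Say hello in a creative way" }],
  model: "gpt-4o", // with your own key, the free-tier model restriction no longer applies
  max_tokens: 30,
});

console.log(completion.choices[0].message.content);
```

Since these requests go directly to OpenAI with your own key, Val Town's per-minute quota no longer applies; you are subject to OpenAI's own rate limits and billing instead.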