roseDragon · @stevekrouse · Script · Forked from stevekrouse/uploadTo0x0
joinMatrixRoom · @dm3agv · Script · Forked from vlad/joinMatrixRoom
searchManifoldMarkets · @adjacent · Script · Forked from piotr/searchManifoldMarkets
arenaApiExample · @deblina · Script · Forked from tmcw/arenaApiExample
bsky_rss_poll · @rcurtiss · Script · Forked from jordan/bsky_rss_poll
fetchRss · @svoisen · Script · Forked from pdebie/fetchRss
shitIsMySiteDown · @dougwithseismic · Script · An interactive, runnable TypeScript val by dougwithseismic
vtApiClient · @nbbaier · Script · An interactive, runnable TypeScript val by nbbaier
duckdbExample · @ychahwan · Script · Forked from hamilton/duckdbExample
weatherGPT · @treb0r · Cron · Forked from ellenchisa/weatherGPT
plumOwl · @tempguy · HTTP (deprecated) · An interactive, runnable TypeScript val by tempguy
isMyWebsiteDown · @thejian · Script · Forked from healeycodes/isMyWebsiteDown
perplexityAPI · @nbbaier · Script
Perplexity API Wrapper

This val exports a function pplx that provides an interface to the Perplexity AI chat completions API. You'll need a Perplexity AI API key; see their documentation for how to get started with getting a key. By default, the function uses PERPLEXITY_API_KEY in your Val Town env variables unless overridden by setting apiKey in the function call.

pplx(options: PplxRequest & { apiKey?: string }): Promise<PplxResponse>

Generates a model's response for the given chat conversation. The required parameters in options are the following (for other parameters, see the Types section below):

- model (string): the name of the model that will complete your prompt. Possible values: pplx-7b-chat, pplx-70b-chat, pplx-7b-online, pplx-70b-online, llama-2-70b-chat, codellama-34b-instruct, mistral-7b-instruct, and mixtral-8x7b-instruct.
- messages (Message[]): a list of messages comprising the conversation so far. A message object must contain role (system, user, or assistant) and content (a string).

You can also specify an apiKey to override the default Deno.env.get("PERPLEXITY_API_KEY"). The function returns an object of type PplxResponse; see below.

Types

PplxRequest

Request object sent to Perplexity models.

| Property | Type | Description |
|-------------------|-----------|-------------|
| model | Model | The name of the model that will complete your prompt. Possible values: pplx-7b-chat, pplx-70b-chat, pplx-7b-online, pplx-70b-online, llama-2-70b-chat, codellama-34b-instruct, mistral-7b-instruct, and mixtral-8x7b-instruct. |
| messages | Message[] | A list of messages comprising the conversation so far. |
| max_tokens | number | (Optional) The maximum number of completion tokens returned by the API. The total number of tokens requested in max_tokens plus the number of prompt tokens sent in messages must not exceed the context window token limit of the model requested. If left unspecified, the model will generate tokens until it reaches either its stop token or the end of its context window. |
| temperature | number | (Optional) The amount of randomness in the response, valued between 0 inclusive and 2 exclusive. Higher values are more random; lower values are more deterministic. Set either temperature or top_p, but not both. |
| top_p | number | (Optional) The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass. Alter either temperature or top_p, but not both. |
| top_k | number | (Optional) The number of tokens to keep for highest top-k filtering, specified as an integer between 0 and 2048 inclusive. If set to 0, top-k filtering is disabled. |
| stream | boolean | (Optional) Flag indicating whether to stream the response. |
| presence_penalty | number | (Optional) A value between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty. |
| frequency_penalty | number | (Optional) A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty. Incompatible with presence_penalty. |

PplxResponse

Response object for pplx models.

| Property | Type | Description |
|-----------|---------------------|--------------------------------------------------------|
| id | string | The ID of the response. |
| model | Model | The model used for generating the response. |
| object | "chat.completion" | The type of object (always "chat.completion"). |
| created | number | The timestamp indicating when the response was created. |
| choices | CompletionChoices[] | An array of completion choices. |

Please refer to the code for more details and usage examples of these types.

Message

Represents a message in a conversation.

| Property | Type | Description |
|----------|-----------------------------------|--------------------------------------------------------|
| role | "system" \| "user" \| "assistant" | The role of the speaker in this turn of conversation. After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user. |
| content | string | The contents of the message in this turn of conversation. |

CompletionChoices

The list of completion choices the model generated for the input prompt.

| Property | Type | Description |
|---------------|---------------------|------------------------------------------------------|
| index | number | The index of the choice. |
| finish_reason | "stop" \| "length" | The reason the model stopped generating tokens: stop if the model hit a natural stopping point, or length if the maximum number of tokens specified in the request was reached. |
| message | Message | The message generated by the model. |
| delta | Message | The incrementally streamed next tokens. Only meaningful when stream = true. |
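For instance, a minimal call might look like the following sketch. It assumes the standard esm.town import path for this val and a PERPLEXITY_API_KEY env variable; check the val's page for the exact module path.

```ts
// Usage sketch; the import URL follows Val Town's esm.town convention,
// but verify the exact path (and any pinned version) on the val's page.
import { pplx } from "https://esm.town/v/nbbaier/perplexityAPI";

const res = await pplx({
  model: "mistral-7b-instruct",
  messages: [
    { role: "system", content: "Be concise." },
    { role: "user", content: "What is nucleus sampling?" },
  ],
  max_tokens: 256,
  temperature: 0.7, // set temperature or top_p, not both
  // apiKey: "pplx-...", // optional override of PERPLEXITY_API_KEY
});

// finish_reason is "stop" or "length"; message holds the model's reply.
console.log(res.choices[0].finish_reason);
console.log(res.choices[0].message.content);
```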
earthquakes · @fil · HTTP
Earthquake map 🌏

This val loads earthquake data from USGS, a topojson file for the land shape, and supporting libraries. It then creates a map and saves it as an SVG string. The result is cached for a day. Note that we must strive to keep it under Val Town's limit of 100kB, hence the heavy simplification of the land shape. (For a simpler example, see becker barley.)

| | |
|-----------------|--------------------------------------------------------------|
| Web page | https://fil-earthquakes.web.val.run/ |
| Observable Plot | https://observablehq.com/plot/ |
| linkedom | https://github.com/WebReflection/linkedom |
| topojson | https://github.com/topojson/topojson |
| earthquakes | https://earthquake.usgs.gov |
| world | https://observablehq.com/@visionscarto/world-atlas-topojson |
| css | https://milligram.io/ |
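A stripped-down sketch of that pipeline is below. The feed and atlas URLs, the styling, and the plain Cache-Control header standing in for the val's day-long caching are all assumptions, not the val's actual code.

```ts
// Minimal sketch: fetch quakes + land shape, render with Observable Plot
// via linkedom's server-side DOM, and serve the SVG string.
import * as Plot from "npm:@observablehq/plot";
import * as topojson from "npm:topojson-client";
import { parseHTML } from "npm:linkedom";

export default async function (req: Request): Promise<Response> {
  // Earthquake data from USGS (past-week summary feed, assumed here).
  const quakes = await (await fetch(
    "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson",
  )).json();

  // A pre-simplified land topojson keeps the output well under 100kB.
  const world = await (await fetch(
    "https://cdn.jsdelivr.net/npm/world-atlas@2/land-110m.json",
  )).json();
  const land = topojson.feature(world, world.objects.land);

  // linkedom supplies the document Plot needs when rendering off-browser.
  const { document } = parseHTML("<!DOCTYPE html><html><body></body></html>");

  const svg = Plot.plot({
    document,
    projection: "equal-earth",
    marks: [
      Plot.geo(land, { fill: "#ddd" }),
      Plot.dot(quakes.features, {
        x: (d: any) => d.geometry.coordinates[0], // longitude
        y: (d: any) => d.geometry.coordinates[1], // latitude
        r: (d: any) => d.properties.mag, // radius scaled by magnitude
        fill: "red",
        fillOpacity: 0.3,
      }),
    ],
  });

  // A Cache-Control header stands in for the day-long caching the
  // description mentions; the val itself may cache differently.
  return new Response(svg.outerHTML, {
    headers: {
      "content-type": "image/svg+xml",
      "cache-control": "public, max-age=86400",
    },
  });
}
```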
fetchJSON_example · @stevekrouse · Script · Forked from vladimyr/fetchJSON_example
turquoiseLlama · @rvorias · HTTP (deprecated) · An interactive, runnable TypeScript val by rvorias
April 3, 2024