Search

136 results found for embeddings (1263ms)

Code (127)

"slug": "val-vibes",
"link": "/blog/val-vibes",
"description": "How to build semantic search with embeddings for Val Town within Val Town it
"pubDate": "Tue, 18 Jun 2024 00:00:00 GMT",
"author": "JP Posma",
import { searchEmojis } from "https://esm.town/v/maxm/emojiVectorEmbeddings";
import { extractValInfo } from "https://esm.town/v/pomdtr/extractValInfo";
import { Hono } from "npm:hono@3.2.7";
Use plain language to search for emojis. Get great results.
<br />
Built on Val Town with sqlite vector search and openai embeddings.
<br />
Fork the <a href="${htmlUrl}" target="_blank">source</a> and build your own!
# Emoji Instant Search
Uses vector embeddings to get "vibes" search on emojis
async function getEmbedding(emoji: string): Promise<number[]> {
  const result = await openai.embeddings.create({
    input: emoji,
    model: "text-embedding-3-small",
  });
  return result.data[0].embedding;
}
const embeddings: EmojiEmbedding[] = [];
// Calculate cosine similarity between two vectors
for (const emoji of emojisWithInfo) {
embeddings.push({ emoji, embedding: await getEmbedding(emoji) });
}
function findNearestNeighbors(
  targetEmbedding: number[],
  allEmbeddings: EmojiEmbedding[],
  k: number = 50,
): { emoji: string; similarity: number }[] {
  return allEmbeddings
    .map(entry => ({
      emoji: entry.emoji,
      // assumes a cosineSimilarity(a, b) helper, per the comment above
      similarity: cosineSimilarity(targetEmbedding, entry.embedding),
    }))
    .sort((a, b) => b.similarity - a.similarity)
    .slice(0, k);
}
const toSearch = embeddings.find((r) => (r.emoji === emojiToString(["🐻", emojis["🐻"]])))!;
console.log(findNearestNeighbors(toSearch.embedding, embeddings));
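The snippets above rank emoji neighbors by cosine similarity, but the helper itself is cut off by the search view. A minimal sketch of that standard formula (dot product divided by the product of vector magnitudes; the name `cosineSimilarity` is assumed here) could be:

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|).
// 1 means the vectors point the same way, 0 means orthogonal.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```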
// Generate embedding for a given text
async function generateEmbedding(text: string): Promise<number[]> {
  const response = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: text,
  });
  return response.data[0].embedding;
}
```
async function calculateEmbeddings(text) {
  const url = `https://yawnxyz-ai.web.val.run/generate?embed=true&value=${encodeURIComponent(text)}`;
  try {
    const response = await fetch(url);
    const data = await response.json();
    return data;
  } catch (error) {
    console.error('Error calculating embeddings:', error);
    return null;
  }
}
```
wizos/ai/README.md
2 matches
/** @jsxImportSource npm:hono@3/jsx */
import bots from "https://esm.town/v/tmcw/surprisingEmbeddings/bots"
import * as v from "jsr:@valibot/valibot"
import { Hono } from "npm:hono"
<>
<p>
Embeddings. They're one of the parts of the LLM/AI wave that I sort of like.
</p>
<p>
</p>
<p>
Embeddings are pretty cool when they work, because they sort of capture the idea of{" "}
<a href="https://blog.val.town/blog/val-vibes/">'vibes'</a>, which makes them useful for search.
Someone can search for 'lettuce' and get results for 'spinach' too, because they're similar.
</p>
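The 'lettuce'/'spinach' intuition can be sketched with toy vectors; the numbers below are hypothetical stand-ins, where real embeddings would come from a model such as voyage-3-lite:

```typescript
// Toy 3-d "embeddings" (hypothetical values, not from a real model),
// chosen so the two leafy greens point in nearly the same direction.
const vecs: Record<string, number[]> = {
  lettuce: [0.9, 0.1, 0.0],
  spinach: [0.85, 0.15, 0.05],
  bicycle: [0.0, 0.2, 0.95],
};

// Cosine similarity: dot product over the product of vector lengths.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// A query for "lettuce" ranks "spinach" above "bicycle".
const ranked = ["spinach", "bicycle"]
  .sort((a, b) => cosine(vecs.lettuce, vecs[b]) - cosine(vecs.lettuce, vecs[a]));
```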
<p>
This project is based on this{" "}
<a href="https://www.linkedin.com/pulse/insanity-relying-vector-embeddings-why-rag-fails-michael-wood-4ie">
blog post I read last year
</a>{" "}
if (inputResult.success) {
  const input = inputResult.output
  const embeddings = await client.embed({
    input,
    model: "voyage-3-lite",
  })
  return input.map((word, i) => {
    return {
      word,
      embedding: embeddings.data?.at(i)?.embedding,
    }
  })
<html>
<head>
<title>Surprising embeddings</title>
<link rel="stylesheet" href="https://unpkg.com/missing.css@1.1.3" />
<header>
<h3>
Surprising embeddings
<sub-title>
Using <a href="https://www.voyageai.com/">Voyage AI</a> voyage-3-lite model
</script>
<script type="module" src="https://esm.town/v/tmcw/surprisingEmbeddings/visualization"></script>
</div>
const input = v.parse(inputSchema, ["dogs", "cats", "felines", "canines"])
const embeddings = await client.embed({
  input,
  model: "voyage-3-lite",
})
return input.map((word, i) => {
  return {
    word,
    embedding: embeddings.data?.at(i)?.embedding,
  }
})
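The surprisingEmbeddings val feeds these vectors into a visualization of embedding distances. One way to sketch the underlying pairwise matrix, assuming cosine distance (1 minus cosine similarity) as the metric, is:

```typescript
// Pairwise cosine-distance matrix for a list of embedding vectors.
// Entry [i][j] is 0 for identical directions and 1 for orthogonal ones.
function distanceMatrix(vectors: number[][]): number[][] {
  const dot = (a: number[], b: number[]) => a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(dot(v, v));
  return vectors.map(a =>
    vectors.map(b => 1 - dot(a, b) / (norm(a) * norm(b)))
  );
}
```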
tmcw/surprisingEmbeddings: Visualizing embedding distances
maxm/emojiVectorEmbeddings
janpaul123/blogPostEmbeddingsDimensionalityReduction
janpaul123/compareEmbeddings
yawnxyz/embeddingsSearchExample

Users

No users found

Docs

No docs found