yawnxyz / groq-docs · search/models/all-MiniLM-L6-v2/README.md
---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: transformers.js
license: apache-2.0
---

https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2 with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the Transformers.js JavaScript library from NPM using:

```bash
npm i @huggingface/transformers
```

You can then use the model to compute embeddings like this:

```js
import { pipeline } from '@huggingface/transformers';

// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

// Compute sentence embeddings
const sentences = ['This is an example sentence', 'Each sentence is converted'];
const output = await extractor(sentences, { pooling: 'mean', normalize: true });
console.log(output);
// Tensor {
//   dims: [ 2, 384 ],
//   type: 'float32',
//   data: Float32Array(768) [ 0.04592696577310562, 0.07328180968761444, ... ],
//   size: 768
// }
```
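Because the embeddings are computed with `normalize: true`, the cosine similarity between two sentences reduces to a plain dot product of their vectors. A minimal sketch, using small hypothetical unit vectors in place of the real 384-dimensional embeddings:

```javascript
// Dot product of two equal-length vectors; for unit-normalized
// embeddings this equals their cosine similarity.
function dot(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += a[i] * b[i];
  return sum;
}

// Hypothetical unit vectors standing in for real embeddings.
const emb1 = [0.6, 0.8, 0, 0];
const emb2 = [0.8, 0.6, 0, 0];
console.log(dot(emb1, emb2)); // ≈ 0.96
```

In practice you would pass rows of `output.tolist()` (one per input sentence) to `dot` and rank candidate sentences by the resulting score.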

You can convert this `Tensor` to a nested JavaScript array using `.tolist()`:

```js
console.log(output.tolist());
// [
//   [ 0.04592696577310562, 0.07328180968761444, 0.05400655046105385, ... ],
//   [ 0.08188057690858841, 0.10760223120450974, -0.013241755776107311, ... ]
// ]
```
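Alternatively, the tensor's `data` field is a single flat `Float32Array`, with `dims` describing its shape; you can slice rows out yourself instead of calling `.tolist()`. A sketch of that reshaping with small hypothetical dims (the real output would use `[2, 384]`):

```javascript
// Split a flat array into rows according to [numRows, rowLen] dims,
// mirroring how a [2, 384] embedding tensor stores its data.
function toRows(data, dims) {
  const [numRows, rowLen] = dims;
  const rows = [];
  for (let i = 0; i < numRows; i++) {
    rows.push(Array.from(data.slice(i * rowLen, (i + 1) * rowLen)));
  }
  return rows;
}

const flat = new Float32Array([1, 2, 3, 4, 5, 6]);
console.log(toRows(flat, [2, 3])); // [ [ 1, 2, 3 ], [ 4, 5, 6 ] ]
```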

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using 🤗 Optimum and structuring your repo like this one (with ONNX weights located in a subfolder named onnx).
