LLMosaic provides a /v1/embeddings endpoint that accepts raw text input and returns high-dimensional vector embeddings. The endpoint is fully compatible with OpenAI client libraries, making it easy to integrate into existing workflows.
You’ll learn how to:

- Submit text or batched input for embedding
- Interpret the embedding vector output
- Use the output for semantic search and retrieval
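Because the endpoint follows the OpenAI response schema, the result is a JSON object whose `data` array holds one embedding object per input string, in input order. A minimal sketch of pulling the vectors out — the `response` dict below is a hypothetical, truncated example (real embeddings have hundreds or thousands of dimensions):

```python
# Hypothetical response, truncated to a 4-dimensional vector for illustration.
response = {
    "object": "list",
    "data": [
        {"object": "embedding", "index": 0, "embedding": [0.01, -0.02, 0.03, 0.04]},
    ],
    "model": "example-embedding-model",
    "usage": {"prompt_tokens": 4, "total_tokens": 4},
}

# One vector per input string, in the same order as the "input" list.
vectors = [item["embedding"] for item in response["data"]]
print(len(vectors), len(vectors[0]))
```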
```bash
curl -X POST "${EMBED_BASE}/${EMBED_MODEL}/v1/embeddings" \
  -H "Authorization: Bearer ${EMBED_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "'"${EMBED_MODEL}"'",
    "input": ["Your text here"]
  }'
```
```python
import requests

resp = requests.post(
    f"{EMBED_BASE}/{EMBED_MODEL}/v1/embeddings",
    headers={
        "Authorization": f"Bearer {EMBED_KEY}",
        "Content-Type": "application/json",
    },
    json={"model": EMBED_MODEL, "input": ["Your text here"]},
)
print(resp.json())
```
```javascript
const res = await fetch(`${EMBED_BASE}/${EMBED_MODEL}/v1/embeddings`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${EMBED_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ model: EMBED_MODEL, input: ["Your text here"] }),
});
console.log(await res.json());
```
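Once you have embeddings, semantic search reduces to ranking documents by cosine similarity against a query vector. A self-contained sketch in plain Python — the toy 3-dimensional vectors are made up for illustration; in practice each would come from a `/v1/embeddings` call like the ones above:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vectors' magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy vectors standing in for real embeddings.
query = [1.0, 0.0, 1.0]
docs = {
    "doc_a": [0.9, 0.1, 0.8],    # points in a similar direction to the query
    "doc_b": [-1.0, 0.2, -0.9],  # points in roughly the opposite direction
}

# Rank documents by similarity to the query, highest first.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked)
```

Identical vectors score 1.0, orthogonal vectors 0.0, and opposed vectors approach -1.0, so sorting descending puts the most semantically similar documents first.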