
Embedding: thenlper/gte-large

Changes to Anyscale Endpoints API

Effective August 1, 2024, the Anyscale Endpoints API will be available exclusively through the fully Hosted Anyscale Platform. Multi-tenant access to LLM models will be removed.

With the Hosted Anyscale Platform, you can access the latest GPUs billed by the second, and deploy models on your own dedicated instances. Enjoy full customization to build your end-to-end applications with Anyscale. Get started today.

info

See the Hugging Face model page for more model details.

See Generate an embedding for how to use the embedding model with Anyscale Endpoints.

About this model

Model name to use in API calls:

thenlper/gte-large

The GTE models were trained on a large-scale corpus of relevance text pairs covering a wide range of domains and scenarios. This makes them useful for various downstream text embedding tasks, including information retrieval, semantic textual similarity, and text reranking.
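A minimal sketch of the semantic-textual-similarity use case mentioned above: candidates are ranked against a query by the cosine similarity of their embedding vectors. The tiny 3-dimensional vectors here are illustrative stand-ins, not real gte-large outputs.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical embeddings: the query vector points in roughly the same
# direction as the "on-topic" candidate, so it should rank first.
query = [0.9, 0.1, 0.0]
candidates = {
    "on-topic": [0.8, 0.2, 0.1],
    "off-topic": [0.0, 0.1, 0.9],
}
ranked = sorted(
    candidates,
    key=lambda k: cosine_similarity(query, candidates[k]),
    reverse=True,
)
print(ranked)  # "on-topic" ranks first
```

The same ranking pattern scales directly to real gte-large embeddings, which are higher-dimensional but compared the same way.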

Model Developers: Hugging Face

Input: text only.

Output: text embeddings only.

Maximum Input Length: 512 tokens

License: MIT
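The model name above can be passed to an OpenAI-compatible embeddings call. The sketch below assumes the base URL and credential live in environment variables (`ANYSCALE_BASE_URL`, `ANYSCALE_API_KEY` are placeholder names here); see Generate an embedding for the exact setup.

```python
import os

def batched(items, size):
    """Split a list of input texts into batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

if __name__ == "__main__":
    from openai import OpenAI  # pip install openai

    client = OpenAI(
        base_url=os.environ["ANYSCALE_BASE_URL"],  # assumption: endpoint URL in env
        api_key=os.environ["ANYSCALE_API_KEY"],    # assumption: credential in env
    )
    texts = [
        "What is the capital of France?",
        "Paris is the capital of France.",
    ]
    # Embed in small batches; each input must fit within the 512-token limit.
    for batch in batched(texts, 16):
        resp = client.embeddings.create(model="thenlper/gte-large", input=batch)
        for item in resp.data:
            print(len(item.embedding))
```

Batching keeps individual requests small when embedding many documents; each text still has to respect the 512-token input limit on its own.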