
Chat: mlabonne/NeuralHermes-2.5-Mistral-7B


info

See the Hugging Face model page for more model details.

About this model

Model name to use in API calls:

mlabonne/NeuralHermes-2.5-Mistral-7B
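As an illustration, a chat completions call with this model name might be sketched as below. This is an assumption, not an official snippet from this page: the helper function, endpoint shape, and message contents are hypothetical, and only the model identifier comes from the docs. The payload follows the common OpenAI-compatible chat request format.

```python
import json

# Model name from this page; everything else in this sketch is assumed.
MODEL_NAME = "mlabonne/NeuralHermes-2.5-Mistral-7B"

def build_chat_request(user_message: str, temperature: float = 0.7) -> dict:
    """Build a JSON body in the OpenAI-compatible chat completions shape."""
    return {
        "model": MODEL_NAME,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "temperature": temperature,
    }

payload = build_chat_request("Summarize sliding-window attention in one sentence.")
print(json.dumps(payload, indent=2))
```

Sending this body to a chat completions endpoint (with your own base URL and API key) would route the request to this model.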

The mlabonne/NeuralHermes-2.5-Mistral-7B Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-Instruct-v0.1 generative text model, trained on a variety of publicly available conversation datasets.

Model Developers: Maxime Labonne

Input: text only.

Output: generated text only.

Model Architecture

Mistral-7B-Instruct-v0.1, a transformer model, serves as the base for this instruction model, with the following architecture choices:

  • Grouped-Query Attention
  • Sliding-Window Attention
  • Byte-fallback BPE tokenizer
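To make the sliding-window idea concrete, here is a minimal sketch (an assumption for illustration, not the model's actual implementation) of the attention mask it implies: position i may attend only to itself and the window − 1 positions immediately before it.

```python
def sliding_window_mask(seq_len: int, window: int) -> list[list[bool]]:
    """Causal sliding-window mask: entry [i][j] is True iff
    position i may attend to position j, i.e. 0 <= i - j < window."""
    return [
        [0 <= i - j < window for j in range(seq_len)]
        for i in range(seq_len)
    ]

mask = sliding_window_mask(5, 3)
# Row i is True only for columns max(0, i - 2) through i.
```

With the full context (and hypothetical parameters here), each token's attention cost stays bounded by the window size rather than growing with the sequence length.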

Context Length: 16384

License: Apache 2.0