Sunday, June 16, 2024

Mistral Small: Powerful and Affordable AI for Everyday Tasks

Mistral Models Summary

Mistral offers two different kinds of models: optimised commercial models (Mistral Small, Mistral Medium, Mistral Large, and Mistral Embeddings) and open-weights models (Mistral 7B, Mixtral 8x7B, and Mixtral 8x22B).

  1. The highly efficient open-weights models are available under the fully permissive Apache 2.0 licence. They are fast, controllable, and portable, making them well suited for customisation such as fine-tuning.
  2. The commercial optimised models, by contrast, are designed for top performance and come with a variety of deployment options.
  • Mistral 7B (open weights ✔️, API ✔️; 32k max tokens; endpoint open-mistral-7b): The first dense model released by Mistral AI, perfect for experimentation, customization, and quick iteration. At the time of its release, it matched the capabilities of models up to 30B parameters.
  • Mixtral 8x7B (open weights ✔️, API ✔️; 32k max tokens; endpoint open-mixtral-8x7b): A sparse mixture-of-experts model. It leverages up to 45B parameters but uses only about 12B during inference, leading to better inference throughput at the cost of more vRAM.
  • Mixtral 8x22B (open weights ✔️, API ✔️; 64k max tokens; endpoint open-mixtral-8x22b): A bigger sparse mixture-of-experts model. It leverages up to 141B parameters but uses only about 39B during inference, leading to better inference throughput at the cost of more vRAM.
  • Mistral Small (API only; 32k max tokens; endpoint mistral-small-latest): Suitable for simple tasks that can be done in bulk (classification, customer support, or text generation).
  • Mistral Medium (API only; 32k max tokens; endpoint mistral-medium-latest; will be deprecated in the coming months): Ideal for intermediate tasks that require moderate reasoning (data extraction, summarizing a document, writing emails, job descriptions, or product descriptions).
  • Mistral Large (API only; 32k max tokens; endpoint mistral-large-latest): The flagship model, ideal for complex tasks that require large reasoning capabilities or are highly specialized (synthetic text generation, code generation, RAG, or agents).
  • Mistral Embeddings (API only; 8k max tokens; endpoint mistral-embed): A model that converts text into 1024-dimensional numerical embedding vectors. Embedding models enable retrieval and retrieval-augmented generation applications. It achieves a retrieval score of 55.26 on MTEB.
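
Since mistral-embed maps text to 1024-dimensional vectors, retrieval reduces to comparing those vectors, typically by cosine similarity. A minimal sketch of that comparison, using stand-in vectors rather than real API output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-in 1024-dimensional vectors; in practice these would come from
# the mistral-embed endpoint.
doc_vec = [0.01] * 1024
query_vec = [0.01] * 512 + [-0.01] * 512

same = cosine_similarity(doc_vec, doc_vec)      # identical vectors, score near 1
different = cosine_similarity(doc_vec, query_vec)  # orthogonal halves, score near 0
```

In a retrieval application, documents are ranked by this score against the query's embedding; the highest-scoring passages are then passed to a generation model.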

Mistral AI API Pricing

Open source models

  • open-mistral-7b: A 7B transformer model, fast to deploy and easily customisable. Input $0.25 /1M tokens, output $0.25 /1M tokens.
  • open-mixtral-8x7b: A 7B sparse Mixture-of-Experts (SMoE); uses 12.9B active parameters out of 45B total. Input $0.7 /1M tokens, output $0.7 /1M tokens.
  • open-mixtral-8x22b: Currently the most performant open model. A 22B sparse Mixture-of-Experts (SMoE); uses only 39B active parameters out of 141B. Input $2 /1M tokens, output $6 /1M tokens.

Optimized models

  • mistral-small: Cost-efficient reasoning for low-latency workloads. Input $1 /1M tokens, output $3 /1M tokens.
  • mistral-medium (will soon be deprecated): Input $2.7 /1M tokens, output $8.1 /1M tokens.
  • mistral-large: Top-tier reasoning for high-complexity tasks; the most powerful model of the Mistral AI family. Input $4 /1M tokens, output $12 /1M tokens.
  • mistral-embed: State-of-the-art semantic model for extracting representations of text extracts. $0.1 /1M tokens.
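
Because prices are quoted per million tokens with separate input and output rates, estimating a request's cost is simple arithmetic. A small sketch using the rates listed above (hardcoded from this page; check Mistral's pricing page for current values):

```python
# Per-1M-token (input, output) USD rates as listed above; verify against
# Mistral's pricing page before relying on them.
PRICES = {
    "open-mistral-7b":    (0.25, 0.25),
    "open-mixtral-8x7b":  (0.70, 0.70),
    "open-mixtral-8x22b": (2.00, 6.00),
    "mistral-small":      (1.00, 3.00),
    "mistral-medium":     (2.70, 8.10),
    "mistral-large":      (4.00, 12.00),
    "mistral-embed":      (0.10, 0.10),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimated USD cost for one request."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# e.g. a 10,000-token prompt with a 2,000-token completion on mistral-large:
# 10_000 * 4 / 1e6 + 2_000 * 12 / 1e6 = 0.04 + 0.024 = 0.064 USD
cost = estimate_cost("mistral-large", 10_000, 2_000)
```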

Mistral AI API

Versions of the Mistral AI API have designated release dates. Use the dated versions of the Mistral AI API to avoid disruptions caused by breaking changes and model updates. Also be prepared for several endpoints to be deprecated in the coming months.

The specifics of the available versions are as follows:

  • open-mistral-7b currently points to mistral-tiny-2312. It was formerly known as mistral-tiny; that name will soon be deprecated.
  • open-mixtral-8x7b currently points to mistral-small-2312. It was formerly known as mistral-small; that name will soon be deprecated.
  • open-mixtral-8x22b points to open-mixtral-8x22b-2404.
  • mistral-small-latest currently points to mistral-small-2402.
  • mistral-medium-latest currently points to mistral-medium-2312. mistral-medium-2312 is the new dated tag for the previous mistral-medium. Mistral Medium will soon be phased out.
  • mistral-large-latest currently points to mistral-large-2402.
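
Pinning a dated model tag rather than a -latest alias is how you insulate a production workload from these re-pointings. A minimal sketch of building a chat completions request body with a pinned version; sending it additionally requires an Authorization header with your API key:

```python
import json

# Mistral's chat completions endpoint.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(model, user_message):
    """Build the JSON body for a chat completions call. Passing a dated
    tag (e.g. mistral-small-2402) instead of mistral-small-latest means
    future re-pointing of the alias cannot change behaviour."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_request("mistral-small-2402", "Summarize this ticket in one line.")
body = json.dumps(payload)
# POST `body` to API_URL with "Authorization: Bearer <key>" to execute.
```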

Benchmark outcomes

Among the models commonly accessible via an API, Mistral Large ranks second. It excels at multilingual tasks and code generation, and it has strong reasoning capabilities.

The benchmark results are available in the blog posts below:

  • Mistral 7B outperforms Llama 2 13B and Llama 1 34B on every benchmark.
  • Mixtral 8x7B: matches or beats GPT-3.5 on most standard benchmarks and exceeds Llama 2 70B with six-times-faster inference. It performs well at code generation and supports English, French, Italian, German, and Spanish.
  • Mixtral 8x22B: the best open model in the family. It performs strongly on coding tasks, handles English, French, Italian, German, and Spanish, and natively supports function calling.
  • Mistral Large: a state-of-the-art text generation model with strong reasoning capabilities. It can handle complex multilingual reasoning tasks, including text understanding, transformation, and code generation.

Selecting a model

Mistral AI outlines several factors to consider and offers guidance on choosing the model that best fits your particular requirements.

Many large-scale LLM applications are already powered by Mistral models. Below is a quick summary of the common use cases Mistral AI encounters, along with the corresponding Mistral model:

  1. Mistral Small powers simple tasks that can be completed in bulk, such as text generation, customer support, and classification.
  2. Mixtral 8x22B handles intermediate tasks that call for moderate reasoning, such as data extraction, document summarization, email writing, job descriptions, and product descriptions.
  3. Mistral Large powers complex tasks requiring high specialisation or strong reasoning capabilities (e.g., synthetic text generation, code generation, RAG, or agents).

Amazon Bedrock has added Mistral Small

Amazon Bedrock now offers the Mistral Small foundation model (FM) from Mistral AI as generally available. This quickly follows AWS's previous announcements of Mistral Large in April and of Mixtral 8x7B and Mistral 7B in March. With Mistral Small joining Mistral Large, Mistral 7B, and Mixtral 8x7B, the range of high-performing Mistral AI models available on Amazon Bedrock grows further.

Mistral Small is a highly efficient large language model (LLM) created by Mistral AI and tailored for high-volume, low-latency language-based applications. It is ideal for simple tasks that can be completed in bulk, such as text generation, customer support, and classification, and it offers strong performance at a reasonable cost.

You should be aware of the following important Mistral Small features:

  • Retrieval-Augmented Generation (RAG) specialisation: Mistral Small ensures that crucial information is retained across its long context window of up to 32K tokens.
  • Coding expertise: Mistral Small is highly skilled at writing, reviewing, and commenting on code in a variety of popular programming languages.
  • Multilingualism: Mistral Small performs at an excellent level not only in English but also in French, German, Spanish, and Italian, and supports numerous additional languages.

How to use Mistral Small

To begin using Mistral Small, you must first request access to the model. In the Amazon Bedrock console, choose Model access, then Manage model access. Expand the Mistral AI section, select Mistral Small, and choose Save changes.

Once model access is granted, you can use Mistral Small in Amazon Bedrock. Refresh the Base models table to view its current status.

A sample spam email is accurately classified as “Spam” by Mistral 7B, Mixtral 8x7B, and Mistral Large, and Mistral Small classifies it correctly as well. You can also succeed at a number of related tasks, such as generating a Bash script from a text prompt or creating a yoghurt recipe. For these kinds of tasks, Mistral Small is the most economical and effective of the Mistral AI models in Amazon Bedrock.
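
A sketch of how such a classification call might look on Amazon Bedrock. Mistral models on Bedrock take an [INST]-wrapped prompt; the model ID below is an assumption, so confirm it in the Bedrock console before use. The snippet builds the request body; the actual InvokeModel call (requiring AWS credentials and model access) is shown in the trailing comments:

```python
import json

# Assumed Bedrock model ID for Mistral Small; verify in the console.
MODEL_ID = "mistral.mistral-small-2402-v1:0"

def build_classification_body(email_text):
    """Request body asking Mistral Small to label an email Spam / Not Spam."""
    prompt = (
        "<s>[INST] Classify the following email as Spam or Not Spam. "
        f"Answer with one word.\n\n{email_text} [/INST]"
    )
    return json.dumps({"prompt": prompt, "max_tokens": 10, "temperature": 0.0})

body = build_classification_body("You have WON a FREE prize! Click here now!")

# To execute against Bedrock:
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(modelId=MODEL_ID, body=body)
# print(json.loads(response["body"].read())["outputs"][0]["text"])
```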

Mistral Small is also strong at multilingual work: beyond English, it performs well in French, German, Spanish, and Italian. To gauge its proficiency with German, you can ask the model for two sentences on sustainability.

Availability

The Mistral Small model is now available on Amazon Bedrock in the US East (N. Virginia) Region.

Drakshi has been writing articles on Artificial Intelligence for govindhtech since June 2023. She holds a postgraduate degree in business administration and is an Artificial Intelligence enthusiast.

