
# Mistral: Mixtral 8x22B Instruct

Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B, offering strong cost efficiency for its size. Its strengths include:

- strong math, coding, and reasoning
- large context length (64k)
- fluency in English, French, Italian, German, and Spanish

See benchmarks in the launch announcement [here](https://mistral.ai/news/mixtral-8x22b/). #moe

| Metric | Value |
|---|---|
| Input cost | $2.00 per 1M tokens |
| Output cost | $6.00 per 1M tokens |
| Context window | 65,536 tokens |
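
As a rough illustration of how these rates translate to per-request cost, here is a minimal sketch. The prices come from the table above; the token counts in the example are made up for illustration:

```python
# Listed rates for Mixtral 8x22B Instruct, in USD per 1M tokens.
INPUT_COST_PER_M = 2.00
OUTPUT_COST_PER_M = 6.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed per-token rates."""
    return (input_tokens * INPUT_COST_PER_M
            + output_tokens * OUTPUT_COST_PER_M) / 1_000_000

# Example: a 10,000-token prompt with a 1,000-token completion costs
# 10,000 * $2/1M + 1,000 * $6/1M = $0.020 + $0.006 = $0.026.
print(f"${request_cost(10_000, 1_000):.3f}")
```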
Developer ID: `mistralai/mixtral-8x22b-instruct`
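
The developer ID above is what you pass as the `model` parameter when calling the model. Below is a minimal sketch of a chat completion request through an OpenAI-compatible endpoint; the base URL and the `OPENROUTER_API_KEY` environment variable are assumptions for illustration, not confirmed by this page:

```python
import os
from openai import OpenAI  # pip install openai

# Assumed OpenAI-compatible endpoint and API-key env var; adjust for your provider.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="mistralai/mixtral-8x22b-instruct",  # developer ID from this page
    messages=[
        {"role": "user", "content": "Summarize the Mixtral 8x22B architecture."}
    ],
)
print(response.choices[0].message.content)
```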

## Related Models

- **Mistral: Devstral Small 1.1** ($0.10/1M, 131,072-token context): Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering ...
- **Mistral: Mixtral 8x7B Instruct** ($0.54/1M, 32,768-token context): Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts, by Mistral AI,...
- **Mistral: Mistral Medium 3.1** ($0.40/1M, 131,072-token context): Mistral Medium 3.1 is an updated version of Mistral Medium 3, which is a high-performance ...