
Mistral: Mixtral 8x22B Instruct

Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). A sparse Mixture-of-Experts model, it activates 39B of its 141B total parameters per token, making it notably cost-efficient for its size. Its strengths include strong math, coding,...

- Input Cost: $2.00 per 1M tokens
- Output Cost: $6.00 per 1M tokens
- Context Window: 65,536 tokens
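At these rates, a request's cost is (input tokens / 1M) × $2.00 plus (output tokens / 1M) × $6.00. A minimal sketch, with the rates hard-coded from the pricing above (the function name is illustrative):

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_rate: float = 2.00, output_rate: float = 6.00) -> float:
    """Estimate a single request's cost in USD; rates are per 1M tokens."""
    return (input_tokens / 1_000_000) * input_rate + (output_tokens / 1_000_000) * output_rate

# e.g. a 10,000-token prompt with a 2,000-token completion:
print(round(estimate_cost_usd(10_000, 2_000), 4))  # → 0.032
```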
Developer ID: mistralai/mixtral-8x22b-instruct
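The Developer ID is the slug you pass as the `model` field in API requests. A minimal sketch of an OpenAI-style chat-completions request body using that slug (the helper function is an illustration, not a specific SDK's API; endpoint and auth details are omitted):

```python
import json

MODEL_ID = "mistralai/mixtral-8x22b-instruct"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions request body for this model."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_chat_request("Explain mixture-of-experts in one sentence.")
print(json.dumps(body, indent=2))
```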

Related Models

- **Mistral: Mistral Small 3** (mistralai) — $0.05/1M, 32,768-token context
  Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance ac...
- **Mistral: Mixtral 8x7B Instruct** (mistralai) — $0.54/1M, 32,768-token context
  Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts, by Mistral AI,...
- **Mistral Large 2407** (mistralai) — $2.00/1M, 131,072-token context
  This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a ...