
Mistral: Mixtral 8x7B Instruct

Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts model by Mistral AI, built for chat and instruction use. It incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters; the Instruct variant is fine-tuned by Mistral to follow instructions. #moe

Input Cost: $0.54 per 1M tokens
Output Cost: $0.54 per 1M tokens
Context Window: 32,768 tokens
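
Since input and output share the same rate, estimating a request's cost is a single multiplication. A minimal sketch, using hypothetical token counts purely for illustration:

```python
# Estimated cost of one request at the listed Mixtral 8x7B Instruct rates.
# Input and output are both priced at $0.54 per 1M tokens (see table above).
RATE = 0.54 / 1_000_000  # USD per token, same for input and output

prompt_tokens = 10_000      # hypothetical request size
completion_tokens = 2_000   # hypothetical response size

cost = (prompt_tokens + completion_tokens) * RATE
print(f"${cost:.4f}")  # -> $0.0065
```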
Developer ID: mistralai/mixtral-8x7b-instruct
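
The Developer ID above is the model string you pass in an API request. A minimal sketch, assuming an OpenAI-compatible gateway; the base URL and the environment variable name are assumptions, not confirmed by this page:

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # assumed OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],  # hypothetical env var name
)

response = client.chat.completions.create(
    # The Developer ID listed above selects this model.
    model="mistralai/mixtral-8x7b-instruct",
    messages=[
        {"role": "user", "content": "Summarize mixture-of-experts in one sentence."}
    ],
    max_tokens=256,  # keep well under the 32,768-token context window
)
print(response.choices[0].message.content)
```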

Related Models

Mistral: Devstral Small 1.1

Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering ...

$0.10 per 1M tokens · 131,072-token context
Mistral: Ministral 3 3B 2512

The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny...

$0.10 per 1M tokens · 131,072-token context
Mistral: Devstral 2 2512 (free)

Devstral 2 is a state-of-the-art open-source model by Mistral AI specializing in agentic c...

Free · 262,144-token context