
Mistral: Mistral Small 3

Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. [Read the blog post about the model here.](https://mistral.ai/news/mistral-small-3/)

| Input Cost | Output Cost | Context Window |
| --- | --- | --- |
| $0.03 per 1M tokens | $0.11 per 1M tokens | 32,768 tokens |
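
At the listed rates, per-request cost scales linearly with token counts. A minimal sketch of the arithmetic in Python; the token counts in the example are hypothetical and actual usage depends on your prompts and completions:

```python
# Rough cost estimate for a single request at the listed rates.
INPUT_PRICE_PER_M = 0.03   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.11  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.6f}")  # ≈ $0.000115
```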
Developer ID: `mistralai/mistral-small-24b-instruct-2501`
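
The developer ID above is the identifier you pass as the `model` parameter when calling the API. A minimal sketch assuming an OpenAI-compatible chat completions endpoint; the base URL and API-key environment variable below are assumptions, so adjust them for your provider:

```python
# Minimal sketch: calling the model through an OpenAI-compatible endpoint.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # assumed OpenAI-compatible gateway
    api_key=os.environ["OPENROUTER_API_KEY"],  # hypothetical env var name
)

response = client.chat.completions.create(
    model="mistralai/mistral-small-24b-instruct-2501",  # developer ID from above
    messages=[
        {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}
    ],
)
print(response.choices[0].message.content)
```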

Related Models

| Model | Developer | Price | Context | Description |
| --- | --- | --- | --- | --- |
| Mistral: Devstral Medium | mistralai | $0.40 / 1M tokens | 131,072 tokens | Devstral Medium is a high-performance code generation and agentic reasoning model develope... |
| Mistral: Mistral Small 3.1 24B (free) | mistralai | Free | 128,000 tokens | Mistral Small 3.1 24B Instruct is an upgraded variant of Mistral Small 3 (2501), featuring... |
| Mistral Large 2407 | mistralai | $2.00 / 1M tokens | 131,072 tokens | This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a... |