Mistral: Ministral 8B

Ministral 8B is an 8B-parameter model featuring an interleaved sliding-window attention pattern for faster, memory-efficient inference. Designed for edge use cases, it supports a context length of up to 128k tokens and excels at knowledge and reasoning tasks. It outperforms its peers in the sub-10B category, making it well suited to low-latency, privacy-first applications.
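
The sliding-window half of that attention pattern can be pictured as a banded causal mask, where each token attends only to a fixed number of recent tokens. A minimal illustration of that mask; the window size and sequence length below are arbitrary examples, not the model's actual configuration:

```python
import numpy as np

def sliding_window_causal_mask(seq_len: int, window: int) -> np.ndarray:
    """True where query position i may attend to key position j."""
    i = np.arange(seq_len)[:, None]  # query positions, as a column
    j = np.arange(seq_len)[None, :]  # key positions, as a row
    # Causal (no looking ahead) and restricted to the last `window` positions.
    return (j <= i) & (j > i - window)

# Each query attends to at most `window` keys, so attention cost per token
# stays bounded instead of growing with the full sequence length.
print(sliding_window_causal_mask(seq_len=8, window=4).astype(int))
```

Restricting each query to a fixed window is what keeps attention memory roughly linear in sequence length rather than quadratic.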

Input Cost: $0.10 per 1M tokens
Output Cost: $0.10 per 1M tokens
Context Window: 131,072 tokens
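
Per-request cost scales linearly with token counts at these rates. A minimal sketch of the arithmetic using the listed prices; the token counts in the example are illustrative:

```python
# Rough per-request cost at the listed rates: $0.10 per 1M input tokens
# and $0.10 per 1M output tokens. Token counts below are illustrative.
INPUT_RATE_PER_1M = 0.10   # USD
OUTPUT_RATE_PER_1M = 0.10  # USD

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_1M + \
           (output_tokens / 1_000_000) * OUTPUT_RATE_PER_1M

# Example: a 4,000-token prompt producing a 1,000-token completion.
print(f"${estimate_cost_usd(4_000, 1_000):.6f}")  # $0.000500
```
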
Developer ID: mistralai/ministral-8b
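
The developer ID is the model name you would pass to an OpenAI-compatible chat completions client. A minimal sketch, assuming such an endpoint; the base URL and the API-key environment variable are placeholders, not taken from this page:

```python
# Minimal sketch: calling Ministral 8B through an OpenAI-compatible
# chat completions endpoint, using the developer ID as the model name.
# The base URL and API-key environment variable are placeholders.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example/v1",  # placeholder endpoint
    api_key=os.environ["YOUR_API_KEY"],          # placeholder variable name
)

response = client.chat.completions.create(
    model="mistralai/ministral-8b",  # developer ID from this page
    messages=[
        {"role": "user", "content": "Give a one-sentence summary of sliding-window attention."},
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

Note that prompt plus completion must fit within the 131,072-token context window listed above.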

Related Models

Mistral Large (mistralai)
$2.00/1M · 128,000-token context
This is Mistral AI's flagship model, Mistral Large 2 (version `mistral-large-2407`). It's ...

Mistral: Mistral 7B Instruct v0.2 (mistralai)
$0.20/1M · 32,768-token context
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed an...

Mistral: Devstral 2 2512 (mistralai)
$0.05/1M · 262,144-token context
Devstral 2 is a state-of-the-art open-source model by Mistral AI specializing in agentic c...