mistralai

Mistral: Mistral 7B Instruct v0.2

A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. An improved version of [Mistral 7B Instruct v0.1](/models/mistralai/mistral-7b-instruct-v0.1), with the following changes:

- 32k context window (vs. 8k in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention

- Input Cost: $0.20 per 1M tokens
- Output Cost: $0.20 per 1M tokens
- Context Window: 32,768 tokens
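With input and output both priced at $0.20 per 1M tokens, the cost of a request can be estimated directly from its token counts. A minimal sketch; the token counts in the example are hypothetical:

```python
# Estimate the cost of one request at $0.20 per 1M tokens
# for both input and output, as listed above.
INPUT_PRICE_PER_M = 0.20   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.20  # USD per 1M output tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (prompt_tokens * INPUT_PRICE_PER_M
            + completion_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Hypothetical example: a 2,000-token prompt with a 500-token reply.
print(f"${estimate_cost(2_000, 500):.6f}")  # -> $0.000500
```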
Developer ID: mistralai/mistral-7b-instruct-v0.2
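The developer ID above is the model string to pass when calling the model through an OpenAI-compatible chat completions API. A minimal sketch, assuming such an endpoint and the official `openai` Python SDK; the base URL and API key are placeholders for whichever provider hosts the model:

```python
from openai import OpenAI

# Assumption: an OpenAI-compatible provider; substitute your provider's
# actual base URL and API key.
client = OpenAI(
    base_url="https://example-provider.com/api/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="mistralai/mistral-7b-instruct-v0.2",  # developer ID from above
    messages=[
        {"role": "user",
         "content": "Summarize the main changes in Mistral 7B Instruct v0.2."},
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```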

Related Models

mistralai
$0.04/1M

Mistral: Ministral 3B

Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels...

Context: 131,072 tokens
mistralai
$0.10/1M

Mistral: Voxtral Small 24B 2507

Voxtral Small is an enhancement of Mistral Small 3, incorporating state-of-the-art audio i...

Context: 32,000 tokens
mistralai
$0.11/1M

Mistral: Mistral 7B Instruct v0.1

A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations ...

Context: 2,824 tokens