mistralai

Mistral Tiny

Note: This model is being deprecated. The recommended replacement is the newer [Ministral 8B](/mistral/ministral-8b). This model is currently powered by Mistral-7B-v0.2 and incorporates an improved fine-tune over [Mistral 7B](/models/mistralai/mistral-7b-instruct-v0.1), inspired by community work. It is best used for large batch processing tasks where cost is a significant factor but reasoning capabilities are not crucial.

Input Cost: $0.25 per 1M tokens
Output Cost: $0.25 per 1M tokens
Context Window: 32,768 tokens
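
With input and output both priced at $0.25 per 1M tokens, estimating the cost of a batch job is simple arithmetic. A minimal sketch in Python; the token counts below are hypothetical and only illustrate the calculation:

```python
# Listed rates: $0.25 per 1M tokens for both input and output.
PRICE_PER_MILLION_INPUT = 0.25
PRICE_PER_MILLION_OUTPUT = 0.25

# Hypothetical batch: 500M prompt tokens, 25M completion tokens.
input_tokens = 500_000_000
output_tokens = 25_000_000

cost = (input_tokens / 1_000_000) * PRICE_PER_MILLION_INPUT \
     + (output_tokens / 1_000_000) * PRICE_PER_MILLION_OUTPUT
print(f"Estimated batch cost: ${cost:,.2f}")  # -> $131.25
```
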
Developer ID: mistralai/mistral-tiny
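
The developer ID above is the string passed as the model name when calling the API. A minimal sketch using an OpenAI-compatible chat completions client; the base URL, API key environment variable, and prompt are assumptions for illustration:

```python
import os
from openai import OpenAI  # OpenAI-compatible client (openai>=1.0)

# Assumed endpoint and env var name; substitute your provider's values.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="mistralai/mistral-tiny",  # developer ID from this page
    messages=[
        {"role": "user", "content": "Classify the sentiment of: 'Great value for the price.'"}
    ],
    max_tokens=16,
)
print(response.choices[0].message.content)
```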

Related Models

Mistral: Mistral Nemo
mistralai · $0.02/1M · 131,072 ctx
A 12B parameter model with a 128k token context length built by Mistral in collaboration w...

Mistral: Mistral 7B Instruct v0.2
mistralai · $0.20/1M · 32,768 ctx
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed an...

Mistral: Mistral 7B Instruct v0.1
mistralai · $0.11/1M · 2,824 ctx
A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations ...