
DeepSeek: R1 Distill Qwen 32B

DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), fine-tuned on outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Benchmark results include:

- AIME 2024 pass@1: 72.6
- MATH-500 pass@1: 94.3
- Codeforces rating: 1691

By distilling DeepSeek R1's reasoning outputs into a smaller dense model, it achieves performance competitive with larger frontier models.

- Input Cost: $0.29 per 1M tokens
- Output Cost: $0.29 per 1M tokens
- Context Window: 32,768 tokens
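
As a quick worked example (the token counts below are hypothetical, not from the listing), these rates translate into per-request cost as follows:

```python
# Hypothetical cost estimate using the listed rates: $0.29 per 1M tokens
# for both input and output.
INPUT_RATE = 0.29 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.29 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 2,000-token prompt with a 10,000-token reasoning-heavy reply:
print(f"${estimate_cost(2_000, 10_000):.6f}")  # -> $0.003480
```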
Developer ID: `deepseek/deepseek-r1-distill-qwen-32b`
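
For illustration, here is a minimal sketch of calling the model by this ID through an OpenAI-compatible chat completions API. The base URL and API-key environment variable are assumptions (an OpenRouter-style endpoint), not part of the listing:

```python
# Minimal sketch: querying the model via an OpenAI-compatible API.
# The base URL and environment variable name are assumptions; substitute
# your provider's actual values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",    # assumed provider endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],   # assumed env var name
)

response = client.chat.completions.create(
    model="deepseek/deepseek-r1-distill-qwen-32b",  # developer ID from above
    messages=[
        {"role": "user", "content": "Prove that sqrt(2) is irrational."}
    ],
)
print(response.choices[0].message.content)
```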

Related Models

DeepSeek: DeepSeek V3.1 Terminus ($0.21/1M, 163,840-token context)

DeepSeek-V3.1 Terminus is an update to [DeepSeek V3.1](/deepseek/deepseek-chat-v3.1) that ...
DeepSeek: DeepSeek V3.2 ($0.25/1M, 163,840-token context)

DeepSeek-V3.2 is a large language model designed to harmonize high computational efficienc...
DeepSeek: DeepSeek V3.2 Speciale ($0.27/1M, 163,840-token context)

DeepSeek-V3.2-Speciale is a high-compute variant of DeepSeek-V3.2 optimized for maximum re...