
Xiaomi: MiMo-V2-Flash

MiMo-V2-Flash is an open-source foundation language model developed by Xiaomi. It is a Mixture-of-Experts model with 309B total parameters and 15B active parameters, built on a hybrid attention architecture. MiMo-V2-Flash supports a hybrid-thinking toggle and a 256K context window, and excels at reasoning, coding, and agentic scenarios. On SWE-bench Verified and SWE-bench Multilingual, it ranks as the #1 open-source model globally, delivering performance comparable to Claude Sonnet 4.5 at only about 3.5% of the cost. Users can control reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config).
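As a minimal sketch of the hybrid-thinking toggle described above: the snippet below builds a chat-completions request body with the `reasoning` object's `enabled` boolean set per call. The field names follow the OpenRouter reasoning-tokens docs linked above; the helper name `build_request` and the prompt are illustrative, and no network call is made.

```python
import json

def build_request(prompt: str, thinking: bool) -> str:
    """Serialize an OpenRouter chat-completions request body for MiMo-V2-Flash."""
    body = {
        "model": "xiaomi/mimo-v2-flash",
        "messages": [{"role": "user", "content": prompt}],
        # Hybrid-thinking toggle: when enabled, the model may emit
        # reasoning tokens before its final answer.
        "reasoning": {"enabled": thinking},
    }
    return json.dumps(body)

payload = build_request("Explain mixture-of-experts routing.", thinking=True)
print(json.loads(payload)["reasoning"])  # {'enabled': True}
```

The same payload with `thinking=False` suppresses reasoning tokens, which trades answer depth for latency and output cost.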

Input cost: $0.09 / 1M tokens
Output cost: $0.29 / 1M tokens
Context window: 262,144 tokens
Developer ID: xiaomi/mimo-v2-flash
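The per-token rates above can be turned into a per-request cost estimate; a small sketch using the listed prices (the function name `estimate_cost_usd` and the example token counts are illustrative):

```python
# Listed rates from the pricing table above, in USD per 1M tokens.
INPUT_PER_M = 0.09
OUTPUT_PER_M = 0.29

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost from prompt and completion token counts."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# e.g. a 100K-token prompt with a 10K-token reply:
print(round(estimate_cost_usd(100_000, 10_000), 4))  # 0.0119
```

Note that with hybrid thinking enabled, reasoning tokens are billed as output, so thinking-heavy requests skew toward the higher $0.29 rate.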

Related Models

LiquidAI: LFM2.5-1.2B-Thinking (free)
Free / 1M tokens, 32,768-token context
LFM2.5-1.2B-Thinking is a lightweight reasoning-focused model optimized for agentic tasks,...

OpenAI: GPT-5.2-Codex
$1.75 / 1M tokens, 400,000-token context
GPT-5.2-Codex is an upgraded version of GPT-5.1-Codex optimized for software engineering a...

AllenAI: Olmo 3.1 32B Instruct
$0.20 / 1M tokens, 65,536-token context
Olmo 3.1 32B Instruct is a large-scale, 32-billion-parameter instruction-tuned language mo...