
MoonshotAI: Kimi K2 Thinking

Kimi K2 Thinking is Moonshot AI’s most advanced open reasoning model to date, extending the K2 series into agentic, long-horizon reasoning. Built on the trillion-parameter Mixture-of-Experts (MoE) architecture introduced in Kimi K2 and trained with MuonClip optimization, it activates 32 billion parameters per forward pass and supports a 256K-token context window. The model interleaves persistent step-by-step reasoning with dynamic tool invocation, enabling autonomous research, coding, and writing workflows that continue across hundreds of sequential actions without drift. It sets new open-source benchmarks on HLE, BrowseComp, SWE-Multilingual, and LiveCodeBench, and maintains stable agentic behavior through 200–300 consecutive tool calls, combining strong reasoning depth with high inference efficiency for demanding agentic and analytical tasks.

Input Cost: $0.40 per 1M tokens
Output Cost: $1.75 per 1M tokens
Context Window: 262,144 tokens
Developer ID: moonshotai/kimi-k2-thinking
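
Below is a minimal sketch of calling the model by its developer ID through an OpenAI-compatible chat completions endpoint, including a tool definition to exercise the tool-calling behavior described above. The base URL, the `PROVIDER_API_KEY` environment variable, and the `web_search` tool are illustrative assumptions, not details taken from this page.

```python
# Minimal sketch: one request to Kimi K2 Thinking via an OpenAI-compatible API.
# base_url and PROVIDER_API_KEY are placeholders; the web_search tool is hypothetical.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider/api/v1",  # assumption: provider's OpenAI-compatible endpoint
    api_key=os.environ["PROVIDER_API_KEY"],      # assumption: your provider API key
)

# Hypothetical tool the model may decide to call during its reasoning loop.
tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="moonshotai/kimi-k2-thinking",
    messages=[{"role": "user", "content": "Summarize recent work on MoE inference efficiency."}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model requested a tool call; an agent loop would execute it and
    # return the result in a follow-up "tool" role message.
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)
```

In an agentic setup, this request sits inside a loop that executes each requested tool, appends the result to the conversation, and re-invokes the model until it returns a final answer.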

Related Models

moonshotai
$0.60/1M

MoonshotAI: Kimi K2 0905 (exacto)

Kimi K2 0905 is the September update of [Kimi K2 0711](moonshotai/kimi-k2). It is a large-...

262,144 ctx
moonshotai
$0.39/1M

MoonshotAI: Kimi K2 0905

Kimi K2 0905 is the September update of [Kimi K2 0711](moonshotai/kimi-k2). It is a large-...

262,144 ctx
moonshotai
Free/1M

MoonshotAI: Kimi K2 0711 (free)

Kimi K2 Instruct is a large-scale Mixture-of-Experts (MoE) language model developed by Moo...

32,768 ctx