Cogito V2 Preview Llama 109B
An instruction-tuned, hybrid-reasoning Mixture-of-Experts model built on Llama-4-Scout-17B-16E. Cogito v2 can answer directly or engage an extended "thinking" phase, with alignment guided by Iterated Distillation & Amplification (IDA). It targets coding, STEM, instruction following, and general helpfulness, with stronger multilingual, tool-calling, and reasoning performance than size-equivalent baselines. The model supports long-context use (up to 10M tokens) and standard Transformers workflows. Reasoning can be toggled per request via the `enabled` boolean of the `reasoning` parameter. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)
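As a minimal sketch of the reasoning toggle, the request below goes through OpenRouter's chat completions endpoint with `reasoning.enabled` set; the model slug `deepcogito/cogito-v2-preview-llama-109b-moe` and the `OPENROUTER_API_KEY` environment variable are assumptions, so check the model page and the reasoning-tokens docs linked above for the authoritative values.

```python
# Sketch: toggling Cogito v2's extended thinking phase via OpenRouter.
# The model slug below is an assumption; verify it on the model page.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "deepcogito/cogito-v2-preview-llama-109b-moe",  # assumed slug
        "messages": [
            {"role": "user", "content": "Prove that the sum of two even numbers is even."}
        ],
        # Set "enabled": False to get a direct answer with no thinking phase.
        "reasoning": {"enabled": True},
    },
    timeout=120,
)
print(response.json()["choices"][0]["message"]["content"])
```

With `"enabled": False`, the model skips the thinking phase and answers directly, which trades some reasoning depth for lower latency and token usage.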
Related Models
- **Deep Cogito: Cogito v2.1 671B**: Cogito v2.1 671B MoE represents one of the strongest open models globally, matching perfor...
- **Deep Cogito: Cogito V2 Preview Llama 405B**: Cogito v2 405B is a dense hybrid reasoning model that combines direct answering capabiliti...
- **Deep Cogito: Cogito V2 Preview Llama 70B**: Cogito v2 70B is a dense hybrid reasoning model that combines direct answering capabilitie...