Meituan: LongCat Flash Chat

LongCat-Flash-Chat is a large-scale Mixture-of-Experts (MoE) model with 560B total parameters, of which 18.6B–31.3B (≈27B on average) are dynamically activated per token. It introduces a shortcut-connected MoE design that reduces communication overhead and achieves high throughput, while maintaining training stability through scaling strategies such as hyperparameter transfer, deterministic computation, and multi-stage optimization. This release, LongCat-Flash-Chat, is a non-thinking foundation model optimized for conversational and agentic tasks. It supports context windows of up to 128K tokens and delivers competitive performance across reasoning, coding, instruction-following, and domain benchmarks, with particular strengths in tool use and complex multi-step interactions.
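The dynamic-activation idea can be illustrated with a toy sketch: if some of an MoE layer's routing slots can land on zero-computation (identity) experts, the number of activated parameters varies from token to token while the average stays bounded. All names and numbers below are illustrative assumptions, not LongCat-Flash's actual configuration.

```python
import random

# Toy illustration of per-token dynamic activation in an MoE layer.
# Routing slots that land on "zero-computation" (identity) experts cost
# nothing, so activated parameter count varies per token. Numbers are
# hypothetical and do not reflect LongCat-Flash's real layout.
PARAMS_PER_EXPERT = 0.1e9   # hypothetical parameters per expert
TOP_K = 8                   # hypothetical routing slots per token
N_REAL, N_ZERO = 64, 16     # real experts vs zero-cost identity experts

def activated_params(rng: random.Random) -> float:
    """Activated parameter count for one token under random routing."""
    slots = rng.sample(range(N_REAL + N_ZERO), TOP_K)
    real_hits = sum(1 for e in slots if e < N_REAL)  # identity experts ~free
    return real_hits * PARAMS_PER_EXPERT

rng = random.Random(0)
counts = [activated_params(rng) for _ in range(1000)]
print(f"avg activated: {sum(counts) / len(counts) / 1e9:.2f}B")
```

With real routing the expert choice is learned rather than random, but the effect on the activated-parameter budget is the same: a per-token range with a stable average.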

Input Cost: $0.20 per 1M tokens
Output Cost: $0.80 per 1M tokens
Context Window: 131,072 tokens
Developer ID: meituan/longcat-flash-chat
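For a quick sanity check on the listed rates, a request's cost is just token count divided by one million, times the per-1M price. The helper below is a hypothetical sketch using the prices shown above, not part of any official SDK:

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_rate: float = 0.20,
                      output_rate: float = 0.80) -> float:
    """Estimate a request's cost from per-1M-token rates.

    Defaults are the LongCat-Flash-Chat prices listed above
    ($0.20 input / $0.80 output per 1M tokens); the function name
    and signature are illustrative, not an official API.
    """
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example: a 50K-token prompt with a 2K-token completion
cost = estimate_cost_usd(50_000, 2_000)
print(f"${cost:.4f}")  # → $0.0116
```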

Related Models

LiquidAI: LFM2.5-1.2B-Thinking (free)

LFM2.5-1.2B-Thinking is a lightweight reasoning-focused model optimized for agentic tasks,...

Pricing: Free · Context: 32,768 tokens
OpenAI: GPT-5.2-Codex

GPT-5.2-Codex is an upgraded version of GPT-5.1-Codex optimized for software engineering a...

Pricing: $1.75 per 1M tokens · Context: 400,000 tokens
AllenAI: Olmo 3.1 32B Instruct

Olmo 3.1 32B Instruct is a large-scale, 32-billion-parameter instruction-tuned language mo...

Pricing: $0.20 per 1M tokens · Context: 65,536 tokens