
Inception: Mercury Coder

Mercury Coder is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like Claude 3.5 Haiku and GPT-4o Mini while matching their performance. Mercury Coder's speed means developers can stay in the flow while coding, enjoying rapid chat-based iteration and responsive code completion suggestions. On Copilot Arena, Mercury Coder ranks 1st in speed and ties for 2nd in quality. Read more in the [announcement blog post](https://www.inceptionlabs.ai/blog/introducing-mercury).
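
To illustrate the intuition behind the speed claim (this is a toy sketch of discrete, mask-based diffusion decoding in general, not Inception's actual implementation): an autoregressive model emits one token per forward pass, while a diffusion-style decoder starts from a fully masked sequence and commits many positions in parallel over a small number of refinement steps. All names below, including the stand-in predictor, are hypothetical.

```python
import math
import random

MASK = "<mask>"
VOCAB = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "+"]

def toy_predictor(seq):
    """Stand-in for a denoising network: guess (token, confidence) for each masked slot."""
    return {i: (random.choice(VOCAB), random.random())
            for i, tok in enumerate(seq) if tok == MASK}

def diffusion_decode(length=10, steps=4):
    """Fill a fully masked sequence in a fixed number of parallel refinement passes."""
    seq = [MASK] * length
    per_step = math.ceil(length / steps)
    for _ in range(steps):
        guesses = toy_predictor(seq)
        if not guesses:
            break
        # Commit the most confident predictions in parallel each pass,
        # rather than one token at a time as an autoregressive decoder would.
        ranked = sorted(guesses.items(), key=lambda kv: kv[1][1], reverse=True)
        for i, (tok, _conf) in ranked[:per_step]:
            seq[i] = tok
    return seq

print(" ".join(diffusion_decode()))
```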

Input Cost: $0.25 per 1M tokens
Output Cost: $1.00 per 1M tokens
Context Window: 128,000 tokens
Developer ID: inception/mercury-coder
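
The developer ID above is what you pass as the model name. As a minimal sketch, assuming the model is served through an OpenAI-compatible chat completions endpoint (the base URL and API-key environment variable below are placeholders, not documented values):

```python
import os
from openai import OpenAI

# Hypothetical OpenAI-compatible client setup; base_url and the API-key
# environment variable are placeholders, not documented endpoints.
client = OpenAI(
    base_url="https://example-gateway/v1",
    api_key=os.environ["EXAMPLE_API_KEY"],
)

response = client.chat.completions.create(
    model="inception/mercury-coder",  # developer ID from the listing above
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
)
print(response.choices[0].message.content)

# Rough cost estimate from the listed prices: $0.25 per 1M input tokens,
# $1.00 per 1M output tokens; token counts come from the API usage field.
usage = response.usage
cost = usage.prompt_tokens * 0.25 / 1e6 + usage.completion_tokens * 1.00 / 1e6
print(f"approx. cost: ${cost:.6f}")
```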

Related Models

Inception: Mercury (inception)
$0.25/1M · 128,000-token context
Mercury is the first diffusion large language model (dLLM). Applying a breakthrough discre...

OpenAI: GPT Audio (openai)
$2.50/1M · 128,000-token context
The gpt-audio model is OpenAI's first generally available audio model. The new snapshot fe...

OpenAI: GPT Audio Mini (openai)
$0.60/1M · 128,000-token context
A cost-efficient version of GPT Audio. The new snapshot features an upgraded decoder for m...