meta-llama

Meta: Llama 4 Maverick

Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass (400B total). It supports multilingual text and image input, and produces multilingual text and code output across 12 supported languages. Optimized for vision-language tasks, Maverick is instruction-tuned for assistant-like behavior, image reasoning, and general-purpose multimodal interaction. Maverick features early fusion for native multimodality and a 1 million token context window. It was trained on a curated mixture of public, licensed, and Meta-platform data, covering ~22 trillion tokens, with a knowledge cutoff in August 2024. Released on April 5, 2025 under the Llama 4 Community License, Maverick is suited for research and commercial applications requiring advanced multimodal understanding and high model throughput.

Input Cost: $0.15 per 1M tokens
Output Cost: $0.60 per 1M tokens
Context Window: 1,048,576 tokens
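
A quick way to read the rates above: per-request cost is (input tokens / 1,000,000) × $0.15 plus (output tokens / 1,000,000) × $0.60. A minimal sketch in Python, with illustrative (not measured) token counts:

```python
# Estimate per-request cost at the listed Maverick rates.
INPUT_PRICE_PER_M = 0.15   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 100,000-token prompt producing a 5,000-token reply
print(f"${estimate_cost(100_000, 5_000):.4f}")  # -> $0.0180
```
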
Model ID: meta-llama/llama-4-maverick
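
The model ID above can be used with any OpenAI-compatible chat completions client. The following is a minimal sketch assuming such an endpoint; the base URL, API key, and image URL are placeholders, not values from this listing:

```python
from openai import OpenAI

# Placeholder endpoint and key: substitute your provider's
# OpenAI-compatible base URL and your own credentials.
client = OpenAI(
    base_url="https://example-provider.invalid/v1",
    api_key="YOUR_API_KEY",
)

# Multimodal request: text plus an image URL, answered in text.
response = client.chat.completions.create(
    model="meta-llama/llama-4-maverick",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```

Actual parameter support (accepted image formats, maximum usable context) depends on the hosting provider.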

Related Models

meta-llama
$0.02/1M

Meta: Llama 3.2 3B Instruct

Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for adv...

Context: 131,072 tokens
meta-llama
$3.50/1M

Meta: Llama 3.1 405B Instruct

The highly anticipated 400B class of Llama 3 is here! Clocking in at 128k context with impr...

Context: 10,000 tokens
meta-llama
$0.03/1M

Meta: Llama 3 8B Instruct

Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 8B...

Context: 8,192 tokens