Meta: Llama 3.2 11B Vision Instruct

Llama 3.2 11B Vision Instruct is an 11-billion-parameter multimodal model from Meta, designed for tasks that combine visual and textual data. It excels at tasks such as image captioning and visual question answering.

Input Cost: $0.05 per 1M tokens
Output Cost: $0.05 per 1M tokens
Context Window: 131,072 tokens
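At these rates, the cost of a request is a simple linear function of input and output token counts. A minimal sketch (the helper name and example token counts are illustrative, not part of the listing):

```python
# Listed rates for this model: $0.05 per 1M input tokens,
# $0.05 per 1M output tokens.
INPUT_COST_PER_M = 0.05
OUTPUT_COST_PER_M = 0.05

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (input_tokens * INPUT_COST_PER_M
            + output_tokens * OUTPUT_COST_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token reply.
print(f"${estimate_cost(2_000, 500):.6f}")  # → $0.000125
```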
Developer ID: meta-llama/llama-3.2-11b-vision-instruct
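The developer ID above is what you pass as the model name when calling the model. A minimal sketch of a multimodal request body, assuming an OpenAI-compatible chat completions schema (common among hosts of this model; the exact endpoint, auth, and field names depend on your provider, and `build_vision_request` is a hypothetical helper):

```python
import json

MODEL_ID = "meta-llama/llama-3.2-11b-vision-instruct"

def build_vision_request(prompt: str, image_url: str) -> dict:
    """Build a chat request body pairing a text prompt with one image."""
    return {
        "model": MODEL_ID,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

body = build_vision_request("Describe this image.", "https://example.com/cat.png")
print(json.dumps(body, indent=2))
```

The body is constructed but not sent; POST it to your provider's chat completions endpoint with your API key to get a response.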

Related Models

- Llama Guard 3 8B (meta-llama): $0.02/1M, 131,072-token context. Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification...
- Meta: Llama 4 Maverick (meta-llama): $0.15/1M, 1,048,576-token context. Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta...
- Meta: Llama 3.2 3B Instruct (free) (meta-llama): Free, 131,072-token context. Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for advanced...