
Meta: Llama 3.2 11B Vision Instruct

Llama 3.2 11B Vision is an 11-billion-parameter multimodal model designed for tasks that combine visual and textual data, such as image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pre-trained on a large dataset of image-text pairs, it performs well on complex, high-accuracy image analysis. Its ability to integrate visual understanding with language processing makes it well suited to industries that need visual-linguistic AI, such as content creation, AI-driven customer service, and research. See the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md) for details. Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).

- Input Cost: $0.05 per 1M tokens
- Output Cost: $0.05 per 1M tokens
- Context Window: 131,072 tokens
- Developer ID: `meta-llama/llama-3.2-11b-vision-instruct`
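
A minimal usage sketch, assuming an OpenAI-compatible chat-completions endpoint: the base URL, API key, and image URL below are placeholders rather than values from this page, but the request shows how a text prompt and an image are combined in a single message for visual question answering.

```python
# Sketch of a visual question answering request against an
# OpenAI-compatible endpoint. Base URL, API key, and image URL
# are placeholders, not values from this page.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.invalid/api/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.2-11b-vision-instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```

At the listed rates of $0.05 per 1M tokens for both input and output, a call with roughly 1,000 prompt tokens and 500 completion tokens comes to about $0.000075; image inputs typically add to the token count (or are priced separately by some providers), so actual per-request costs will vary.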

Related Models

- Meta: Llama 3.3 70B Instruct (meta-llama) · $0.10/1M · 131,072-token context
  The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction...

- Meta: Llama 3.1 405B Instruct (meta-llama) · $3.50/1M · 10,000-token context
  The highly anticipated 400B class of Llama3 is here! Clocking in at 128k context with impr...

- Meta: Llama 4 Scout (meta-llama) · $0.08/1M · 327,680-token context
  Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by...