Vision Models
Google: Gemma 4 31B
Gemma 4 31B Instruct is Google DeepMind's 30.7B dense multimodal model supporting text and...
Z.ai: GLM 5V Turbo
GLM-5V-Turbo is Z.ai's first native multimodal agent foundation model, built for vision-...
Reka Edge
Reka Edge is an extremely efficient 7B multimodal vision-language model that accepts image...
Xiaomi: MiMo-V2-Omni
MiMo-V2-Omni is a frontier omni-modal model that natively processes image, video, and audi...
OpenAI: GPT-5.4 Nano
GPT-5.4 nano is the most lightweight and cost-efficient variant of the GPT-5.4 family, opt...
OpenAI: GPT-5.4 Mini
GPT-5.4 mini brings the core capabilities of GPT-5.4 to a faster, more efficient model opt...
Qwen: Qwen3.5-9B
Qwen3.5-9B is a multimodal foundation model from the Qwen3.5 family, designed to deliver s...
Google: Nano Banana 2 (Gemini 3.1 Flash Image Preview)
Gemini 3.1 Flash Image Preview, a.k.a. "Nano Banana 2," is Google's latest state of the ...
Qwen: Qwen3.5-35B-A3B
The Qwen3.5 Series 35B-A3B is a native vision-language model designed with a hybrid archit...
Qwen: Qwen3.5-27B
The Qwen3.5 27B native vision-language Dense model incorporates a linear attention mechani...
Qwen: Qwen3.5-122B-A10B
The Qwen3.5 122B-A10B native vision-language model is built on a hybrid architecture that ...
Qwen: Qwen3.5-Flash
The Qwen3.5 native vision-language Flash models are built on a hybrid architecture that in...
Qwen: Qwen3.5 Plus 2026-02-15
The Qwen3.5 native vision-language series Plus models are built on a hybrid architecture t...
Qwen: Qwen3.5 397B A17B
The Qwen3.5 series 397B-A17B native vision-language model is built on a hybrid architectur...
Z.ai: GLM 4.6V
GLM-4.6V is a large multimodal model designed for high-fidelity visual understanding and l...
Amazon: Nova 2 Lite
Nova 2 Lite is a fast, cost-effective reasoning model for everyday workloads that can proc...
Mistral: Ministral 3 8B 2512
A balanced model in the Ministral 3 family, Ministral 3 8B is a powerful, efficient tiny l...
Mistral: Ministral 3 3B 2512
The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny...
Google: Nano Banana Pro (Gemini 3 Pro Image Preview)
Nano Banana Pro is Google's most advanced image-generation and editing model, built on G...
Qwen: Qwen3 VL 32B Instruct
Qwen3-VL-32B-Instruct is a large-scale multimodal vision-language model designed for high-...
OpenAI: GPT-5 Image Mini
GPT-5 Image Mini combines OpenAI's advanced language capabilities, powered by [GPT-5 Mini]...
Qwen: Qwen3 VL 8B Instruct
Qwen3-VL-8B-Instruct is a multimodal vision-language model from the Qwen3-VL series, built...
OpenAI: GPT-5 Image
[GPT-5](https://openrouter.ai/openai/gpt-5) Image combines OpenAI's GPT-5 model with state...
Google: Nano Banana (Gemini 2.5 Flash Image)
Gemini 2.5 Flash Image, a.k.a. "Nano Banana," is now generally available. It is a state of...
Qwen: Qwen3 VL 30B A3B Thinking
Qwen3-VL-30B-A3B-Thinking is a multimodal model that unifies strong text generation with v...
Qwen: Qwen3 VL 30B A3B Instruct
Qwen3-VL-30B-A3B-Instruct is a multimodal model that unifies strong text generation with v...
Qwen: Qwen3 VL 235B A22B Thinking
Qwen3-VL-235B-A22B Thinking is a multimodal model that unifies strong text generation with...
Qwen: Qwen3 VL 235B A22B Instruct
Qwen3-VL-235B-A22B Instruct is an open-weight multimodal model that unifies strong text ge...
Baidu: ERNIE 4.5 VL 28B A3B
A powerful multimodal Mixture-of-Experts chat model featuring 28B total parameters with 3B...
Z.ai: GLM 4.5V
GLM-4.5V is a vision-language foundation model for multimodal agent applications. Built on...
ByteDance: UI-TARS 7B
UI-TARS-1.5 is a multimodal vision-language agent optimized for GUI-based environments, in...
xAI: Grok 4
Grok 4 is xAI's latest reasoning model with a 256k context window. It supports parallel to...
Baidu: ERNIE 4.5 VL 424B A47B
ERNIE-4.5-VL-424B-A47B is a multimodal Mixture-of-Experts (MoE) model from Baidu's ERNIE...
Arcee AI: Spotlight
Spotlight is a 7-billion-parameter vision-language model derived from Qwen 2.5-VL ...
Qwen: Qwen2.5 VL 32B Instruct
Qwen2.5-VL-32B is a multimodal vision-language model fine-tuned through reinforcement lear...
Google: Gemma 3 4B (free)
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It ha...
Google: Gemma 3 4B
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It ha...
Google: Gemma 3 12B (free)
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It ha...
Google: Gemma 3 12B
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It ha...
Google: Gemma 3 27B (free)
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It ha...
Google: Gemma 3 27B
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It ha...
Qwen: Qwen2.5 VL 72B Instruct
Qwen2.5-VL is proficient in recognizing common objects such as flowers, birds, fish, and i...
MiniMax: MiniMax-01
MiniMax-01 combines MiniMax-Text-01 for text generation and MiniMax-VL-01 for image u...
Amazon: Nova Lite 1.0
Amazon Nova Lite 1.0 is a very low-cost multimodal model from Amazon that focuses on fast ...
Mistral: Pixtral Large 2411
Pixtral Large is a 124B parameter, open-weight, multimodal model built on top of [Mistral ...
Meta: Llama 3.2 11B Vision Instruct
Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle ...
OpenAI: GPT-4o-mini (2024-07-18)
GPT-4o mini is OpenAI's newest model after [GPT-4 Omni](/models/openai/gpt-4o), supporting...
OpenAI: GPT-4o-mini
GPT-4o mini is OpenAI's newest model after [GPT-4 Omni](/models/openai/gpt-4o), supporting...
OpenAI: GPT-4o (2024-05-13)
GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs...
OpenAI: GPT-4o
GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs...
OpenAI: GPT-4o (extended)
GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs...
OpenAI: GPT-4 Turbo
The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mo...
OpenAI: GPT-4 Turbo (older v1106)
The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mo...