
LiquidAI: LFM2-8B-A1B

LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI’s LFM2 family, built for fast, high-quality inference on edge hardware. It uses 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low—making it ideal for phones, tablets, and laptops.
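The total-vs-active parameter split described above is the defining property of sparse Mixture-of-Experts layers: a router picks a small subset of experts for each token, so only a fraction of the weights run per forward pass. A minimal top-k routing sketch in NumPy (illustrative only; the expert count, dimensions, and dense per-expert matrices here are made-up examples, not the actual LFM2 architecture):

```python
import numpy as np

def moe_forward(x, expert_weights, router_weights, k=2):
    """Route each token to its top-k experts; only those experts are evaluated."""
    logits = x @ router_weights                        # (tokens, n_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]         # indices of the k best experts
    sel = np.take_along_axis(logits, topk, axis=-1)    # their logits
    gates = np.exp(sel - sel.max(axis=-1, keepdims=True))
    gates /= gates.sum(axis=-1, keepdims=True)         # softmax over selected experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):                        # mix the k active experts per token
        for j in range(k):
            e = topk[t, j]
            out[t] += gates[t, j] * (x[t] @ expert_weights[e])
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 16, 8, 4
x = rng.standard_normal((tokens, d))
experts = rng.standard_normal((n_experts, d, d))       # one weight matrix per expert
router = rng.standard_normal((d, n_experts))
y = moe_forward(x, experts, router, k=2)
# With k=2 of 8 experts active, only 1/4 of the expert parameters are used per token.
```

The same principle, at much larger scale, is how a model can hold 8.3B parameters while touching only ~1.5B of them per token.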

Input cost: $0.01 per 1M tokens
Output cost: $0.02 per 1M tokens
Context window: 32,768 tokens
Developer ID: liquid/lfm2-8b-a1b
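At the listed rates, per-request cost is simple arithmetic; a quick sketch (the token counts are made-up example values):

```python
INPUT_PER_1M = 0.01   # USD per 1M input tokens, from the listing above
OUTPUT_PER_1M = 0.02  # USD per 1M output tokens, from the listing above

def request_cost(input_tokens, output_tokens):
    """Estimate the USD cost of one request at the listed LFM2-8B-A1B rates."""
    return (input_tokens * INPUT_PER_1M + output_tokens * OUTPUT_PER_1M) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion
cost = request_cost(10_000, 2_000)
# 10,000 x $0.01/1M + 2,000 x $0.02/1M = $0.0001 + $0.00004 = $0.00014
```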

Related Models

LiquidAI: LFM2.5-1.2B-Thinking (free)
LFM2.5-1.2B-Thinking is a lightweight reasoning-focused model optimized for agentic tasks,...
Pricing: Free per 1M tokens · Context window: 32,768 tokens
LiquidAI: LFM2.5-1.2B-Instruct (free)
LFM2.5-1.2B-Instruct is a compact, high-performance instruction-tuned model built for fast...
Pricing: Free per 1M tokens · Context window: 32,768 tokens
LiquidAI: LFM2-2.6B
LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed fo...
Pricing: $0.01 per 1M tokens · Context window: 32,768 tokens