LFM2

Liquid AI (USA) · July 10, 2025

Parameters

350M, 700M, 1.2B, and 2.6B (four dense models), plus an 8B-A1B MoE (8.3B total / 1.5B active)

License

Apache 2.0-based license (free commercial use for companies under $10M revenue)

Key Features

- Hybrid architecture: 10 double-gated short-range convolution blocks + 6 grouped-query attention (GQA) blocks (see the sketch after this list)
- 3× faster training than the previous LFM generation
- 2× faster decode and prefill on CPU than Qwen3
- Designed for edge/on-device deployment (smartphones, laptops, vehicles); runs efficiently on CPU, GPU, and NPU hardware
- Outperforms Qwen3, Gemma 3, and Phi-4-Mini within their respective size classes
- Pre-trained on 10-12T tokens, with 32K-context mid-training
- Supports creative writing, agentic tasks, data extraction, RAG, and multi-turn conversations
- 8 languages: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish
- Positioned by Liquid AI as the first US-developed small models to beat Chinese open models on the efficiency/quality frontier
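The sketch below illustrates the hybrid 10 + 6 block layout described above. It is a minimal, simplified rendering, not Liquid AI's implementation: the gating placement, normalization, interleaving order, hidden dimensions, and the use of plain multi-head attention (without a causal mask or grouped KV heads) are all assumptions made for brevity.

```python
# Minimal sketch of an LFM2-style hybrid stack: 10 double-gated short-range
# convolution blocks plus 6 attention blocks. All internals are illustrative
# assumptions; only the block counts come from the source.
import torch
import torch.nn as nn

class ShortConvBlock(nn.Module):
    """Double-gated short-range depthwise convolution block (simplified)."""
    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        self.in_gate = nn.Linear(dim, dim)    # input gate (assumed placement)
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size - 1, groups=dim)
        self.out_gate = nn.Linear(dim, dim)   # output gate (assumed placement)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        g_in = torch.sigmoid(self.in_gate(x))
        h = (x * g_in).transpose(1, 2)         # (batch, dim, seq)
        h = self.conv(h)[..., : x.size(1)]     # trim right pad -> causal conv
        h = h.transpose(1, 2)                  # back to (batch, seq, dim)
        g_out = torch.sigmoid(self.out_gate(x))
        return x + self.proj(h * g_out)        # residual connection

class AttnBlock(nn.Module):
    """Attention block; plain MHA stands in for GQA, causal mask omitted."""
    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        out, _ = self.attn(h, h, h, need_weights=False)
        return x + out                         # residual connection

class HybridStack(nn.Module):
    """10 conv blocks then 6 attention blocks (ordering is an assumption)."""
    def __init__(self, dim: int = 256):
        super().__init__()
        blocks = [ShortConvBlock(dim) for _ in range(10)]
        blocks += [AttnBlock(dim) for _ in range(6)]
        self.blocks = nn.Sequential(*blocks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.blocks(x)

x = torch.randn(1, 128, 256)        # (batch, seq, dim)
print(HybridStack()(x).shape)       # torch.Size([1, 128, 256])
```

The conv-heavy mix is what drives the CPU decode/prefill advantage claimed above: short depthwise convolutions are cheap and local, while the handful of attention blocks preserves long-range mixing.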

Paper / Source

https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models
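Given the on-device focus, a plausible quick-start is loading a checkpoint through Hugging Face transformers. The snippet below is a hedged sketch: the repo id "LiquidAI/LFM2-1.2B" and the chat-template flow are assumptions based on common Hugging Face conventions, not taken from the source link.

```python
# Hypothetical quick-start for an LFM2 checkpoint via Hugging Face transformers.
# The repo id below is an assumption; substitute the actual published id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Summarize RAG in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```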