OLMo 3

Allen Institute for AI (USA)

November 20, 2025

Parameters

7B and 32B (dense; each available in Base, Think, Instruct, and RL Zero variants)

License

Apache 2.0

Key Features

Fully open model family trained on Dolma 3 (6T tokens), with a 65K-token context window. Four variants: Base for foundation tasks, Think for explicit reasoning (matches Qwen 3 on MATH), Instruct for chat and tool use, and RL Zero for reinforcement-learning research. Competitive with Qwen 2.5 and Gemma 3, with complete transparency from data to deployment; the 32B Think model is the first fully open 32B thinking model.
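For hands-on use, the checkpoints can be loaded with the Hugging Face transformers library. A minimal Python sketch follows, assuming the weights are published under the allenai organization with a repo ID like allenai/OLMo-3-7B (hypothetical; check the release page for the exact names):

    # Minimal sketch: load an OLMo 3 checkpoint via Hugging Face transformers.
    # The repo ID below is an assumption, not a confirmed name from the release.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "allenai/OLMo-3-7B"  # hypothetical repo ID

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    prompt = "Open language models matter because"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The Think, Instruct, and RL Zero variants would load the same way under their own repo IDs; only the prompting convention differs.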

Paper / Source

https://allenai.org/papers/olmo3