7B / 32B (dense; Base, Think, Instruct, and RL Zero variants)
Apache 2.0
Fully open model family trained on Dolma 3 (6T tokens); 65K-token context window; Base as the foundation for further fine-tuning; Think for explicit step-by-step reasoning (matches Qwen 3 on MATH); Instruct for chat and tool use; RL Zero as a starting point for RL research; competitive with Qwen 2.5 and Gemma 3; full transparency from training data through deployment; first fully open 32B thinking model.
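To make the variants concrete, here is a minimal sketch of chatting with the Instruct variant via Hugging Face transformers. The model ID `allenai/Olmo-3-7B-Instruct` is an assumption based on AI2's usual hub naming, not something confirmed by this entry; check the hub listing for the exact ID.

```python
# Minimal sketch: loading and prompting the Instruct variant with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/Olmo-3-7B-Instruct"  # hypothetical ID; verify on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Build a chat prompt using the model's built-in chat template.
messages = [
    {"role": "user", "content": "Summarize the Olmo 3 model family in one sentence."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a reply; the 65K context window allows far longer prompts than this.
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The Think variant would be loaded the same way, with its reasoning traces appearing in the generated text before the final answer.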