| Ring-1T (Ant Group, China) | MIT | 1T (MoE) | World's first open-source 1T-parameter reasoning model; pretrained on 20T tokens and tuned with RLVR plus IcePop for stable multi-step reasoning; tops AIME 2025 (92.6), CodeForces, and ARC-AGI; solved IMO 2025 Problem 3 in one shot via AWorld agents; hybrid MoE from the Ling 2.0 lineage. |
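IcePop, as described in the Ring-1T release, stabilizes RLVR on large MoE models by discarding gradient contributions from tokens whose probability under the training engine drifts too far from the probability assigned by the inference engine that generated the rollout. The sketch below is a minimal illustration of that double-sided masking idea, not the published implementation; the function name and the `ratio_low`/`ratio_high` thresholds are assumptions for demonstration.

```python
import torch

def icepop_masked_loss(train_logprobs, infer_logprobs, advantages,
                       ratio_low=0.5, ratio_high=2.0):
    """Policy-gradient loss that drops tokens with a large
    train/inference probability mismatch (IcePop-style masking).

    train_logprobs : log-probs of sampled tokens under the training engine
    infer_logprobs : log-probs of the same tokens under the inference
                     engine that produced the rollout
    advantages     : per-token advantage estimates
    Thresholds are illustrative, not the published values.
    """
    # Ratio of the two engines' probabilities for each sampled token.
    ratio = torch.exp(train_logprobs - infer_logprobs)
    # Double-sided mask: keep only tokens where both engines roughly agree.
    mask = (ratio >= ratio_low) & (ratio <= ratio_high)
    # REINFORCE-style per-token term, zeroed where the mask rejects the token.
    per_token = -train_logprobs * advantages * mask.float()
    # Normalize by surviving tokens so the gradient scale stays stable.
    return per_token.sum() / mask.float().sum().clamp(min=1.0)
```

The point of the mask is that MoE routing makes training-time and inference-time token probabilities diverge over long rollouts; dropping the worst-mismatched tokens keeps multi-step RL updates from being driven by stale or inconsistent probabilities.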