24B (dense)
Apache 2.0
Efficient base model for low-latency tasks; outperforms Llama 3.3 70B on internal evals; well suited to fine-tuning for automation/agent workflows; trained without RL or synthetic data.
https://mistral.ai/news/mistral-small-3