SmolLM2 1.7B
A compact 1.7B-parameter model for local execution, trained on 11 trillion tokens of high-quality data.
Hugging Face built SmolLM2 1.7B to bring desktop-grade reasoning to edge devices. Trained on an 11-trillion-token dataset, it outperforms MobileLLM 1.5B on the MMLU and HumanEval benchmarks. The model runs locally on standard hardware such as smartphones and laptops, providing low-latency inference. This makes it a reliable choice for developers building privacy-first applications: fast, efficient, and independent of cloud APIs.
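As a minimal sketch of local inference, the model can be loaded through the Hugging Face `transformers` library. The model ID below is the one published on the Hugging Face Hub; the `generate` helper function is illustrative, not part of any official API.

```python
# Minimal local-inference sketch for SmolLM2 (assumes `transformers` and
# `torch` are installed; first run downloads the weights from the Hub).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "HuggingFaceTB/SmolLM2-1.7B-Instruct"  # assumed Hub model ID

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Run a single greedy-ish generation locally, no cloud API involved."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Because the weights fit in a few gigabytes, this same snippet works unchanged on a laptop CPU, though quantized runtimes (e.g. GGUF builds) are typically used on phones.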