
SmolLM2 1.7B

A compact 1.7B-parameter model trained on 11 trillion tokens of high-quality data and optimized for local execution.

Hugging Face built SmolLM2 1.7B to bring desktop-grade reasoning to edge devices. Trained on an 11-trillion-token dataset, it outperforms MobileLLM 1.5B on the MMLU and HumanEval benchmarks. The model runs locally on standard hardware (smartphones and laptops), providing low-latency inference. It is a reliable choice for developers building privacy-first applications: fast, efficient, and independent of cloud APIs.

https://huggingface.co/huggingface/SmolLM2-1.7B

Recent Talks & Demos


No public projects found for this technology yet.