Technology

Whisper-Small (OpenAI)

A 244-million parameter speech recognition model from OpenAI that balances inference speed with high-accuracy multilingual transcription.

Whisper-Small is a 244-million parameter transformer model built for robust speech-to-text and speech translation. OpenAI trained it on 680,000 hours of diverse audio data, making it robust across 98 languages and in heavy background noise. It runs roughly 6x faster than the Large model and needs only about 2 GB of VRAM, a practical choice for local deployment. The model excels at converting speech to text and at translating non-English audio directly into English, without the need for extensive fine-tuning.
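The two tasks above (transcription and English translation) can be sketched with the Hugging Face `transformers` pipeline, which hosts this checkpoint. This is a minimal example, assuming `transformers` and a backend such as PyTorch are installed; the silent audio array stands in for real speech input.

```python
import numpy as np
from transformers import pipeline

# Load the 244M-parameter checkpoint; fits in roughly 2 GB of VRAM on a GPU,
# or runs on CPU if no accelerator is available.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# One second of silence at Whisper's expected 16 kHz sample rate,
# standing in for a real recording.
audio = np.zeros(16000, dtype=np.float32)

# Plain transcription (the model auto-detects the spoken language).
result = asr(audio)
print(result["text"])

# Translate non-English speech directly into English.
translated = asr(audio, generate_kwargs={"task": "translate"})
print(translated["text"])
```

With real audio, the input can also be a file path or a `{"array": ..., "sampling_rate": ...}` dict; the `task="translate"` generation flag is what switches Whisper from transcription to English translation.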

https://huggingface.co/openai/whisper-small
0 projects · 0 cities

Recent Talks & Demos


No public projects found for this technology yet.