BGE-small-en-v1.5
A high-efficiency embedding model with roughly 33.5 million parameters that ranks among the top models of its size on the MTEB leaderboard for English retrieval tasks.
Engineered by the Beijing Academy of Artificial Intelligence (BAAI), BGE-small-en-v1.5 delivers strong performance in a compact 133MB footprint. It encodes sequences of up to 512 tokens into 384-dimensional vectors, balancing low latency with high accuracy for RAG pipelines and semantic search. The model holds top-tier rankings among similarly sized models on the Massive Text Embedding Benchmark (MTEB), outperforming many significantly larger architectures on retrieval, reranking, and clustering tasks.
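A minimal retrieval sketch, assuming the sentence-transformers package and the Hugging Face model id `BAAI/bge-small-en-v1.5`; the query instruction prefix follows the model card's recommendation for short retrieval queries.

```python
from sentence_transformers import SentenceTransformer

# Load the 33.5M-parameter model (downloads from the Hugging Face Hub on first run).
model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# BGE's model card recommends prefixing short retrieval queries with an
# instruction; passages are encoded as-is.
query = "Represent this sentence for searching relevant passages: what is semantic search?"
passages = [
    "Semantic search retrieves documents by meaning rather than keywords.",
    "Each input sequence is encoded into a 384-dimensional vector.",
]

# normalize_embeddings=True yields unit vectors, so a dot product equals cosine similarity.
query_vec = model.encode(query, normalize_embeddings=True)        # shape: (384,)
passage_vecs = model.encode(passages, normalize_embeddings=True)  # shape: (2, 384)

scores = passage_vecs @ query_vec  # cosine similarities against the query
best = scores.argmax()
print(f"Best match ({scores[best]:.3f}): {passages[best]}")
```

Normalized 384-dimensional vectors like these can be stored directly in a vector index for low-latency RAG retrieval.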