text-embedding-v3
The third-generation embedding models: superior performance, lower cost, and dynamic vector sizing via Matryoshka Representation Learning (MRL).
The text-embedding-v3 family, comprising `text-embedding-3-small` and `text-embedding-3-large`, delivers a significant performance leap over its predecessor, Ada-002. The large model, for example, reaches a 54.9% average on the multilingual MIRACL benchmark (up from 31.4%) and 64.6% on the MTEB English benchmark. Key to its efficiency is the new `dimensions` parameter: you can now shorten the vector size—a 256-dimensional `text-embedding-3-large` vector still outperforms the full 1536-dimensional Ada-002—dramatically cutting storage and latency costs.
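Because the v3 models are trained with MRL, shortening is not just an API feature: a full-length vector can also be truncated client-side to its first k dimensions and re-normalized to unit length, preserving most of its semantic signal. A minimal sketch in plain Python (the `shorten_embedding` helper and the toy vector are illustrative, not part of any SDK; in practice you would pass `dimensions=256` to the embeddings endpoint instead):

```python
import math

def shorten_embedding(vec, dims):
    """Truncate an MRL-trained embedding to `dims` and re-normalize to unit length."""
    v = vec[:dims]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# Toy 6-dimensional "embedding" standing in for a real 3072-d vector.
full = [0.6, 0.48, 0.36, 0.36, 0.24, 0.3]
short = shorten_embedding(full, 3)

print(len(short))                            # 3
print(round(sum(x * x for x in short), 6))   # 1.0 (unit length restored)
```

Re-normalization matters: downstream similarity search typically assumes unit-length vectors, so cosine similarity reduces to a dot product.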