
Piccolo2: General Text Embedding with Multi-task Hybrid Loss Training

2024-05-11 · Code Available

Junqin Huang, Zhongjie Hu, Zihao Jing, Mengya Gao, Yichao Wu


Abstract

In this report, we introduce Piccolo2, an embedding model that surpasses other models in a comprehensive evaluation spanning six task types on the CMTEB benchmark, setting a new state of the art. Piccolo2 primarily leverages an efficient multi-task hybrid loss training approach, effectively harnessing textual data and labels from diverse downstream tasks. In addition, Piccolo2 scales up the embedding dimension and uses MRL (Matryoshka Representation Learning) training to support more flexible vector dimensions. The latest information on the Piccolo models is available at: https://huggingface.co/sensenova/
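MRL training encourages prefixes of an embedding to be usable as embeddings in their own right, so at inference time a vector can simply be sliced to the desired dimension and re-normalized. A minimal sketch of that truncation step, assuming a generic MRL-trained embedding (the helper name and toy vector are illustrative, not part of the Piccolo2 release):

```python
import math

def truncate_embedding(vec, dim):
    """Keep the first `dim` components of an MRL-trained embedding and
    L2-renormalize, so the shorter vector still works for cosine similarity.
    (Hypothetical helper for illustration, not a Piccolo2 API.)"""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0
    return [x / norm for x in head]

full = [0.6, 0.8, 0.0, 0.1]          # toy 4-d embedding
short = truncate_embedding(full, 2)  # unit-norm 2-d prefix
print(short)
```

In practice this lets one model serve several storage/latency budgets: index the full-dimensional vectors where quality matters most and truncated ones where memory is tight, without retraining.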
