SentEval: An Evaluation Toolkit for Universal Sentence Representations
2018-03-14 · LREC 2018
Alexis Conneau, Douwe Kiela
Code
- github.com/facebookresearch/SentEval (official, in paper; PyTorch, ★ 0)
- github.com/facebookresearch/InferSent (PyTorch, ★ 2,280)
- github.com/princeton-nlp/mabel (PyTorch, ★ 38)
- github.com/ganeshjawahar/interpret_bert (PyTorch, ★ 0)
- github.com/HUSTLyn/SentEval (PyTorch, ★ 0)
- github.com/gaotianyu1350/SentEval (PyTorch, ★ 0)
- github.com/sidak/SentEval (PyTorch, ★ 0)
- github.com/applicaai/senteval (PyTorch, ★ 0)
- github.com/AmanDaVinci/Universal-Sentence-Representations (PyTorch, ★ 0)
- github.com/goel96vibhor/AdvSentEval (PyTorch, ★ 0)
Abstract
We introduce SentEval, a toolkit for evaluating the quality of universal sentence representations. SentEval encompasses a variety of tasks, including binary and multi-class classification, natural language inference and sentence similarity. The set of tasks was selected based on what appears to be the community consensus regarding the appropriate evaluations for universal sentence representations. The toolkit comes with scripts to download and preprocess datasets, and an easy interface to evaluate sentence encoders. The aim is to provide a fairer, less cumbersome and more centralized way for evaluating sentence representations.
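As a rough illustration of that interface, the sketch below plugs a sentence encoder into SentEval via its `prepare` and `batcher` callbacks and evaluates it on a few transfer tasks. It assumes the `senteval.engine.SE` entry point and the `task_path`/`usepytorch`/`kfold` parameter names from the public repository; the bag-of-word-vectors encoder and the `word_vector` helper are hypothetical placeholders standing in for a real encoder, not part of the toolkit.

```python
import numpy as np
import senteval

EMBED_DIM = 300

def word_vector(word):
    # Hypothetical lookup; replace with your own word embeddings (e.g. GloVe).
    return np.random.rand(EMBED_DIM)

def prepare(params, samples):
    # Called once per task with all sentences; build vocabulary or other
    # encoder state here if needed. Nothing to do for this toy encoder.
    return

def batcher(params, batch):
    # Map a batch of tokenized sentences to fixed-size vectors
    # (here: a simple average of word vectors).
    embeddings = []
    for sent in batch:
        sent = sent if sent else ['.']
        vectors = [word_vector(w) for w in sent]
        embeddings.append(np.mean(vectors, axis=0))
    return np.vstack(embeddings)

# Parameter names assumed from the SentEval repository; 'task_path' points
# at the datasets fetched by the provided download scripts.
params = {'task_path': 'data/', 'usepytorch': True, 'kfold': 10}

se = senteval.engine.SE(params, batcher, prepare)
results = se.eval(['MR', 'CR', 'SST2', 'SICKEntailment', 'STS14'])
print(results)
```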