
Neural Embeddings for Text

2022-08-17

Oleg Vasilyev, John Bohannon


Abstract

We propose a new kind of embedding for natural language text that deeply represents semantic meaning. Standard text embeddings use the outputs from hidden layers of a pretrained language model. In our method, we let a language model learn from the text and then literally pick its brain, taking the actual weights of the model's neurons to generate a vector. We call this representation of the text a neural embedding. We confirm the ability of this representation to reflect the semantics of the text by analyzing its behavior on several datasets and by comparing neural embeddings with state-of-the-art sentence embeddings.
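The idea sketched in the abstract — fine-tune a model on a text, then read the embedding off the model's weights rather than its activations — can be illustrated with a toy example. Everything below is an illustrative assumption, not the authors' implementation: the "language model" is a tiny character-bigram predictor, and the embedding is taken as the weight change relative to the pretrained base model.

```python
import numpy as np

VOCAB = "abcdefghijklmnopqrstuvwxyz "
V = len(VOCAB)
IDX = {c: i for i, c in enumerate(VOCAB)}

def pairs(text):
    """Character bigram (input, next-char) index pairs, unknown chars dropped."""
    t = [IDX[c] for c in text.lower() if c in IDX]
    return list(zip(t, t[1:]))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def train(W, text, epochs=30, lr=0.5):
    """SGD on next-character cross-entropy for a V x V weight matrix."""
    W = W.copy()
    for _ in range(epochs):
        for x, y in pairs(text):
            g = softmax(W[x])     # predicted next-char distribution
            g[y] -= 1.0           # gradient of cross-entropy w.r.t. logits
            W[x] -= lr * g
    return W

def neural_embedding(base_W, text):
    """Fine-tune a copy of the base model on the text; the embedding is the
    flattened weight delta (a choice made for this sketch)."""
    return (train(base_W, text) - base_W).ravel()

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "Pretrain" a base model on a small corpus (hypothetical stand-in data).
rng = np.random.default_rng(0)
base = train(rng.normal(scale=0.01, size=(V, V)),
             "the quick brown fox jumps over the lazy dog")

e1 = neural_embedding(base, "cats chase mice in the garden")
e2 = neural_embedding(base, "a cat chases a mouse in a garden")
e3 = neural_embedding(base, "xylophones vex jazz workers with buzzing quizzes")
```

Because the two cat sentences share most of their character bigrams, fine-tuning moves the same weight rows in similar directions, so their weight-delta embeddings should be closer to each other than to the unrelated sentence; a real transformer-scale version of this idea would yield vastly higher-dimensional vectors.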
