
Efficient Sentence Embedding using Discrete Cosine Transform

2019-09-06 · IJCNLP 2019 · Code Available

Nada Almarwani, Hanan Aldarmaki, Mona Diab

Abstract

Vector averaging remains one of the most popular sentence embedding methods in spite of its obvious disregard for syntactic structure. While more complex sequential or convolutional networks can yield superior classification performance, the improvements in accuracy are typically marginal relative to simple vector averaging. As an efficient alternative, we propose the use of the discrete cosine transform (DCT) to compress word sequences in an order-preserving manner. The lower-order DCT coefficients represent the overall feature patterns in sentences, which results in suitable embeddings for tasks that could benefit from syntactic features. Our results in probing tasks demonstrate that DCT embeddings indeed preserve more syntactic information compared with vector averaging. With practically equivalent complexity, the model yields better overall performance in downstream classification tasks that correlate with syntactic features, which illustrates the capacity of DCT to preserve word order information.
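The core idea in the abstract can be sketched as follows: apply a DCT along the word-sequence axis of a sentence's word-embedding matrix and keep only the first `k` coefficients per dimension, concatenated into a fixed-size vector. This is a minimal illustration using SciPy's DCT-II, not the authors' released code; the function name, the choice `k=2`, and the zero-padding for short sentences are assumptions for the sketch.

```python
import numpy as np
from scipy.fft import dct


def dct_sentence_embedding(word_vectors, k=2):
    """Compress a (n_words, dim) matrix of word embeddings into a
    fixed-size (k * dim,) vector by keeping the first k DCT coefficients
    along the word-sequence axis (hypothetical helper, for illustration)."""
    x = np.asarray(word_vectors, dtype=float)
    # DCT-II with orthonormal scaling, applied over the sequence of words
    coeffs = dct(x, type=2, norm="ortho", axis=0)
    n, dim = x.shape
    if n < k:
        # Assumed handling: zero-pad when the sentence is shorter than k
        coeffs = np.vstack([coeffs, np.zeros((k - n, dim))])
    # Concatenate the k lowest-order coefficient vectors
    return coeffs[:k].reshape(-1)
```

Note that with orthonormal scaling the zeroth DCT coefficient is proportional to the mean word vector, so setting `k=1` essentially recovers vector averaging, while larger `k` adds order-sensitive coefficients.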
