
Similarity-Based Reconstruction Loss for Meaning Representation

2018-10-01 · EMNLP 2018

Olga Kovaleva, Anna Rumshisky, Alexey Romanov


Abstract

This paper addresses the problem of representation learning. Using an autoencoder framework, we propose and evaluate several loss functions that can serve as alternatives to the commonly used cross-entropy reconstruction loss. The proposed loss functions use similarities between words in the embedding space and can be used to train any neural model for text generation. We show that the introduced loss functions amplify the semantic diversity of reconstructed sentences while preserving the original meaning of the input. We test the derived autoencoder-generated representations on paraphrase detection and language inference tasks and demonstrate performance improvements over the traditional cross-entropy loss.
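One way to realize a similarity-based reconstruction loss is to replace the one-hot cross-entropy target with a soft distribution derived from cosine similarities between the target word's embedding and every vocabulary embedding. The sketch below (in NumPy) illustrates this idea; the function names, the temperature parameter, and the exact soft-label construction are illustrative assumptions, and the paper's precise formulation may differ.

```python
import numpy as np

def similarity_soft_labels(embeddings, target_idx, temperature=0.5):
    # Soft target distribution: softmax over cosine similarities between
    # the target word's embedding and all vocabulary embeddings.
    # (Illustrative sketch; not necessarily the paper's exact loss.)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = e @ e[target_idx]            # cosine similarities in [-1, 1]
    logits = sims / temperature
    logits -= logits.max()              # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def similarity_reconstruction_loss(logits, embeddings, target_idx,
                                   temperature=0.5):
    # Cross-entropy between the decoder's predicted distribution and the
    # similarity-derived soft labels, instead of a one-hot target.
    soft = similarity_soft_labels(embeddings, target_idx, temperature)
    log_probs = logits - logits.max()
    log_probs = log_probs - np.log(np.exp(log_probs).sum())
    return -np.sum(soft * log_probs)

# Toy vocabulary of 5 words with 4-dimensional embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 4))
decoder_logits = rng.normal(size=5)
loss = similarity_reconstruction_loss(decoder_logits, emb, target_idx=2)
```

Because a word is maximally similar to itself, the target word still receives the highest soft-label weight, but semantically close words are no longer penalized as harshly as unrelated ones, which is what encourages more diverse reconstructions.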
