
A Deep Decomposable Model for Disentangling Syntax and Semantics in Sentence Representation

2021-11-01 · Findings of EMNLP 2021

Dingcheng Li, Hongliang Fei, Shaogang Ren, Ping Li


Abstract

Disentanglement based on generative adversarial networks (GANs) or variational autoencoders (VAEs) has recently advanced the performance of diverse applications in the CV and NLP domains. Nevertheless, these models still operate at a coarse level when disentangling closely related properties, such as syntax and semantics in human language. This paper introduces a deep decomposable model based on the VAE that disentangles syntax and semantics by imposing total correlation penalties on KL divergences. Notably, we decompose the KL divergence term of the original VAE so that the generated latent variables can be separated in a more clear-cut and interpretable way. Experiments on benchmark datasets show that the proposed model significantly improves the quality of disentanglement between syntactic and semantic representations on both semantic similarity and syntactic similarity tasks.
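To give intuition for the total-correlation idea mentioned in the abstract, here is a minimal sketch of estimating the total correlation (mutual information) between a "syntactic" latent block and a "semantic" latent block under a Gaussian fit to batch samples. This is an illustrative assumption-laden estimator, not the paper's actual objective; all function and variable names are hypothetical.

```python
# Hypothetical sketch: Gaussian block total-correlation (TC) estimate.
# Under a joint Gaussian fit to samples, the TC between two blocks is
# 0.5 * (log det S_syn + log det S_sem - log det S_full),
# i.e. the mutual information between the blocks. Penalizing this
# quantity encourages the two blocks to factorize (disentangle).
import numpy as np

def gaussian_block_tc(z_syn: np.ndarray, z_sem: np.ndarray) -> float:
    """Estimate TC between two latent blocks from batch samples.

    z_syn: (batch, d_syn) samples of the syntactic latent block.
    z_sem: (batch, d_sem) samples of the semantic latent block.
    """
    z = np.concatenate([z_syn, z_sem], axis=1)
    d_syn = z_syn.shape[1]
    cov = np.cov(z, rowvar=False)
    cov_syn = cov[:d_syn, :d_syn]
    cov_sem = cov[d_syn:, d_syn:]
    # slogdet is numerically safer than log(det(...)) for near-singular
    # covariances; the sign is 1 for a valid covariance matrix.
    _, logdet_full = np.linalg.slogdet(cov)
    _, logdet_syn = np.linalg.slogdet(cov_syn)
    _, logdet_sem = np.linalg.slogdet(cov_sem)
    return 0.5 * (logdet_syn + logdet_sem - logdet_full)

rng = np.random.default_rng(0)
# Independent blocks: TC should be close to zero.
indep = gaussian_block_tc(rng.normal(size=(5000, 4)),
                          rng.normal(size=(5000, 4)))
# Strongly tied blocks: TC should be large.
shared = rng.normal(size=(5000, 4))
dep = gaussian_block_tc(shared + 0.1 * rng.normal(size=(5000, 4)), shared)
```

In VAE-based disentanglement work such as beta-TC-VAE, a term of this kind is typically estimated on minibatches and added to the evidence lower bound as a weighted penalty.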
