
Semi-supervised Parsing with a Variational Autoencoding Parser

WS 2020 · 2020-07-01

Xiao Zhang, Dan Goldwasser


Abstract

We propose an end-to-end variational autoencoding parsing (VAP) model for semi-supervised graph-based projective dependency parsing. The model encodes the input sequentially into continuous latent variables using deep neural networks (DNNs) that exploit contextual information, and reconstructs the input with a generative model. The VAP model admits a unified structure with shared parameters and separate loss functions for labeled and unlabeled data. Experiments on the WSJ data sets show that the proposed model can use unlabeled data to improve performance when only a limited amount of labeled data is available, matching a recently proposed semi-supervised parser while offering faster inference.
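The unified objective described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the linear encoder/decoder, dimensions, and the placeholder parsing loss are all assumptions. It shows the key idea that labeled and unlabeled examples share the same parameters and the same variational (ELBO) term, with a supervised parsing term added only when labels exist.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper uses deep sequential encoders)
D_IN, D_Z = 8, 4

# Shared parameters: the same encoder/decoder serves labeled and unlabeled data
W_mu = rng.normal(scale=0.1, size=(D_IN, D_Z))
W_logvar = rng.normal(scale=0.1, size=(D_IN, D_Z))
W_dec = rng.normal(scale=0.1, size=(D_Z, D_IN))

def elbo_loss(x):
    """Negative ELBO: reconstruction error plus KL(q(z|x) || N(0, I))."""
    mu, logvar = x @ W_mu, x @ W_logvar
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps      # reparameterization trick
    x_hat = z @ W_dec                        # generative reconstruction
    recon = np.mean((x - x_hat) ** 2)
    kl = -0.5 * np.mean(1 + logvar - mu ** 2 - np.exp(logvar))
    return recon + kl

def parsing_loss(x, y):
    """Placeholder supervised term (hypothetical): squared error on arc scores."""
    scores = (x @ W_mu) @ np.ones((D_Z, 1))
    return np.mean((scores - y) ** 2)

def total_loss(x, y=None):
    """Unified objective: ELBO for all data; add the parsing term when labeled."""
    loss = elbo_loss(x)
    if y is not None:
        loss += parsing_loss(x, y)
    return loss
```

In a real parser the supervised term would be the structured loss of a graph-based dependency parser over arc scores; the point of the sketch is only the control flow: one model, one parameter set, two loss branches.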
