Multi-level Alignment Pretraining for Multi-lingual Semantic Parsing

2020-12-01 · COLING 2020

Bo Shao, Yeyun Gong, Weizhen Qi, Nan Duan, Xiaola Lin


Abstract

In this paper, we present a multi-level alignment pretraining method in a unified architecture for multi-lingual semantic parsing. In this architecture, we use an adversarial training method to align the representation spaces of different languages, and use sentence-level and word-level parallel corpora as supervision to align their semantics. Finally, we jointly train the multi-level alignment and semantic parsing tasks. We conduct experiments on a publicly available multi-lingual semantic parsing dataset, ATIS, and a newly constructed dataset. Experimental results show that our model outperforms state-of-the-art methods on both datasets.
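The joint training described in the abstract combines a semantic parsing loss with sentence-level alignment, word-level alignment, and adversarial alignment terms. A minimal sketch of such a weighted joint objective is shown below; the function name, the loss decomposition, and the coefficient values are illustrative assumptions, not details taken from the paper.

```python
def joint_loss(parse_loss, sent_align_loss, word_align_loss, adv_loss,
               lambda_sent=1.0, lambda_word=1.0, lambda_adv=0.1):
    """Hypothetical combination of the semantic-parsing loss with the
    multi-level alignment and adversarial terms.

    The weighting coefficients are assumptions for illustration; the paper
    states only that the tasks are trained jointly.
    """
    return (parse_loss
            + lambda_sent * sent_align_loss
            + lambda_word * word_align_loss
            + lambda_adv * adv_loss)

# Example: parsing loss 2.0, alignment losses 0.5 each, adversarial loss 1.0.
total = joint_loss(2.0, 0.5, 0.5, 1.0)
print(total)  # 2.0 + 1.0*0.5 + 1.0*0.5 + 0.1*1.0 = 3.1
```

In practice, each term would be computed per batch (the adversarial term typically via a language discriminator with gradient reversal), and the scalar weights tuned on a development set.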
