
PhraseTransformer: Self-Attention using Local Context for Semantic Parsing

2021-01-01

Phuong Minh Nguyen, Vu Tran, Minh Le Nguyen


Abstract

Semantic parsing is a challenging task whose purpose is to convert a natural language utterance into a machine-understandable meaning representation. Recently, approaches based on Neural Machine Translation, especially the Transformer, have achieved promising results thanks to their ability to learn long-range word dependencies. However, one drawback of adapting the original Transformer to semantic parsing is that it lacks fine-grained detail when expressing sentence information. This work therefore proposes PhraseTransformer, an architecture capable of a more detailed meaning representation by learning phrase dependencies within the sentence. The main idea is to incorporate Long Short-Term Memory (LSTM) into the Self-Attention mechanism of the original Transformer to capture more local context around each phrase. Experimental results show that the proposed model captures detailed meaning better than the Transformer, raises local context awareness, achieves strongly competitive performance on the Geo and MSParS datasets, and sets a new state-of-the-art (SOTA) result on the Atis dataset.
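
The abstract only describes the idea at a high level, so the following is a minimal, hypothetical sketch of how an LSTM could be folded into a self-attention layer to inject local phrase context. The layer sizes, the use of a bidirectional LSTM, and the choice to feed its output into the queries and keys while keeping the original embeddings as values are assumptions for illustration, not the authors' exact design.

```python
# Minimal sketch (not the released PhraseTransformer code) of combining an LSTM
# with self-attention so that attention operates over phrase-aware representations.
# Dimensions, the bidirectional LSTM, and the query/key/value wiring are assumptions.
import torch
import torch.nn as nn


class PhraseAwareSelfAttention(nn.Module):
    """Self-attention whose queries and keys are enriched with LSTM-encoded
    local context, while values remain the original token representations."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Bidirectional LSTM summarises the local neighbourhood of each token;
        # its concatenated hidden states have dimension d_model again.
        self.local_encoder = nn.LSTM(
            d_model, d_model // 2, batch_first=True, bidirectional=True
        )
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, key_padding_mask=None) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        local_ctx, _ = self.local_encoder(x)   # phrase-level local context
        # Queries/keys see the local context; values stay as the raw embeddings.
        attn_out, _ = self.attn(
            query=local_ctx, key=local_ctx, value=x,
            key_padding_mask=key_padding_mask,
        )
        return self.norm(x + attn_out)          # residual connection + layer norm


if __name__ == "__main__":
    layer = PhraseAwareSelfAttention()
    tokens = torch.randn(2, 10, 256)            # dummy batch of embeddings
    print(layer(tokens).shape)                   # torch.Size([2, 10, 256])
```

In this sketch the LSTM acts as a lightweight local encoder in front of attention, which is one plausible way to realise the abstract's claim of raising local context awareness without changing the rest of the Transformer block.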

Tasks

Reproductions