TMU-NLP System Using BERT-based Pre-trained Model to the NLP-TEA CGED Shared Task 2020

2020-12-01 · AACL (NLP-TEA) 2020

Hongfei Wang, Mamoru Komachi

Abstract

In this paper, we introduce our system for the NLP-TEA 2020 shared task on Chinese Grammatical Error Diagnosis (CGED). In recent years, pre-trained models have been studied extensively, and many downstream tasks have benefited from them. In this study, we treat the grammatical error diagnosis (GED) task as a grammatical error correction (GEC) problem and propose a method that incorporates a pre-trained model into an encoder-decoder model to solve it.
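To make the GED-as-GEC framing concrete, here is a minimal post-processing sketch, not taken from the paper: assuming the encoder-decoder model has already produced a corrected sentence, character-level diagnoses can be recovered by aligning the source against the correction. The `diagnose` function and its tag mapping are illustrative assumptions; tags follow the CGED convention of R (redundant), M (missing), and S (selection), while W (word order) is omitted since plain alignment does not detect reordering.

```python
from difflib import SequenceMatcher

def diagnose(source: str, corrected: str):
    """Derive CGED-style (start, end, tag) spans over the source
    sentence by aligning it with a model's corrected output.
    Positions are 1-based character offsets, as in the shared task.
    Illustrative sketch only; the W (word order) type is not handled."""
    ops = SequenceMatcher(a=source, b=corrected).get_opcodes()
    diagnoses = []
    for tag, i1, i2, j1, j2 in ops:
        if tag == "delete":
            # characters present only in the source -> redundant (R)
            diagnoses.append((i1 + 1, i2, "R"))
        elif tag == "insert":
            # characters present only in the correction -> missing (M)
            diagnoses.append((i1 + 1, i1 + 1, "M"))
        elif tag == "replace":
            # substituted characters -> wrong word selection (S)
            diagnoses.append((i1 + 1, i2, "S"))
    return diagnoses
```

For example, if the model corrects a doubled character ("我在在家吃饭" → "我在家吃饭"), the alignment yields a single redundant-character span over the source.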
