Advancing Seq2seq with Joint Paraphrase Learning
2020-11-01 · EMNLP (ClinicalNLP) 2020
So Yeon Min, Preethi Raghavan, Peter Szolovits
Abstract
We address the problem of model generalization for sequence-to-sequence (seq2seq) architectures. We propose going beyond data augmentation via paraphrase-optimized multi-task learning and observe that it helps models correctly handle unseen sentential paraphrases as inputs. Our models greatly outperform SOTA seq2seq models for semantic parsing on diverse domains (Overnight: up to 3.2%, emrQA: 7%) and Nematus, the winning solution for WMT 2017, on Czech-to-English translation (CzEng 1.6: 1.5 BLEU).
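To make the multi-task setup concrete, the sketch below shows one way a shared seq2seq model could be trained jointly on a primary task (e.g., question to logical form) and an auxiliary paraphrase-generation task with a weighted combined loss. This is a minimal illustration under assumed details; the model class, the loss weight alpha, and the batch format are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of joint paraphrase multi-task learning for seq2seq.
# SharedSeq2Seq, alpha, and the batch layout are illustrative assumptions.
import torch
import torch.nn as nn

class SharedSeq2Seq(nn.Module):
    """A tiny encoder-decoder whose parameters are shared by both tasks."""
    def __init__(self, vocab_size=1000, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt_in):
        _, h = self.encoder(self.embed(src))               # encode source tokens
        dec_out, _ = self.decoder(self.embed(tgt_in), h)   # teacher-forced decoding
        return self.out(dec_out)                           # logits over target vocab

model = SharedSeq2Seq()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
alpha = 0.5  # assumed weight on the auxiliary paraphrase loss

def task_loss(src, tgt_in, tgt_out):
    logits = model(src, tgt_in)
    return criterion(logits.reshape(-1, logits.size(-1)), tgt_out.reshape(-1))

def joint_update(primary_batch, paraphrase_batch):
    """One update: primary task loss + weighted paraphrase loss on shared weights."""
    optimizer.zero_grad()
    loss = task_loss(*primary_batch) + alpha * task_loss(*paraphrase_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random token ids (batch of 4, sequence length 7).
B, T = 4, 7
rand = lambda: torch.randint(0, 1000, (B, T))
print(joint_update((rand(), rand(), rand()), (rand(), rand(), rand())))
```

Because both losses flow through the same encoder-decoder parameters, the auxiliary paraphrase objective pushes the shared representations to treat paraphrased inputs similarly, which is the intuition behind the generalization gains reported above.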