
MixQA: Embedding and Answer Mixing for Question Answering

2021-05-27

Anonymous


Abstract

When we search for answers to our questions in a search engine, we often rephrase a question if the results are not what we expect. We replicate this thought process with MixQA, a method that mixes embeddings of questions and ensembles the resulting answers. We experiment on the SQuAD 1.1 and SQuAD 2.0 datasets using BERT. We investigate a variety of questions to mix with the given test-time question, including similar training-time questions, randomly chosen training-time questions, and paraphrases of the given question. All of our methods improve performance over baselines with no mixing, and do indeed take advantage of the information from the mixed questions. Our best-performing method on both datasets uses ROUGE-1 to retrieve similar training-time questions for mixing, improving performance on both SQuAD 1.1 and SQuAD 2.0.
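The abstract does not spell out the mixing operation. A minimal sketch of the two ingredients it names, ROUGE-1 retrieval of a similar training-time question and mixing of question embeddings, might look like the following; the function names, the convex-combination mix, and the weight `alpha` are illustrative assumptions, not the paper's actual implementation:

```python
from collections import Counter

import numpy as np


def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: unigram-overlap F-measure between two strings."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


def retrieve_similar(question: str, training_questions: list) -> str:
    """Return the training-time question most similar to `question` under ROUGE-1."""
    return max(training_questions, key=lambda q: rouge1_f1(question, q))


def mix_embeddings(e_question, e_retrieved, alpha: float = 0.5):
    """Hypothetical mixing step: a convex combination of the two question
    embeddings; `alpha` is an assumed interpolation weight."""
    return alpha * np.asarray(e_question) + (1 - alpha) * np.asarray(e_retrieved)
```

At test time, one would retrieve a similar training question, mix its embedding with the original question's embedding, and feed the mixture to the BERT reader; the answers produced for the different mixed questions are then ensembled.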
