
A BERT Baseline for the Natural Questions

2019-01-24

Chris Alberti, Kenton Lee, Michael Collins


Abstract

This technical note describes a new baseline for the Natural Questions. Our model is based on BERT and reduces the gap between the model F1 scores reported in the original dataset paper and the human upper bound by 30% and 50% relative for the long and short answer tasks, respectively. This baseline has been submitted to the official NQ leaderboard at ai.google.com/research/NaturalQuestions. Code, preprocessed data, and a pretrained model are available at https://github.com/google-research/language/tree/master/language/question_answering/bert_joint.
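The "relative" gap reduction quoted in the abstract measures what fraction of the distance between a baseline's F1 and the human upper bound the new model closes. A minimal sketch of that computation, using made-up F1 scores rather than the paper's reported numbers:

```python
def relative_gap_reduction(baseline_f1, new_f1, human_f1):
    """Fraction of the baseline-to-human F1 gap closed by the new model."""
    old_gap = human_f1 - baseline_f1
    new_gap = human_f1 - new_f1
    return (old_gap - new_gap) / old_gap

# Hypothetical example: baseline 50.0 F1, new model 60.0 F1,
# human upper bound 80.0 F1 -> one third of the gap is closed.
print(round(relative_gap_reduction(50.0, 60.0, 80.0), 2))  # 0.33
```

A 30% relative reduction on the long answer task therefore does not mean a 30-point absolute F1 gain; it means the model recovers 30% of the remaining headroom below human performance.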

Benchmark Results

Dataset                     Model       Metric   Claimed   Verified   Status
Natural Questions (long)    BERTjoint   F1       64.7      —          Unverified

Reproductions