SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, WikiQA, and many others. Question answering models are typically evaluated on metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.
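
EM scores a prediction 1 only when it matches a gold answer exactly after light normalization, while F1 measures token-level overlap, so partially correct answers still receive credit. The sketch below follows the normalization conventions of the official SQuAD evaluation script (lowercasing, stripping punctuation and articles); the function names and example strings are illustrative, not taken from any benchmark on this page.

```python
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD-style)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Partial overlap yields EM = 0 but a non-zero F1.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))   # 1.0 (articles are stripped)
print(f1_score("Eiffel Tower in Paris", "Eiffel Tower"))  # ~0.67
```

In practice these per-example scores are computed against every gold answer for a question, the maximum is taken, and the results are averaged over the dataset, which is how the EM numbers in the benchmark table below are produced.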

(Image credit: SQuAD)

Papers

Showing 4191–4200 of 10817 papers

| Title | Status | Hype |
|---|---|---|
| Benchmarks for Pirá 2.0, a Reading Comprehension Dataset about the Ocean, the Brazilian Coast, and Climate Change | | 0 |
| Diverse Multi-Answer Retrieval with Determinantal Point Processes | | 0 |
| An Intelligent Question Answering System based on Power Knowledge Graph | | 0 |
| A Comprehensive Survey of Knowledge-Based Vision Question Answering Systems: The Lifecycle of Knowledge in Visual Reasoning Task | | 0 |
| Diverse and Non-redundant Answer Set Extraction on Community QA based on DPPs | | 0 |
| Benchmarking Vision Language Models for Cultural Understanding | | 0 |
| Ditch the Gold Standard: Re-evaluating Conversational Question Answering | | 0 |
| Distributional Inclusion Vector Embedding for Unsupervised Hypernymy Detection | | 0 |
| An Initial Investigation of Non-Native Spoken Question-Answering | | 0 |
| Aesthetic Visual Question Answering of Photographs | | 0 |
Page 420 of 1082

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | IE-Net (ensemble) | EM | 90.94 | — | Unverified |
| 2 | FPNet (ensemble) | EM | 90.87 | — | Unverified |
| 3 | IE-NetV2 (ensemble) | EM | 90.86 | — | Unverified |
| 4 | SA-Net on Albert (ensemble) | EM | 90.72 | — | Unverified |
| 5 | SA-Net-V2 (ensemble) | EM | 90.68 | — | Unverified |
| 6 | FPNet (ensemble) | EM | 90.6 | — | Unverified |
| 7 | Retro-Reader (ensemble) | EM | 90.58 | — | Unverified |
| 8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | — | Unverified |
| 9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | — | Unverified |
| 10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | — | Unverified |