SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, WikiQA, and many others. Models for question answering are typically evaluated on metrics like exact match (EM) and F1. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
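As a rough sketch of what those metrics involve, the snippet below computes answer-level EM and token-level F1 in the style of the SQuAD evaluation script (lowercase, strip punctuation and articles, then compare tokens). The function names and normalization details here are illustrative, not the official implementation.

```python
import re
import string
from collections import Counter


def normalize_answer(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, ground_truth: str) -> float:
    """1.0 if the normalized prediction equals the normalized reference, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))


def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-level F1 between normalized prediction and reference."""
    pred_tokens = normalize_answer(prediction).split()
    gt_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)


# Example: the article "the" is stripped during normalization, so this counts as an exact match.
print(exact_match("The Eiffel Tower", "Eiffel Tower"))  # 1.0
print(f1_score("Eiffel Tower in Paris", "Eiffel Tower"))  # 0.8
```

Leaderboards such as the one below typically report EM over a held-out test set, taking the maximum score across the available reference answers for each question.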

Papers

Showing 3281–3290 of 10817 papers

Title | Status | Hype
Benchmarking Large Language Models in Complex Question Answering Attribution using Knowledge Graphs | — | 0
LongHealth: A Question Answering Benchmark with Long Clinical Documents | Code | 1
Towards Consistent Natural-Language Explanations via Explanation-Consistency Finetuning | Code | 0
Genie: Achieving Human Parity in Content-Grounded Datasets Generation | — | 0
Question answering systems for health professionals at the point of care -- a systematic review | — | 0
IICONGRAPH: improved Iconographic and Iconological Statements in Knowledge Graphs | — | 0
SEER: Facilitating Structured Reasoning and Explanation via Reinforcement Learning | Code | 1
CFMatch: Aligning Automated Answer Equivalence Evaluation with Expert Judgments For Open-Domain Question Answering | — | 0
SpeechDPR: End-to-End Spoken Passage Retrieval for Open-Domain Spoken Question Answering | — | 0
InstructDoc: A Dataset for Zero-Shot Generalization of Visual Document Understanding with Instructions | Code | 2
Page 329 of 1082

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | — | Unverified
2 | FPNet (ensemble) | EM | 90.87 | — | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | — | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | — | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | — | Unverified
6 | FPNet (ensemble) | EM | 90.6 | — | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | — | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | — | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | — | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | — | Unverified