SOTAVerified

Question Answering

Question answering can be segmented into domain-specific subtasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Question answering models are typically evaluated with metrics such as exact match (EM) and F1. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
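On leaderboards like the one below, EM scores a prediction 1 only if it matches a gold answer exactly after light normalization, while F1 gives partial credit for token overlap. The following is a minimal sketch of SQuAD-style EM and token-level F1 under the normalization used by the official SQuAD evaluation script; the example strings are hypothetical, and the official script additionally takes the maximum score over multiple gold answers, which is omitted here.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and the articles a/an/the,
    and collapse whitespace (SQuAD-style answer normalization)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1: harmonic mean of precision and recall over
    the multiset of tokens shared by prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Hypothetical answers: normalization makes the first pair an exact match,
# and a partially correct span gets EM = 0 but F1 > 0.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))        # 1.0
print(f1_score("Eiffel Tower in Paris", "the Eiffel Tower"))  # ~0.67
```

Corpus-level EM and F1 are the averages of these per-question scores, which is why leaderboard EM values such as those below are reported on a 0-100 scale.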

Papers

Showing 3731–3740 of 10817 papers

Title | Status | Hype
Improving Complex Knowledge Base Question Answering via Question-to-Action and Question-to-Question Alignment | Code | 0
TallyQA: Answering Complex Counting Questions | Code | 0
If the Sources Could Talk: Evaluating Large Language Models for Research Assistance in History | Code | 0
TAPAS: Weakly Supervised Table Parsing via Pre-training | Code | 0
Addressing Overprescribing Challenges: Fine-Tuning Large Language Models for Medication Recommendation Tasks | Code | 0
Idiom Paraphrases: Seventh Heaven vs Cloud Nine | Code | 0
II-MMR: Identifying and Improving Multi-modal Multi-hop Reasoning in Visual Question Answering | Code | 0
Continual VQA for Disaster Response Systems | Code | 0
Identifying relevant common sense information in knowledge graphs | Code | 0
A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified