SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, WikiQA, and many others. Question answering models are typically evaluated with metrics such as exact match (EM) and F1. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
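As a concrete illustration of the EM and F1 metrics mentioned above, the following is a minimal sketch of SQuAD-style answer scoring. The normalization rules (lowercasing, stripping punctuation and articles, collapsing whitespace) mirror the commonly used SQuAD evaluation script, but this snippet is an illustrative assumption, not the official implementation.

```python
# Minimal sketch of SQuAD-style answer scoring: Exact Match (EM) and token-level F1.
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    # Lowercase, drop punctuation, remove articles, collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    # 1.0 if the normalized strings are identical, else 0.0.
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    # Token-overlap F1 between the normalized prediction and reference.
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Example: a partially correct span gets EM = 0 but a non-zero F1.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))   # 1.0 after normalization
print(f1_score("Eiffel Tower in Paris", "Eiffel Tower")) # ~0.67
```

In practice, per-question scores are computed against all reference answers (taking the maximum) and averaged over the dataset; leaderboard numbers such as those in the benchmark table below report these averages.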

Papers

Showing 6126–6150 of 10817 papers

Title | Status | Hype
Measuring Compositional Consistency for Video Question Answering | - | 0
HRCA+: Advanced Multiple-choice Machine Reading Comprehension Method | - | 0
HPI Question Answering System in BioASQ 2016 | - | 0
Measuring Domain Portability and Error Propagation in Biomedical QA | - | 0
Biomedical Question Answering via Weighted Neural Network Passage Retrieval | - | 0
A Survey on Table Question Answering: Recent Advances | - | 0
How You Ask Matters: The Effect of Paraphrastic Questions to BERT Performance on a Clinical SQuAD Dataset | - | 0
Addressing Semantic Drift in Generative Question Answering with Auxiliary Extraction | - | 0
Measuring Popularity of Machine-Generated Sentences Using Term Count, Document Frequency, and Dependency Language Model | - | 0
Mitigating Bias for Question Answering Models by Tracking Bias Influence | - | 0
Measuring Retrieval Complexity in Question Answering Systems | - | 0
Mitigating Hallucination in Visual-Language Models via Re-Balancing Contrastive Decoding | - | 0
Measuring Sentences Similarity: A Survey | - | 0
Mitigating Large Language Model Hallucination with Faithful Finetuning | - | 0
Measuring the Limit of Semantic Divergence for English Tweets. | - | 0
MEBench: Benchmarking Large Language Models for Cross-Document Multi-Entity Question Answering | - | 0
Mitigating Lost-in-Retrieval Problems in Retrieval Augmented Multi-Hop Question Answering | - | 0
Continuous Training and Fine-tuning for Domain-Specific Language Models in Medical Question Answering | - | 0
How well do Computers Solve Math Word Problems? Large-Scale Dataset Construction and Evaluation | - | 0
A Survey on Table-and-Text HybridQA: Concepts, Methods, Challenges and Future Directions | - | 0
How Well can We Learn Interpretable Entity Types from Text? | - | 0
How Well Can Vison-Language Models Understand Humans' Intention? An Open-ended Theory of Mind Question Evaluation Benchmark | - | 0
How Vision-Language Tasks Benefit from Large Pre-trained Models: A Survey | - | 0
Echo-Attention: Attend Once and Get N Attentions for Free | - | 0
How Transferable are Reasoning Patterns in VQA? | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified