SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, and WikiQA, among many others. Question answering models are typically evaluated with metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.
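The EM and F1 metrics mentioned above can be sketched in a few lines. This is a minimal illustration in the style of the standard SQuAD evaluation (lowercasing, stripping punctuation and articles before comparison); the exact normalization rules vary by benchmark, so treat the details here as assumptions rather than the official scoring code.

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation,
    drop articles (a/an/the), collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """EM: 1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    """Token-level F1: harmonic mean of precision and recall
    over the bag of normalized tokens."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The cat", "cat")` returns 1 because article removal makes the strings identical, while `f1_score("the cat sat", "cat sat down")` returns 0.8 (precision 1.0, recall 2/3). Benchmarks with multiple reference answers usually take the maximum score over all gold answers.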

(Image credit: SQuAD)

Papers

Showing 521–530 of 10,817 papers

Title | Status | Hype
Hybrid Transformer with Multi-level Fusion for Multimodal Knowledge Graph Completion | Code | 2
PaLM: Scaling Language Modeling with Pathways | Code | 2
LinkBERT: Pretraining Language Models with Document Links | Code | 2
STaR: Bootstrapping Reasoning With Reasoning | Code | 2
MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering | Code | 2
ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning | Code | 2
All in One: Exploring Unified Video-Language Pre-training | Code | 2
ScienceWorld: Is your Agent Smarter than a 5th Grader? | Code | 2
QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization | Code | 2
UnifiedQA-v2: Stronger Generalization via Broader Cross-Format Training | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | — | Unverified
2 | FPNet (ensemble) | EM | 90.87 | — | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | — | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | — | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | — | Unverified
6 | FPNet (ensemble) | EM | 90.6 | — | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | — | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | — | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | — | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | — | Unverified