SOTAVerified

Question Answering

Question answering can be segmented into domain-specific subtasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Question answering models are typically evaluated with metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.

(Image credit: SQuAD)
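EM scores a prediction 1 only if it matches a gold answer string after light normalization, while F1 measures token overlap between the predicted and gold answer spans. Below is a minimal sketch of how these two metrics are commonly computed for SQuAD-style extractive QA; the normalization details, function names, and example strings are illustrative, not the official evaluation script.

```python
import re
import string
from collections import Counter

def normalize_answer(s):
    """Lowercase, drop articles and punctuation, collapse whitespace (SQuAD-style)."""
    s = s.lower()
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    return " ".join(s.split())

def exact_match(prediction, gold):
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(gold))

def f1_score(prediction, gold):
    """Token-level F1 between the predicted and gold answer spans."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Illustrative usage: normalization makes EM robust to articles and casing,
# while F1 gives partial credit for overlapping but non-identical spans.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))      # 1.0
print(f1_score("Eiffel Tower in Paris", "Eiffel Tower"))    # ~0.67
```

Benchmarks typically report both numbers averaged over the dataset, taking the maximum score over all gold answers for each question.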

Papers

Showing 2131–2140 of 10817 papers

Title | Status | Hype
Shared Imagination: LLMs Hallucinate Alike | - | 0
Do LLMs Know When to NOT Answer? Investigating Abstention Abilities of Large Language Models | - | 0
Imperfect Vision Encoders: Efficient and Robust Tuning for Vision-Language Models | - | 0
KaPQA: Knowledge-Augmented Product Question-Answering | - | 0
Enhancing Temporal Understanding in LLMs for Semi-structured Tables | - | 0
Exploring the Effectiveness of Object-Centric Representations in Visual Question Answering: Comparative Insights with Foundation Models | - | 0
MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity | Code | 2
LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding | Code | 2
RadioRAG: Factual large language models for enhanced diagnostics in radiology using online retrieval augmented generation | Code | 0
OMoS-QA: A Dataset for Cross-Lingual Extractive Question Answering in a German Migration Context | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified