SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, and WikiQA, among many others. Models are typically evaluated on metrics such as exact match (EM) and F1. Some recent top-performing models are T5 and XLNet.

( Image credit: SQuAD )
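The EM and F1 metrics mentioned above can be sketched as follows. This is a minimal re-implementation in the style of the official SQuAD v1 evaluation script: answers are normalized (lowercased, punctuation and the articles "a/an/the" removed, whitespace collapsed), EM checks for an exact normalized string match, and F1 is a token-overlap harmonic mean of precision and recall. Function names here are illustrative, not taken from any particular library.

```python
import re
import string
from collections import Counter


def normalize(text: str) -> str:
    """Normalize an answer string: lowercase, drop punctuation and
    English articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))


def token_f1(prediction: str, gold: str) -> float:
    """Token-level F1 between normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both answers.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In benchmark practice, each prediction is scored against every reference answer and the maximum per-question score is averaged over the dataset; for example, `exact_match("The Eiffel Tower", "eiffel tower")` scores 1.0 because article removal makes the normalized strings identical.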

Papers

Showing 471–480 of 10817 papers

| Title | Status | Hype |
| --- | --- | --- |
| Contextual Object Detection with Multimodal Large Language Models | Code | 2 |
| BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks | Code | 2 |
| Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models | Code | 2 |
| NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario | Code | 2 |
| The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning | Code | 2 |
| LLMs for Knowledge Graph Construction and Reasoning: Recent Capabilities and Future Opportunities | Code | 2 |
| ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings | Code | 2 |
| Pengi: An Audio Language Model for Audio Tasks | Code | 2 |
| StructGPT: A General Framework for Large Language Model to Reason over Structured Data | Code | 2 |
| OCRBench: On the Hidden Mystery of OCR in Large Multimodal Models | Code | 2 |
Page 48 of 1082

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | IE-Net (ensemble) | EM | 90.94 | | Unverified |
| 2 | FPNet (ensemble) | EM | 90.87 | | Unverified |
| 3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified |
| 4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified |
| 5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified |
| 6 | FPNet (ensemble) | EM | 90.6 | | Unverified |
| 7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified |
| 8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified |
| 9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified |
| 10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified |