SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, and WikiQA, among others. Question answering models are typically evaluated with metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.
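As an illustration of the metrics mentioned above, here is a minimal sketch of SQuAD-style EM and token-level F1 scoring. The normalization steps (lowercasing, stripping punctuation and articles) follow the convention used by the official SQuAD evaluation script; the function names are my own.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Harmonic mean of token-level precision and recall over normalized tokens."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower", "eiffel tower")` returns 1.0 because normalization removes the article and casing, while `f1_score("eiffel tower paris", "eiffel tower")` returns 0.8 (precision 2/3, recall 1).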

(Image credit: SQuAD)

Papers

Showing 25 of 10817 papers

Title | Status | Hype
NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario | Code | 2
Getting MoRE out of Mixture of Language Model Reasoning Experts | — | 0
Cross-lingual QA: A Key to Unlocking In-context Cross-lingual Performance | — | 0
SAIL: Search-Augmented Instruction Learning | — | 0
Peek Across: Improving Multi-Document Modeling via Cross-Document Question-Answering | Code | 0
A Question Answering Framework for Decontextualizing User-facing Snippets from Scientific Documents | — | 0
The Art of SOCRATIC QUESTIONING: Recursive Thinking with Large Language Models | Code | 1
Dynamic Clue Bottlenecks: Towards Interpretable-by-Design Visual Question Answering | — | 0
InteractiveIE: Towards Assessing the Strength of Human-AI Collaboration in Improving the Performance of Information Extraction | — | 0
Dolphin: A Challenging and Diverse Benchmark for Arabic NLG | — | 0
MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions | Code | 1
Allies: Prompting Large Language Model with Beam Search | — | 0
Measuring Faithful and Plausible Visual Grounding in VQA | Code | 0
C-STS: Conditional Semantic Textual Similarity | Code | 1
Visually-Situated Natural Language Understanding with Contrastive Reading Model and Frozen Large Language Models | Code | 1
The Role of Output Vocabulary in T2T LMs for SPARQL Semantic Parsing | Code | 0
Context-Aware Transformer Pre-Training for Answer Sentence Selection | — | 0
Unlocking Temporal Question Answering for Large Language Models with Tailor-Made Reasoning Logic | Code | 0
Learning Answer Generation using Supervision from Automatic Question Answering Evaluators | — | 0
GRILL: Grounded Vision-language Pre-training via Aligning Text and Image Regions | — | 0
Using Natural Language Explanations to Rescale Human Judgments | Code | 0
Meta-Learning Online Adaptation of Language Models | Code | 1
Mitigating Temporal Misalignment by Discarding Outdated Facts | Code | 0
Comparing Humans and Models on a Similar Scale: Towards Cognitive Gender Bias Evaluation in Coreference Resolution | Code | 0
Selectively Answering Ambiguous Questions | — | 0
Page 176 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | — | Unverified
2 | FPNet (ensemble) | EM | 90.87 | — | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | — | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | — | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | — | Unverified
6 | FPNet (ensemble) | EM | 90.6 | — | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | — | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | — | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | — | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | — | Unverified