SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, WikiQA, and many others. Models are typically evaluated on metrics like exact match (EM) and F1. Some recent top-performing models are T5 and XLNet.
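Both metrics are computed per question against the reference answer after light string normalization, then averaged over the dataset. A minimal sketch of SQuAD-style scoring follows (function names are illustrative, not the official evaluation script):

```python
import re
import string
from collections import Counter

def normalize(s: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(pred: str, gold: str) -> int:
    """1 if the normalized prediction equals the normalized reference, else 0."""
    return int(normalize(pred) == normalize(gold))

def f1_score(pred: str, gold: str) -> float:
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    p_toks, g_toks = normalize(pred).split(), normalize(gold).split()
    common = Counter(p_toks) & Counter(g_toks)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p_toks)
    recall = overlap / len(g_toks)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower!", "eiffel tower")` returns 1 because articles and punctuation are stripped during normalization, while `f1_score("eiffel tower paris", "eiffel tower")` gives partial credit (0.8) for the two overlapping tokens. Datasets with multiple reference answers typically take the maximum score over the references.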

(Image credit: SQuAD)

Papers

Showing 5526–5550 of 10817 papers

| Title | Status | Hype |
| --- | --- | --- |
| Cross-lingual QA: A Key to Unlocking In-context Cross-lingual Performance | | 0 |
| A Question Answering Framework for Decontextualizing User-facing Snippets from Scientific Documents | | 0 |
| Dolphin: A Challenging and Diverse Benchmark for Arabic NLG | | 0 |
| Learning Answer Generation using Supervision from Automatic Question Answering Evaluators | | 0 |
| Allies: Prompting Large Language Model with Beam Search | | 0 |
| Peek Across: Improving Multi-Document Modeling via Cross-Document Question-Answering | Code | 0 |
| Mitigating Temporal Misalignment by Discarding Outdated Facts | Code | 0 |
| Unlocking Temporal Question Answering for Large Language Models with Tailor-Made Reasoning Logic | Code | 0 |
| Is Summary Useful or Not? An Extrinsic Human Evaluation of Text Summaries on Downstream Tasks | | 0 |
| Revisiting Sentence Union Generation as a Testbed for Text Consolidation | Code | 0 |
| SAIL: Search-Augmented Instruction Learning | | 0 |
| Dynamic Clue Bottlenecks: Towards Interpretable-by-Design Visual Question Answering | | 0 |
| InteractiveIE: Towards Assessing the Strength of Human-AI Collaboration in Improving the Performance of Information Extraction | | 0 |
| The Role of Output Vocabulary in T2T LMs for SPARQL Semantic Parsing | Code | 0 |
| Getting MoRE out of Mixture of Language Model Reasoning Experts | | 0 |
| Selectively Answering Ambiguous Questions | | 0 |
| Context-Aware Transformer Pre-Training for Answer Sentence Selection | | 0 |
| TACR: A Table-alignment-based Cell-selection and Reasoning Model for Hybrid Question-Answering | | 0 |
| GRILL: Grounded Vision-language Pre-training via Aligning Text and Image Regions | | 0 |
| Comparing Humans and Models on a Similar Scale: Towards Cognitive Gender Bias Evaluation in Coreference Resolution | Code | 0 |
| ToMChallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks for Exploring Theory of Mind | Code | 0 |
| Few-shot Unified Question Answering: Tuning Models or Prompts? | | 0 |
| Exploring Contrast Consistency of Open-Domain Question Answering Systems on Minimally Edited Questions | Code | 0 |
| Evaluating and Modeling Attribution for Cross-Lingual Question Answering | | 0 |
| Few-Shot Data Synthesis for Open Domain Multi-Hop Question Answering | | 0 |
Page 222 of 433

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | IE-Net (ensemble) | EM | 90.94 | | Unverified |
| 2 | FPNet (ensemble) | EM | 90.87 | | Unverified |
| 3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified |
| 4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified |
| 5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified |
| 6 | FPNet (ensemble) | EM | 90.6 | | Unverified |
| 7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified |
| 8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified |
| 9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified |
| 10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified |