SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among others. Question answering models are typically evaluated on metrics such as exact match (EM) and F1. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
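The EM and F1 metrics mentioned above can be made concrete with a short sketch. The following is a minimal, SQuAD-style implementation (lowercasing, stripping punctuation and articles before comparison); the function names are illustrative, not from any particular library.

```python
import re
import string
from collections import Counter

def normalize_answer(s):
    """Lowercase, remove punctuation and articles, collapse whitespace (SQuAD-style)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, ground_truth):
    """EM: 1 if the normalized strings are identical, else 0."""
    return int(normalize_answer(prediction) == normalize_answer(ground_truth))

def f1_score(prediction, ground_truth):
    """Token-level F1 between the normalized prediction and ground truth."""
    pred_tokens = normalize_answer(prediction).split()
    gt_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gt_tokens)  # multiset intersection
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)
```

In benchmark practice, each prediction is scored against all reference answers and the maximum EM/F1 per question is averaged over the dataset.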

Papers

Showing 10576–10600 of 10817 papers

Title | Status | Hype
Who’s on First?: Probing the Learning and Representation Capabilities of Language Models on Deterministic Closed Domains | Code | 0
Table Question Answering for Low-resourced Indic Languages | Code | 0
SCENE: Self-Labeled Counterfactuals for Extrapolating to Negative Examples | Code | 0
Wrong Answers Can Also Be Useful: PlausibleQA -- A Large-Scale QA Dataset with Answer Plausibility Scores | Code | 0
Uncertainty Guided Global Memory Improves Multi-Hop Question Answering | Code | 0
Table2answer: Read the database and answer without SQL | Code | 0
Self-Training Meets Consistency: Improving LLMs' Reasoning With Consistency-Driven Rationale Evaluation | Code | 0
SlotPi: Physics-informed Object-centric Reasoning Models | Code | 0
Topic Transferable Table Question Answering | Code | 0
Uncertainty Quantification in Retrieval Augmented Question Answering | Code | 0
WARP-Text: a Web-Based Tool for Annotating Relationships between Pairs of Texts | Code | 0
Zero-shot Commonsense Question Answering with Cloze Translation and Consistency Optimization | Code | 0
Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models | Code | 0
WXImpactBench: A Disruptive Weather Impact Understanding Benchmark for Evaluating Large Language Models | Code | 0
Zero-shot Commonsense Reasoning over Machine Imagination | Code | 0
Self Supervision for Attention Networks | Code | 0
Uncovering Hidden Connections: Iterative Search and Reasoning for Video-grounded Dialog | Code | 0
Uncovering Hidden Semantics of Set Information in Knowledge Bases | Code | 0
T2I-FineEval: Fine-Grained Compositional Metric for Text-to-Image Evaluation | Code | 0
Uncovering the Full Potential of Visual Grounding Methods in VQA | Code | 0
ToMChallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks for Exploring Theory of Mind | Code | 0
Systematic Inequalities in Language Technology Performance across the World’s Languages | Code | 0
Zero-Shot Complex Question-Answering on Long Scientific Documents | Code | 0
Weak-eval-Strong: Evaluating and Eliciting Lateral Thinking of LLMs with Situation Puzzles | Code | 0
Understanding Attention for Vision-and-Language Tasks | Code | 0
Page 424 of 433

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified