SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, WikiQA, and many others. Question answering models are typically evaluated on metrics like exact match (EM) and F1. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
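The EM and F1 metrics mentioned above follow the SQuAD evaluation convention: answers are normalized (lowercased, with punctuation, articles, and extra whitespace removed), EM checks for an exact string match after normalization, and F1 measures token overlap between prediction and gold answer. A minimal sketch of that convention (function names are illustrative, not from any particular library):

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, strip punctuation,
    drop English articles, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower", "eiffel tower")` returns 1.0 because article removal and lowercasing make the strings identical, while a partially overlapping prediction earns a fractional F1. Leaderboards typically take the maximum score over all gold answers for each question, then average over the dataset.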

Papers

Showing 6001–6025 of 10817 papers

- M3SciQA: A Multi-Modal Multi-Document Scientific QA Benchmark for Evaluating Foundation Models
- M4CXR: Exploring Multi-task Potentials of Multi-modal Large Language Models for Chest X-ray Interpretation
- Hyperbolic Attention Networks
- Dual Embeddings and Metrics for Relational Similarity
- HyPA-RAG: A Hybrid Parameter Adaptive Retrieval-Augmented Generation System for AI Legal and Policy Applications
- Machine Comprehension with Discourse Relations
- Machine Comprehension with Syntax, Frames, and Semantics
- Machine Knowledge: Creation and Curation of Comprehensive Knowledge Bases
- Controlled Natural Languages and Default Reasoning
- Machine Reading Comprehension: Generative or Extractive Reader?
- Machine Translation Evaluation for Arabic using Morphologically-enriched Embeddings
- Machine Translation Evaluation Meets Community Question Answering
- Hybrid-SQuAD: Hybrid Scholarly Question Answering Dataset
- Macquarie University at BioASQ 5b -- Query-based Summarisation Techniques for Selecting the Ideal Answers
- Hybrid Question Answering over Knowledge Base and Free Text
- Hybrid Graphs for Table-and-Text based Question Answering using LLMs
- DUAL: Textless Spoken Question Answering with Speech Discrete Unit Adaptive Learning
- MagiC: Evaluating Multimodal Cognition Toward Grounded Visual Reasoning
- Contri(e)ve: Context + Retrieve for Scholarly Question Answering
- Magnitude Pruning of Large Pretrained Transformer Models with a Mixture Gaussian Prior
- A Syntactic Approach to Domain-Specific Automatic Question Generation
- An Adaption of BIOASQ Question Answering dataset for Machine Reading systems by Manual Annotations of Answer Spans.
- Contributions to the Improvement of Question Answering Systems in the Biomedical Domain
- Make Every Example Count: On the Stability and Utility of Self-Influence for Learning from Noisy NLP Datasets
- HybGRAG: Hybrid Retrieval-Augmented Generation on Textual and Relational Knowledge Bases

Benchmark Results

| #  | Model                                          | Metric | Claimed | Verified | Status     |
|----|------------------------------------------------|--------|---------|----------|------------|
| 1  | IE-Net (ensemble)                              | EM     | 90.94   |          | Unverified |
| 2  | FPNet (ensemble)                               | EM     | 90.87   |          | Unverified |
| 3  | IE-NetV2 (ensemble)                            | EM     | 90.86   |          | Unverified |
| 4  | SA-Net on Albert (ensemble)                    | EM     | 90.72   |          | Unverified |
| 5  | SA-Net-V2 (ensemble)                           | EM     | 90.68   |          | Unverified |
| 6  | FPNet (ensemble)                               | EM     | 90.6    |          | Unverified |
| 7  | Retro-Reader (ensemble)                        | EM     | 90.58   |          | Unverified |
| 8  | EntitySpanFocusV2 (ensemble)                   | EM     | 90.52   |          | Unverified |
| 9  | TransNets + SFVerifier + SFEnsembler (ensemble)| EM     | 90.49   |          | Unverified |
| 10 | EntitySpanFocus+AT (ensemble)                  | EM     | 90.45   |          | Unverified |