SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among many others. Models are typically evaluated on metrics such as exact match (EM) and F1. Recent top-performing models include T5 and XLNet.
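To make the EM and F1 metrics concrete, here is a minimal sketch of how they are commonly computed for extractive QA, following the answer-normalization logic popularized by the SQuAD evaluation script (lowercase, strip punctuation and articles, collapse whitespace). This is an illustrative implementation, not the official evaluation code:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, remove punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    """Token-level F1 over the normalized prediction and reference."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))      # 1.0
print(round(f1_score("in Paris, France", "Paris"), 2))      # 0.5
```

Normalization is why "The Eiffel Tower" scores EM = 1.0 against "eiffel tower": casing, the article, and punctuation are discarded before comparison, so only the substantive answer tokens matter.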

(Image credit: SQuAD)

Papers

Showing 151–200 of 10,817 papers

Title | Status | Hype
VideoCAD: A Large-Scale Video Dataset for Learning UI Interactions and 3D Reasoning from CAD Software | Code | 1
Exploring the Impact of Occupational Personas on Domain-Specific QA | - | 0
Grid-LOGAT: Grid Based Local and Global Area Transcription for Video Question Answering | - | 0
Light as Deception: GPT-driven Natural Relighting Against Vision-Language Pre-training Models | - | 0
A Simple Linear Patch Revives Layer-Pruned Large Language Models | - | 0
Vision LLMs Are Bad at Hierarchical Visual Understanding, and LLMs Are the Bottleneck | - | 0
Drop Dropout on Single-Epoch Language Model Pretraining | Code | 0
LGAR: Zero-Shot LLM-Guided Neural Ranking for Abstract Screening in Systematic Literature Reviews | Code | 0
Improving Reliability and Explainability of Medical Question Answering through Atomic Fact Checking in Retrieval-Augmented LLMs | - | 0
Revisiting Epistemic Markers in Confidence Estimation: Can Markers Accurately Reflect Large Language Models' Uncertainty? | Code | 0
Reinforcement Learning for Better Verbalized Confidence in Long-Form Generation | - | 0
mRAG: Elucidating the Design Space of Multi-modal Retrieval-Augmented Generation | - | 0
TCM-Ladder: A Benchmark for Multimodal Question Answering on Traditional Chinese Medicine | - | 0
MedPAIR: Measuring Physicians and AI Relevance Alignment in Medical Question Answering | - | 0
VF-Eval: Evaluating Multimodal LLMs for Generating Feedback on AIGC Videos | Code | 0
Fortune: Formula-Driven Reinforcement Learning for Symbolic Table Reasoning in Language Models | - | 0
Impromptu VLA: Open Weights and Open Data for Driving Vision-Language-Action Models | Code | 3
Diagnosing and Addressing Pitfalls in KG-RAG Datasets: Toward More Reliable Benchmarking | - | 0
VAU-R1: Advancing Video Anomaly Understanding via Reinforcement Fine-Tuning | Code | 2
Data-efficient Meta-models for Evaluation of Context-based Questions and Answers in LLMs | - | 0
From Chat Logs to Collective Insights: Aggregative Question Answering | - | 0
ChartMind: A Comprehensive Benchmark for Complex Real-world Multimodal Chart Question Answering | - | 0
Puzzled by Puzzles: When Vision-Language Models Can't Take a Hint | Code | 1
Let's Reason Formally: Natural-Formal Hybrid Reasoning Enhances LLM's Math Capability | - | 0
QLIP: A Dynamic Quadtree Vision Prior Enhances MLLM Performance Without Retraining | Code | 0
Differential Information: An Information-Theoretic Perspective on Preference Optimization | - | 0
Spoken question answering for visual queries | - | 0
Multi-Sourced Compositional Generalization in Visual Question Answering | Code | 0
Interpreting Chest X-rays Like a Radiologist: A Benchmark with Clinical Reasoning | Code | 1
KVzip: Query-Agnostic KV Cache Compression with Context Reconstruction | Code | 3
Synthetic Document Question Answering in Hungarian | Code | 0
3DLLM-Mem: Long-Term Spatial-Temporal Memory for Embodied 3D Large Language Model | - | 0
EvolveSearch: An Iterative Self-Evolving Search Agent | - | 0
Enhancing Paraphrase Type Generation: The Impact of DPO and RLHF Evaluated with Human-Ranked Data | Code | 0
Read Your Own Mind: Reasoning Helps Surface Self-Confidence Signals in LLMs | - | 0
Improving QA Efficiency with DistilBERT: Fine-Tuning and Inference on mobile Intel CPUs | - | 0
Climate Finance Bench | Code | 0
ER-REASON: A Benchmark Dataset for LLM-Based Clinical Reasoning in the Emergency Room | - | 0
Structured Memory Mechanisms for Stable Context Representation in Large Language Models | - | 0
NegVQA: Can Vision Language Models Understand Negation? | - | 0
StressTest: Can YOUR Speech LM Handle the Stress? | - | 0
VIGNETTE: Socially Grounded Bias Evaluation for Vision-Language Models | Code | 0
Agent-UniRAG: A Trainable Open-Source LLM Agent Framework for Unified Retrieval-Augmented Generation Systems | - | 0
DriveRX: A Vision-Language Reasoning Model for Cross-Task Autonomous Driving | - | 0
FRAMES-VQA: Benchmarking Fine-Tuning Robustness across Multi-Modal Shifts in Visual Question Answering | Code | 0
Rethinking Information Synthesis in Multimodal Question Answering: A Multi-Agent Perspective | - | 0
DynamicVL: Benchmarking Multimodal Large Language Models for Dynamic City Understanding | - | 0
Silence is Not Consensus: Disrupting Agreement Bias in Multi-Agent LLMs via Catfish Agent for Clinical Decision Making | - | 0
Understand, Think, and Answer: Advancing Visual Reasoning with Large Multimodal Models | - | 0
SOSBENCH: Benchmarking Safety Alignment on Scientific Knowledge | - | 0
Page 4 of 217

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | - | Unverified
2 | FPNet (ensemble) | EM | 90.87 | - | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | - | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | - | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | - | Unverified
6 | FPNet (ensemble) | EM | 90.6 | - | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | - | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | - | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | - | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | - | Unverified