SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, WikiQA, and many others. Models for question answering are typically evaluated on metrics such as exact match (EM) and F1. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
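The EM and F1 metrics mentioned above are typically computed SQuAD-style: answers are normalized (lowercased, punctuation and articles stripped), EM checks for an exact string match, and F1 measures token overlap between prediction and reference. A minimal sketch of these two metrics (function names are illustrative, not from any specific library):

```python
import re
import string
from collections import Counter

def normalize_answer(s):
    """SQuAD-style normalization: lowercase, drop punctuation,
    articles (a/an/the), and extra whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, ground_truth):
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))

def f1_score(prediction, ground_truth):
    """Token-level F1 over the bag of normalized answer tokens."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In multi-reference settings (SQuAD provides several gold answers per question), both metrics are usually taken as the maximum over all references.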

Papers

Showing 201–250 of 10,817 papers

Title | Status | Hype
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Code | 3
Attention Is All You Need | Code | 3
LLaVA-Scissor: Token Compression with Semantic Connected Components for Video LLMs | Code | 2
video-SALMONN 2: Captioning-Enhanced Audio-Visual Large Language Models | Code | 2
TableRAG: A Retrieval Augmented Generation Framework for Heterogeneous Document Reasoning | Code | 2
CausalVQA: A Physically Grounded Causal Reasoning Benchmark for Video Models | Code | 2
ReasonMed: A 370K Multi-Agent Generated Dataset for Advancing Medical Reasoning | Code | 2
FlagEvalMM: A Flexible Framework for Comprehensive Multimodal Model Evaluation | Code | 2
Reasoning-Table: Exploring Reinforcement Learning for Table Reasoning | Code | 2
VAU-R1: Advancing Video Anomaly Understanding via Reinforcement Fine-Tuning | Code | 2
Large Language Models Meet Knowledge Graphs for Question Answering: Synthesis and Opportunities | Code | 2
MASKSEARCH: A Universal Pre-Training Framework to Enhance Agentic Search Capability | Code | 2
DoctorAgent-RL: A Multi-Agent Collaborative Reinforcement Learning System for Multi-Turn Clinical Dialogue | Code | 2
VTool-R1: VLMs Learn to Think with Images via Reinforcement Learning on Multimodal Tool Use | Code | 2
DanmakuTPPBench: A Multi-modal Benchmark for Temporal Point Process Modeling and Understanding | Code | 2
SpatialScore: Towards Unified Evaluation for Multimodal Spatial Understanding | Code | 2
Learnware of Language Models: Specialized Small Language Models Can Do Big | Code | 2
Patho-R1: A Multimodal Reinforcement Learning-Based Pathology Expert Reasoner | Code | 2
EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning | Code | 2
UniversalRAG: Retrieval-Augmented Generation over Corpora of Diverse Modalities and Granularities | Code | 2
FinBERT-QA: Financial Question Answering with pre-trained BERT Language Models | Code | 2
TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning | Code | 2
MedM-VL: What Makes a Good Medical LVLM? | Code | 2
FortisAVQA and MAVEN: a Benchmark Dataset and Debiasing Framework for Robust Multimodal Reasoning | Code | 2
Unified Multimodal Discrete Diffusion | Code | 2
Med3DVLM: An Efficient Vision-Language Model for 3D Medical Image Analysis | Code | 2
MC-LLaVA: Multi-Concept Personalized Vision-Language Model | Code | 2
LLaVAction: evaluating and training multi-modal large language models for action recognition | Code | 2
Chain-of-Tools: Utilizing Massive Unseen Tools in the CoT Reasoning of Frozen Language Models | Code | 2
Where do Large Vision-Language Models Look at when Answering Questions? | Code | 2
DriveLMM-o1: A Step-by-Step Reasoning Dataset and Large Multimodal Model for Driving Scenario Understanding | Code | 2
Teaching LMMs for Image Quality Scoring and Interpreting | Code | 2
A Multimodal Benchmark Dataset and Model for Crop Disease Diagnosis | Code | 2
MedAgentsBench: Benchmarking Thinking Models and Agent Frameworks for Complex Medical Reasoning | Code | 2
Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model | Code | 2
AnyAnomaly: Zero-Shot Customizable Video Anomaly Detection with LVLM | Code | 2
SemViQA: A Semantic Question Answering System for Vietnamese Information Fact-Checking | Code | 2
Streaming Video Question-Answering with In-context Video KV-Cache Retrieval | Code | 2
LevelRAG: Enhancing Retrieval-Augmented Generation with Multi-hop Logic Planning over Rewriting Augmented Searchers | Code | 2
Benchmarking Retrieval-Augmented Generation in Multi-Modal Contexts | Code | 2
Multimodal RewardBench: Holistic Evaluation of Reward Models for Vision Language Models | Code | 2
Re-Align: Aligning Vision Language Models via Retrieval-Augmented Direct Preference Optimization | Code | 2
Talk Structurally, Act Hierarchically: A Collaborative Framework for LLM Multi-Agent Systems | Code | 2
SVBench: A Benchmark with Temporal Multi-Turn Dialogues for Streaming Video Understanding | Code | 2
KET-RAG: A Cost-Efficient Multi-Granular Indexing Framework for Graph-RAG | Code | 2
ScoreFlow: Mastering LLM Agent Workflows via Score-based Preference Optimization | Code | 2
LUCY: Linguistic Understanding and Control Yielding Early Stage of Her | Code | 2
Improving Retrieval-Augmented Generation through Multi-Agent Reinforcement Learning | Code | 2
Analyzing and Boosting the Power of Fine-Grained Visual Recognition for Multi-modal Large Language Models | Code | 2
EmbodiedEval: Evaluate Multimodal LLMs as Embodied Agents | Code | 2
Page 5 of 217

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 | | Unverified
2 | FPNet (ensemble) | EM | 90.87 | | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 | | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 | | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 | | Unverified
6 | FPNet (ensemble) | EM | 90.6 | | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 | | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 | | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 | | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 | | Unverified