SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks like community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, WikiQA, and many others. Question answering models are typically evaluated on metrics like exact match (EM) and F1. Recent top-performing models include T5 and XLNet.

(Image credit: SQuAD)
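The EM and F1 metrics mentioned above can be sketched in a few lines. This follows the SQuAD evaluation convention (lowercasing, stripping punctuation and English articles before comparison); the function names here are illustrative, not from any particular library.

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation and
    articles (a/an/the), collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both strings.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower", "eiffel tower")` is 1.0 because normalization removes the article and case difference, while `f1_score("the Eiffel Tower in Paris", "Eiffel Tower")` is about 0.67 (two overlapping tokens out of four predicted, two gold). Benchmark leaderboards report both averaged over the full test set.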

Papers

Showing 2901–2950 of 10817 papers

Title | Status | Hype
ER-RAG: Enhance RAG with ER-Based Unified Modeling of Heterogeneous Data Sources |  | 0
CL-MoE: Enhancing Multimodal Large Language Model with Dual Momentum Mixture-of-Experts for Continual Visual Question Answering |  | 0
AILS-NTUA at SemEval-2025 Task 8: Language-to-Code prompting and Error Fixing for Tabular Question Answering | Code | 0
GlossGPT: GPT for Word Sense Disambiguation using Few-shot Chain-of-Thought Prompting | Code | 0
PreMind: Multi-Agent Video Understanding for Advanced Indexing of Presentation-style Videos |  | 0
TempRetriever: Fusion-based Temporal Dense Passage Retrieval for Time-Sensitive Questions |  | 0
MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models | Code | 0
Fine-Grained Retrieval-Augmented Generation for Visual Question Answering |  | 0
WebFAQ: A Multilingual Collection of Natural Q&A Datasets for Dense Retrieval |  | 0
Can Large Language Models Unveil the Mysteries? An Exploration of Their Ability to Unlock Information in Complex Scenarios |  | 0
Bisecting K-Means in RAG for Enhancing Question-Answering Tasks Performance in Telecommunications |  | 0
M-LLM Based Video Frame Selection for Efficient Video Understanding |  | 0
Med-RLVR: Emerging Medical Reasoning from a 3B base model via reinforcement Learning |  | 0
From Retrieval to Generation: Comparing Different Approaches |  | 0
Few-Shot Multilingual Open-Domain QA from 5 Examples | Code | 0
Protecting multimodal large language models against misleading visualizations | Code | 0
Nexus: An Omni-Perceptive And -Interactive Model for Language, Audio, And Vision |  | 0
MEBench: Benchmarking Large Language Models for Cross-Document Multi-Entity Question Answering |  | 0
MedVLM-R1: Incentivizing Medical Reasoning Capability of Vision-Language Models (VLMs) via Reinforcement Learning |  | 0
Winning Big with Small Models: Knowledge Distillation vs. Self-Training for Reducing Hallucination in QA Agents |  | 0
Time-MQA: Time Series Multi-Task Question Answering with Context Enhancement |  | 0
Talking to the brain: Using Large Language Models as Proxies to Model Brain Semantic Representation |  | 0
END: Early Noise Dropping for Efficient and Effective Context Denoising |  | 0
Uncertainty Quantification in Retrieval Augmented Question Answering | Code | 0
Tip of the Tongue Query Elicitation for Simulated Evaluation | Code | 0
Say Less, Mean More: Leveraging Pragmatics in Retrieval-Augmented Generation |  | 0
Detecting Knowledge Boundary of Vision Large Language Models by Sampling-Based Inference | Code | 0
FilterRAG: Zero-Shot Informed Retrieval-Augmented Generation to Mitigate Hallucinations in VQA |  | 0
SECURA: Sigmoid-Enhanced CUR Decomposition with Uninterrupted Retention and Low-Rank Adaptation in Large Language Models |  | 0
KiRAG: Knowledge-Driven Iterative Retriever for Enhancing Retrieval-Augmented Generation |  | 0
All-in-one: Understanding and Generation in Multimodal Reasoning with the MAIA Benchmark |  | 0
Evaluating the Effect of Retrieval Augmentation on Social Biases |  | 0
AAD-LLM: Neural Attention-Driven Auditory Scene Understanding |  | 0
MULTITAT: Benchmarking Multilingual Table-and-Text Question Answering | Code | 0
MultiOCR-QA: Dataset for Evaluating Robustness of LLMs in Question Answering on Multilingual OCR Texts | Code | 0
Retrieval-Augmented Visual Question Answering via Built-in Autoregressive Search Engines |  | 0
Tracking the Copyright of Large Vision-Language Models through Parameter Learning Adversarial Images |  | 0
Visual-RAG: Benchmarking Text-to-Image Retrieval Augmented Generation for Visual Knowledge Intensive Queries | Code | 0
MQADet: A Plug-and-Play Paradigm for Enhancing Open-Vocabulary Object Detection via Multimodal Question Answering |  | 0
Wrong Answers Can Also Be Useful: PlausibleQA -- A Large-Scale QA Dataset with Answer Plausibility Scores | Code | 0
Uncertainty-Aware Fusion: An Ensemble Framework for Mitigating Hallucinations in Large Language Models |  | 0
EPERM: An Evidence Path Enhanced Reasoning Model for Knowledge Graph Question and Answering |  | 0
Echo: A Large Language Model with Temporal Episodic Memory |  | 0
MHQA: A Diverse, Knowledge Intensive Mental Health Question Answering Challenge for Language Models |  | 0
Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path? |  | 0
Chats-Grid: An Iterative Retrieval Q&A Optimization Scheme Leveraging Large Model and Retrieval Enhancement Generation in smart grid |  | 0
Empowering LLMs with Logical Reasoning: A Comprehensive Survey |  | 0
TransMamba: Fast Universal Architecture Adaption from Transformers to Mamba |  | 0
Improving Consistency in Large Language Models through Chain of Guidance | Code | 0
Directional Gradient Projection for Robust Fine-Tuning of Foundation Models |  | 0
Page 59 of 217

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | IE-Net (ensemble) | EM | 90.94 |  | Unverified
2 | FPNet (ensemble) | EM | 90.87 |  | Unverified
3 | IE-NetV2 (ensemble) | EM | 90.86 |  | Unverified
4 | SA-Net on Albert (ensemble) | EM | 90.72 |  | Unverified
5 | SA-Net-V2 (ensemble) | EM | 90.68 |  | Unverified
6 | FPNet (ensemble) | EM | 90.6 |  | Unverified
7 | Retro-Reader (ensemble) | EM | 90.58 |  | Unverified
8 | EntitySpanFocusV2 (ensemble) | EM | 90.52 |  | Unverified
9 | TransNets + SFVerifier + SFEnsembler (ensemble) | EM | 90.49 |  | Unverified
10 | EntitySpanFocus+AT (ensemble) | EM | 90.45 |  | Unverified