SOTAVerified

Question Answering

Question answering can be segmented into domain-specific tasks such as community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, and WikiQA, among others. Models are typically evaluated on exact match (EM) and F1 score. Recent top-performing models include T5 and XLNet.
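
The EM and F1 metrics mentioned above are conventionally computed on normalized answer strings. A minimal sketch, following the SQuAD-style normalization convention (lowercasing, stripping punctuation and the articles "a", "an", "the", collapsing whitespace); function names are illustrative, not from any particular library:

```python
import re
import string
from collections import Counter


def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD convention)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, gold: str) -> int:
    """EM: 1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(gold))


def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In benchmark evaluation each prediction is usually scored against several reference answers and the maximum per-question score is taken before averaging over the dataset.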

(Image credit: SQuAD)

Papers

Showing 2901–2925 of 10817 papers

- ER-RAG: Enhance RAG with ER-Based Unified Modeling of Heterogeneous Data Sources
- CL-MoE: Enhancing Multimodal Large Language Model with Dual Momentum Mixture-of-Experts for Continual Visual Question Answering
- GlossGPT: GPT for Word Sense Disambiguation using Few-shot Chain-of-Thought Prompting [Code]
- AILS-NTUA at SemEval-2025 Task 8: Language-to-Code prompting and Error Fixing for Tabular Question Answering [Code]
- PreMind: Multi-Agent Video Understanding for Advanced Indexing of Presentation-style Videos
- TempRetriever: Fusion-based Temporal Dense Passage Retrieval for Time-Sensitive Questions
- MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models [Code]
- WebFAQ: A Multilingual Collection of Natural Q&A Datasets for Dense Retrieval
- Fine-Grained Retrieval-Augmented Generation for Visual Question Answering
- Med-RLVR: Emerging Medical Reasoning from a 3B base model via reinforcement Learning
- M-LLM Based Video Frame Selection for Efficient Video Understanding
- Can Large Language Models Unveil the Mysteries? An Exploration of Their Ability to Unlock Information in Complex Scenarios
- Protecting multimodal large language models against misleading visualizations [Code]
- Bisecting K-Means in RAG for Enhancing Question-Answering Tasks Performance in Telecommunications
- Few-Shot Multilingual Open-Domain QA from 5 Examples [Code]
- From Retrieval to Generation: Comparing Different Approaches
- MEBench: Benchmarking Large Language Models for Cross-Document Multi-Entity Question Answering
- Winning Big with Small Models: Knowledge Distillation vs. Self-Training for Reducing Hallucination in QA Agents
- END: Early Noise Dropping for Efficient and Effective Context Denoising
- Time-MQA: Time Series Multi-Task Question Answering with Context Enhancement
- Talking to the brain: Using Large Language Models as Proxies to Model Brain Semantic Representation
- Nexus: An Omni-Perceptive And -Interactive Model for Language, Audio, And Vision
- MedVLM-R1: Incentivizing Medical Reasoning Capability of Vision-Language Models (VLMs) via Reinforcement Learning
- KiRAG: Knowledge-Driven Iterative Retriever for Enhancing Retrieval-Augmented Generation
- Tip of the Tongue Query Elicitation for Simulated Evaluation [Code]
Page 117 of 433

Benchmark Results

| #  | Model                                            | Metric | Claimed | Verified | Status     |
|----|--------------------------------------------------|--------|---------|----------|------------|
| 1  | IE-Net (ensemble)                                | EM     | 90.94   |          | Unverified |
| 2  | FPNet (ensemble)                                 | EM     | 90.87   |          | Unverified |
| 3  | IE-NetV2 (ensemble)                              | EM     | 90.86   |          | Unverified |
| 4  | SA-Net on Albert (ensemble)                      | EM     | 90.72   |          | Unverified |
| 5  | SA-Net-V2 (ensemble)                             | EM     | 90.68   |          | Unverified |
| 6  | FPNet (ensemble)                                 | EM     | 90.6    |          | Unverified |
| 7  | Retro-Reader (ensemble)                          | EM     | 90.58   |          | Unverified |
| 8  | EntitySpanFocusV2 (ensemble)                     | EM     | 90.52   |          | Unverified |
| 9  | TransNets + SFVerifier + SFEnsembler (ensemble)  | EM     | 90.49   |          | Unverified |
| 10 | EntitySpanFocus+AT (ensemble)                    | EM     | 90.45   |          | Unverified |